-
241
RCSAN residual enhanced channel spatial attention network for stock price forecasting
Published 2025-07-01“…The model reduces RMSE by 17.3–49.3% compared to traditional methods and 6.2–11.6% compared to Transformer variants, with the highest R² reaching 93.17% and an increase in return on investment to 482.64%. …”
Get full text
Article -
242
Research on Shale Oil Well Productivity Prediction Model Based on CNN-BiGRU Algorithm
Published 2025-05-01“…The CNN-BiGRU model was implemented on the TensorFlow framework, with rigorous validation of model robustness and systematic evaluation of feature importance. Hyperparameter optimization via grid searching yielded optimal configurations, while field applications demonstrated operational feasibility. …”
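The snippet above mentions hyperparameter optimization via grid searching. As a minimal sketch (not the paper's actual search space or training code), an exhaustive grid search reduces to enumerating every combination and keeping the one with the lowest validation score; `toy_score` below is a hypothetical stand-in for training the CNN-BiGRU and returning its validation loss.

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Evaluate every hyperparameter combination and return the
    one with the lowest validation score."""
    best_params, best_score = None, float("inf")
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        score = score_fn(params)  # e.g. validation loss of a trained model
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical stand-in for "train model, return validation loss".
toy_score = lambda p: abs(p["lr"] - 0.01) + abs(p["units"] - 64) / 64

grid = {"lr": [0.1, 0.01, 0.001], "units": [32, 64, 128]}
best, _ = grid_search(grid, toy_score)
```

In practice the same loop is usually delegated to a library utility (e.g. scikit-learn's `GridSearchCV`), but the underlying enumeration is exactly this.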
Get full text
Article -
243
MRMS-CNNFormer: A Novel Framework for Predicting the Biochemical Recurrence of Prostate Cancer on Multi-Sequence MRI
Published 2025-05-01“…Accurate preoperative prediction of biochemical recurrence (BCR) in prostate cancer (PCa) is essential for treatment optimization, and demands an explicit focus on tumor microenvironment (TME). …”
Get full text
Article -
244
PBX micro defect characterization by using deep learning and image processing of micro CT images
Published 2025-06-01“…The PBX_SegNet is built on the encoder–decoder architecture of U-Net. We optimize the structure of skip connection in PBX_SegNet and introduce a concurrent spatial and channel squeeze and excitation (SCSE) module on each stage in the encoder network and in the decoder network. …”
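The concurrent spatial and channel squeeze-and-excitation (SCSE) module named above can be sketched as follows, assuming the standard formulation (channel gate from a global-average-pooled bottleneck MLP, spatial gate from a per-pixel 1×1 projection, the two recalibrated maps combined element-wise); weights and shapes here are illustrative, not the paper's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scse(feat, w1, w2, w_spatial):
    """Concurrent spatial and channel squeeze-and-excitation on an
    (H, W, C) feature map; both gates lie in [0, 1]."""
    # Channel SE: global average pool -> bottleneck MLP -> per-channel gate
    z = feat.mean(axis=(0, 1))                       # (C,)
    gate_c = sigmoid(np.maximum(z @ w1, 0.0) @ w2)   # (C,)
    cse = feat * gate_c
    # Spatial SE: 1x1 "conv" (per-pixel linear map over channels) -> gate
    gate_s = sigmoid(feat @ w_spatial)               # (H, W)
    sse = feat * gate_s[..., None]
    return np.maximum(cse, sse)  # combine the two recalibrated maps

rng = np.random.default_rng(0)
H, W, C, r = 4, 4, 8, 2  # r is the channel-bottleneck reduction ratio
feat = rng.standard_normal((H, W, C))
out = scse(feat,
           rng.standard_normal((C, C // r)),
           rng.standard_normal((C // r, C)),
           rng.standard_normal(C))
```

The element-wise `max` combination follows the original SCSE proposal; addition is a common alternative.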
Get full text
Article -
245
A fake news detection model using the integration of multimodal attention mechanism and residual convolutional network
Published 2025-07-01“…Baseline models used for comparison include Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized Bidirectional Encoder Representations from Transformers Approach (RoBERTa), Generalized Autoregressive Pretraining for Language Understanding (XLNet), Enhanced Representation through Knowledge Integration (ERNIE), and Generative Pre-trained Transformer 3.5 (GPT-3.5). …”
Get full text
Article -
246
Fast adaptive mode decision algorithm for H.264 based on spatial correlation
Published 2006-01-01“…A fast adaptive inter prediction mode decision method was proposed to reduce the complexity of the H.264 encoder. First, the candidate inter modes used in rate-distortion optimization are limited to a small mode group (MG) by using the characteristics of the motion-compensated residual image. Then the two most probable modes of the chosen MG are obtained based on the modes of the upper MB and the left MB. By calculating and comparing the rate-distortion cost of the two modes, the optimum mode of the MB is determined. The experimental results show that the proposed method can save encoding time by up to 65% on average with …”
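The final selection step described in the snippet is standard Lagrangian rate-distortion optimization: each surviving candidate mode is scored as J = D + λ·R and the cheapest one wins. A minimal sketch, with hypothetical distortion/rate values and an illustrative λ (not values from the paper):

```python
LAMBDA = 0.85  # illustrative Lagrange multiplier

def rd_cost(distortion, rate_bits, lam=LAMBDA):
    # Classic rate-distortion cost: J = D + lambda * R
    return distortion + lam * rate_bits

def choose_mode(candidates, lam=LAMBDA):
    """Pick the minimum-RD-cost mode from a restricted candidate set,
    e.g. the two most probable modes of the chosen mode group."""
    return min(candidates, key=lambda m: rd_cost(*candidates[m], lam))

# Hypothetical (distortion, rate-in-bits) pairs for two candidate modes.
candidates = {"16x16": (120.0, 40), "8x8": (95.0, 75)}
best_mode = choose_mode(candidates)  # 16x16: J = 154.0, 8x8: J = 158.75
```

The speed-up in such schemes comes from shrinking `candidates` before this comparison, not from changing the cost function itself.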
Get full text
Article -
247
A lightweight mechanism for vision-transformer-based object detection
Published 2025-05-01“…Through computational optimization, XFCOS reduces encoder FLOPs to 13.5G, representing a 17.2% decrease compared to TSP-FCOS’s 16.3G, while cutting activation memory from 285.78 to 264.64M, a reduction of 7.4%. …”
Get full text
Article -
248
From Coarse to Crisp: Enhancing Tree Species Maps with Deep Learning and Satellite Imagery
Published 2025-06-01“…Applying the proposed methodology to Sobaeksan and Jirisan National Parks in South Korea, the performance of various machine learning (ML) and deep learning (DL) models was compared, including traditional ML (linear regression, random forest) and DL architectures (multilayer perceptron (MLP), spectral encoder block (SEB)—linear, and SEB-transformer). …”
Get full text
Article -
249
An Evolutionary Deep Reinforcement Learning-Based Framework for Efficient Anomaly Detection in Smart Power Distribution Grids
Published 2025-05-01“…The proposed DRL-NSABC model is evaluated using four benchmark datasets: smart grid, advanced metering infrastructure (AMI), smart meter, and Pecan Street, widely recognized in anomaly detection research. A comparative analysis against state-of-the-art deep learning (DL) models, including RL, CNN, RNN, the generative adversarial network (GAN), the time-series transformer (TST), and bidirectional encoder representations from transformers (BERT), demonstrates the superiority of the proposed DRL-NSABC. …”
Get full text
Article -
250
A Near-Infrared Imaging System for Robotic Venous Blood Collection
Published 2024-11-01“…The U-Net+ResNet18 neural network integrates the residual blocks from ResNet18 into the encoder of the U-Net to form a new neural network. …”
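The basic unit that ResNet18 contributes to the U-Net encoder described above is the two-layer residual block with an identity shortcut. A minimal numpy sketch, using 1×1 per-channel linear maps as stand-ins for the actual 3×3 convolutions and batch normalization:

```python
import numpy as np

def residual_block(x, w1, w2):
    """Two-layer residual unit with an identity shortcut:
    out = ReLU(x + F(x)), where F is the learned residual branch."""
    h = np.maximum(x @ w1, 0.0)         # first transform + ReLU
    return np.maximum(x + h @ w2, 0.0)  # add the skip path, then ReLU

rng = np.random.default_rng(1)
C = 16
x = rng.standard_normal((8, 8, C))      # an (H, W, C) feature map
out = residual_block(x,
                     rng.standard_normal((C, C)),
                     rng.standard_normal((C, C)))
```

The identity shortcut is what lets gradients bypass the learned branch, which is why swapping residual blocks into a U-Net encoder tends to ease training of deeper segmentation networks.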
Get full text
Article -
251
Embedded Image and Video Coding Algorithm Based on Adaptive Filtering Equation
Published 2021-01-01“…The algorithm is imported into the encoder for video capture and encoding. By capturing videos of different formats, resolutions, and durations, the sizes of the video files collected before and after the algorithm optimization are compared, as is the memory space the video files occupy in the actual system after optimization. …”
Get full text
Article -
252
GLN-LRF: global learning network based on large receptive fields for hyperspectral image classification
Published 2025-05-01“…The proposed GLNet adopts an encoder-decoder architecture with skip connections. …”
Get full text
Article -
253
HR Management Big Data Mining Based on Computational Intelligence and Deep Learning
Published 2021-01-01“…Finally, extensive experiments on real-world HR data sets clearly validate the effectiveness and interpretability of the proposed framework and its variants compared to state-of-the-art benchmarks.…”
Get full text
Article -
254
A VVC intra coding method based on fast partition for coding unit
Published 2024-08-01“…Then, an early prediction for MTT partition direction method was adopted for further optimization of residual MTT. Experimental results show that the proposed method can significantly reduce encoding complexity, with a 74.3% reduction in encoding time compared to the original encoder with only 3.3% rate loss. …”
Get full text
Article -
255
Audio-visual speech enhancement with multi-level feature deep fusion under low signal-to-noise ratio
Published 2025-05-01“…The method consisted of an audio-visual encoding network, a fusion network, and an auditory decoding network. …”
Get full text
Article -
256
Transformer-based latency prediction for stream processing task
Published 2025-07-01“…A novel model based on Auto-encoders and Transformers was proposed to address the above challenges. …”
Get full text
Article -
257
Prediction of crop growth environmental data using LSTM
Published 2024-09-01“…Finally, the prediction results obtained from the proposed method were compared with those obtained from the least absolute shrinkage and selection operator (LASSO), random forest regression, bidirectional LSTM, and encoder-decoder LSTM. …”
Get full text
Article -
258
A multi objective collaborative reinforcement learning algorithm for flexible job shop scheduling
Published 2025-07-01“…First, a mathematical model for flexible job shop scheduling optimization is established, with the makespan and total energy consumption of the shop as optimization objectives, and a disjunctive graph is introduced to represent state features. …”
Get full text
Article -
259
Enhancing Missense Variant Pathogenicity Prediction with MissenseNet: Integrating Structural Insights and ShuffleNet-Based Deep Learning Techniques
Published 2024-09-01“…This model, advancing beyond standard predictive features, incorporates structural insights from AlphaFold2 protein predictions, thus optimizing structural data utilization. MissenseNet, built on the ShuffleNet architecture, incorporates an encoder-decoder framework and a Squeeze-and-Excitation (SE) module designed to adaptively adjust channel weights and enhance feature fusion and interaction. …”
Get full text
Article -
260
Chinese Sequence Labeling Based on Stack Pre-training Model
Published 2022-02-01“…In this paper, according to the relevance of the tasks, we use a stacked pre-training model to extract features, segment words, and perform named entity recognition/chunk tagging. Through in-depth research on the internal structure of BERT, while ensuring the accuracy of the original model, the Bidirectional Encoder Representations from Transformers (BERT) model is optimized, which reduces the complexity and the time cost of the model in the training and prediction process. In the upper-layer structure, compared with the traditional long short-term memory network (LSTM), this paper uses a two-layer bidirectional LSTM structure: the bottom layer uses a bidirectional long short-term memory network (Bi-LSTM) for word segmentation, and the top layer is used for sequence labeling tasks. In the New Semi-Conditional Random Field (NSCRF), the traditional semi-Markov Conditional Random Field (Semi-CRF) and Conditional Random Field (CRF) are combined while considering segmentation. The labeling of words improves accuracy in training and decoding. …”
Get full text
Article