Showing 241 - 260 results of 549 for search 'optimal encoder and comparator', query time: 0.14s
  241.

    RCSAN residual enhanced channel spatial attention network for stock price forecasting by WenJie Sun, Ziyang Liu, ChunHong Yuan, Xiang Zhou, YuTing Pei, Cui Wei

    Published 2025-07-01
    “…The model reduces RMSE by 17.3–49.3% compared to traditional methods and 6.2–11.6% compared to Transformer variants, with the highest R² reaching 93.17% and an increase in return on investment to 482.64%. …”
    Get full text
    Article
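The RMSE figure quoted in this abstract is the standard root-mean-square error. A minimal sketch of the metric, with purely illustrative data (the function name and values below are not from the paper):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between two equal-length sequences."""
    assert len(y_true) == len(y_pred) and y_true
    return math.sqrt(
        sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    )

# Illustrative only: comparing a model's errors against a baseline's
# errors is how percentage reductions like "17.3-49.3%" are reported.
baseline_err = rmse([10.0, 12.0, 11.0], [11.0, 10.0, 13.0])
```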
  242.

    Research on Shale Oil Well Productivity Prediction Model Based on CNN-BiGRU Algorithm by Yuan Pan, Xuewei Liu, Fuchun Tian, Liyong Yang, Xiaoting Gou, Yunpeng Jia, Quan Wang, Yingxi Zhang

    Published 2025-05-01
    “…The CNN-BiGRU model was implemented on the TensorFlow framework, with rigorous validation of model robustness and systematic evaluation of feature importance. Hyperparameter optimization via grid searching yielded optimal configurations, while field applications demonstrated operational feasibility. …”
    Get full text
    Article
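The grid-search hyperparameter optimization mentioned in this abstract can be sketched in a framework-agnostic way. The parameter names and scoring function below are hypothetical, not taken from the paper:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Score every combination in param_grid; return the best (lowest score).

    param_grid: dict mapping parameter name -> list of candidate values.
    score_fn:   callable taking a dict of parameters and returning a score
                where lower is better (e.g. validation RMSE).
    """
    names = list(param_grid)
    best_params, best_score = None, float("inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical search space for a CNN-BiGRU-style model.
grid = {"learning_rate": [1e-3, 1e-4], "hidden_units": [64, 128]}
best, _ = grid_search(
    grid,
    lambda p: abs(p["learning_rate"] - 1e-4) + abs(p["hidden_units"] - 128) / 1000,
)
```

Exhaustive search is tractable only for small grids; real studies usually coarsen the grid or switch to random search as the number of hyperparameters grows.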
  243.

    MRMS-CNNFormer: A Novel Framework for Predicting the Biochemical Recurrence of Prostate Cancer on Multi-Sequence MRI by Tao Lian, Mengting Zhou, Yangyang Shao, Xiaqing Chen, Yinghua Zhao, Qianjin Feng

    Published 2025-05-01
    “…Accurate preoperative prediction of biochemical recurrence (BCR) in prostate cancer (PCa) is essential for treatment optimization, and demands an explicit focus on tumor microenvironment (TME). …”
    Get full text
    Article
  244.

    PBX micro defect characterization by using deep learning and image processing of micro CT images by Liang-liang Lv, Wei-bin Zhang, Xiao-dong Pan, Gong-ping Li, Cui Zhang

    Published 2025-06-01
    “…The PBX_SegNet is built on the encoder–decoder architecture of U-Net. We optimize the structure of skip connection in PBX_SegNet and introduce a concurrent spatial and channel squeeze and excitation (SCSE) module on each stage in the encoder network and in the decoder network. …”
    Get full text
    Article
  245.

    A fake news detection model using the integration of multimodal attention mechanism and residual convolutional network by Ying Lu, Naiwei Yao

    Published 2025-07-01
    “…Baseline models used for comparison include Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized Bidirectional Encoder Representations from Transformers Approach (RoBERTa), Generalized Autoregressive Pretraining for Language Understanding (XLNet), Enhanced Representation through Knowledge Integration (ERNIE), and Generative Pre-trained Transformer 3.5 (GPT-3.5). …”
    Get full text
    Article
  246.

    Fast adaptive mode decision algorithm for H.264 based on spatial correlation by FENG Bin, LIU Wen-yu, ZHU Guang-xi

    Published 2006-01-01
    “…A fast adaptive inter-prediction mode decision method was proposed to reduce the complexity of the H.264 encoder. First, the candidate inter modes used in rate-distortion optimization are limited to a small mode group (MG) using the characteristics of the motion-compensated residual image. Then the two most probable modes of the chosen MG are obtained from the modes of the upper and left macroblocks (MBs). By calculating and comparing the rate-distortion costs of these two modes, the optimum mode of the MB is determined. Experimental results show that the proposed method can save encoding time by up to 65% on average with …”
    Get full text
    Article
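The mode-decision steps described in this abstract reduce, in outline, to pruning the candidate set and then comparing rate-distortion (RD) costs. A minimal sketch under those assumptions; the mode names and cost values are illustrative, not from the paper:

```python
def fast_mode_decision(mode_group, up_mb_mode, left_mb_mode, rd_cost):
    """Pick an inter mode by RD cost over at most two probable candidates.

    mode_group:   candidate modes already narrowed using the residual image.
    up_mb_mode,
    left_mb_mode: modes of the neighbouring macroblocks, used as predictors.
    rd_cost:      callable returning the rate-distortion cost of a mode.
    """
    # Prefer neighbour modes that actually fall inside the chosen mode group.
    probable = [m for m in (up_mb_mode, left_mb_mode) if m in mode_group]
    if not probable:                  # fall back to the whole group
        probable = list(mode_group)
    candidates = probable[:2]         # at most the two most probable modes
    return min(candidates, key=rd_cost)

# Illustrative call with hypothetical RD costs per partition mode.
costs = {"16x16": 10.0, "16x8": 14.0, "8x8": 12.0}
best_mode = fast_mode_decision(["16x16", "16x8", "8x8"], "16x16", "8x8", costs.get)
```

The saving comes from evaluating RD cost for two candidates instead of the full mode set; the sketch deliberately omits the residual-image analysis that builds `mode_group` in the first place.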
  247.

    A lightweight mechanism for vision-transformer-based object detection by Yanming Ye, Qiang Sun, Kailong Cheng, Xingfa Shen, Dongjing Wang

    Published 2025-05-01
    “…Through computational optimization, XFCOS reduces encoder FLOPs to 13.5G, representing a 17.2% decrease compared to TSP-FCOS’s 16.3G, while cutting activation memory from 285.78 to 264.64M, a reduction of 7.4%. …”
    Get full text
    Article
  248.

    From Coarse to Crisp: Enhancing Tree Species Maps with Deep Learning and Satellite Imagery by Taebin Choe, Seungpyo Jeon, Byeongcheol Kim, Seonyoung Park

    Published 2025-06-01
    “…Applying the proposed methodology to Sobaeksan and Jirisan National Parks in South Korea, the performance of various machine learning (ML) and deep learning (DL) models was compared, including traditional ML (linear regression, random forest) and DL architectures (multilayer perceptron (MLP), spectral encoder block (SEB)-linear, and SEB-transformer). …”
    Get full text
    Article
  249.

    An Evolutionary Deep Reinforcement Learning-Based Framework for Efficient Anomaly Detection in Smart Power Distribution Grids by Mohammad Mehdi Sharifi Nevisi, Mehrdad Shoeibi, Francisco Hernando-Gallego, Diego Martín, Sarvenaz Sadat Khatami

    Published 2025-05-01
    “…The proposed DRL-NSABC model is evaluated using four benchmark datasets: smart grid, advanced metering infrastructure (AMI), smart meter, and Pecan Street, widely recognized in anomaly detection research. A comparative analysis against state-of-the-art deep learning (DL) models, including RL, CNN, RNN, the generative adversarial network (GAN), the time-series transformer (TST), and bidirectional encoder representations from transformers (BERT), demonstrates the superiority of the proposed DRL-NSABC. …”
    Get full text
    Article
  250.

    A Near-Infrared Imaging System for Robotic Venous Blood Collection by Zhikang Yang, Mao Shi, Yassine Gharbi, Qian Qi, Huan Shen, Gaojian Tao, Wu Xu, Wenqi Lyu, Aihong Ji

    Published 2024-11-01
    “…The U-Net+ResNet18 neural network integrates the residual blocks from ResNet18 into the encoder of the U-Net to form a new neural network. …”
    Get full text
    Article
  251.

    Embedded Image and Video Coding Algorithm Based on Adaptive Filtering Equation by Zhe Fu

    Published 2021-01-01
    “…The algorithm is integrated into the encoder for video capture and encoding. By capturing videos of different formats, resolutions, and durations, the sizes of the video files collected before and after the algorithm optimization are compared, measuring how much storage the optimized algorithm's video files occupy in the actual system. …”
    Get full text
    Article
  252.

    GLN-LRF: global learning network based on large receptive fields for hyperspectral image classification by Mengyun Dai, Tianzhe Liu, Youzhuang Lin, Zhengyu Wang, Yaohai Lin, Changcai Yang, Riqing Chen

    Published 2025-05-01
    “…The proposed GLNet adopts an encoder-decoder architecture with skip connections. …”
    Get full text
    Article
  253.

    HR Management Big Data Mining Based on Computational Intelligence and Deep Learning by Genliang Zhao, Zhe Xue

    Published 2021-01-01
    “…Finally, extensive experiments on real-world HR data sets clearly validate the effectiveness and interpretability of the proposed framework and its variants compared to state-of-the-art benchmarks.…”
    Get full text
    Article
  254.

    A VVC intra coding method based on fast partition for coding unit by ZHONG Hui, LU Yu, YIN Haibing, HUANG Xiaofeng

    Published 2024-08-01
    “…Then, an early-prediction method for the MTT partition direction was adopted to further optimize the remaining MTT partitioning. Experimental results show that the proposed method significantly reduces encoding complexity, cutting encoding time by 74.3% compared to the original encoder with only a 3.3% rate loss. …”
    Get full text
    Article
  255.

    Audio-visual speech enhancement with multi-level feature deep fusion under low signal-to-noise ratio by ZHANG Tianqi, SHEN Xiwen, TANG Juan, TAN Shuang

    Published 2025-05-01
    “…The method consisted of an audio-visual encoding network, a fusion network, and an auditory decoding network. …”
    Get full text
    Article
  256.

    Transformer-based latency prediction for stream processing task by Zheng Chu, Baozhu Li, Changtian Ying

    Published 2025-07-01
    “…A novel model based on Auto-encoders and Transformers was proposed to address the above challenges. …”
    Get full text
    Article
  257.

    Prediction of crop growth environmental data using LSTM by WU Chao, ZHOU Zijing, HUANG Jinhua, XU Xiaoyin, QIU Hong, PENG Yeping

    Published 2024-09-01
    “…Finally, the prediction results obtained from the proposed method were compared with those obtained from least absolute shrinkage and selection operator (LASSO), random forest regression, bidirectional LSTM, and encoder-decoder LSTM. …”
    Get full text
    Article
  258.

    A multi objective collaborative reinforcement learning algorithm for flexible job shop scheduling by Jian Li, Shifa Li, Pengbo He, Huankun Li

    Published 2025-07-01
    “…First, a mathematical model for flexible job shop scheduling optimization is established, with the makespan and total energy consumption of the shop as optimization objectives, and a disjunctive-graph is introduced to represent state features. …”
    Get full text
    Article
  259.

    Enhancing Missense Variant Pathogenicity Prediction with MissenseNet: Integrating Structural Insights and ShuffleNet-Based Deep Learning Techniques by Jing Liu, Yingying Chen, Kai Huang, Xiao Guan

    Published 2024-09-01
    “…This model, advancing beyond standard predictive features, incorporates structural insights from AlphaFold2 protein predictions, thus optimizing structural data utilization. MissenseNet, built on the ShuffleNet architecture, incorporates an encoder-decoder framework and a Squeeze-and-Excitation (SE) module designed to adaptively adjust channel weights and enhance feature fusion and interaction. …”
    Get full text
    Article
  260.

    Chinese Sequence Labeling Based on Stack Pre-training Model by LIU Yu-peng, LI Guo-dong

    Published 2022-02-01
    “…In this paper, according to the relevance of the tasks, a stacked pre-training model is used for feature extraction, word segmentation, and named-entity recognition/chunk tagging. Through in-depth study of the internal structure of the Bidirectional Encoder Representation from Transformers (BERT), the model is optimized while preserving the accuracy of the original, reducing its complexity and time cost during training and prediction. In the upper-layer structure, instead of a single traditional long short-term memory network (LSTM), a two-layer bidirectional LSTM is used: the bottom layer performs word segmentation with a bidirectional LSTM (Bi-LSTM), and the top layer handles the sequence-labeling task. A New Semi-Conditional Random Field (NSCRF) combines the traditional semi-Markov conditional random field (Semi-CRF) and conditional random field (CRF) while accounting for segmentation, improving labeling accuracy in both training and decoding. …”
    Get full text
    Article