Showing 1,121 - 1,140 results of 2,900 for search '(feature OR features) parameters (computation OR computational)', query time: 0.32s
  1. 1121

    Optimized DenseNet Architectures for Precise Classification of Edible and Poisonous Mushrooms by Jay Prakash Singh, Debolina Ghosh, Jagannath Singh, Anurag Bhattacharjee, Mahendra Kumar Gourisaria

    Published 2025-06-01
    “…Despite problems such as possible over-reliance on pre-trained features and computational complexity, the modified DenseNet-121 is useful for accurate mushroom classification. …”
    Get full text
    Article
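
    The entry above mentions relying on pre-trained DenseNet-121 features; since the paper's exact modifications are not quoted here, the following is only a minimal transfer-learning sketch in PyTorch, assuming torchvision's ImageNet weights, a frozen feature extractor, and a hypothetical two-class (edible vs. poisonous) head.

        import torch.nn as nn
        from torchvision import models

        # Minimal sketch (not the paper's code): load ImageNet-pretrained DenseNet-121
        # (torchvision >= 0.13 weights API), freeze the convolutional feature extractor,
        # and replace the classifier with a 2-class edible/poisonous head.
        model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        for p in model.features.parameters():
            p.requires_grad = False
        model.classifier = nn.Linear(model.classifier.in_features, 2)
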
  2. 1122

    Assessing the Efficiency of Transformer Models with Varying Sizes for Text Classification: A Study of Rule-Based Annotation with DistilBERT and Other Transformers by Arafet Sbei, Khaoula ElBedoui, Walid Barhoumi

    Published 2025-08-01
    “…This efficiency is attributed to the distillation process that retains essential features while reducing computational demands. Notably, DistilBERT achieved an overall accuracy of 0.86, significantly surpassing BERT’s 0.55, GPT-2’s 0.36, XLNet’s 0.51, Ernie 2.0’s 0.72, Electra’s 0.74, ALBERT’s 0.72, and RoBERTa’s 0.71. …”
    Get full text
    Article
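
    For context on the comparison above, a DistilBERT sequence classifier can be set up in a few lines with the Hugging Face transformers library; the checkpoint name, binary label space, and example sentence below are placeholders rather than the study's rule-based annotation setup, and the classification head is randomly initialised until fine-tuned.

        import torch
        from transformers import AutoTokenizer, AutoModelForSequenceClassification

        # Placeholder checkpoint and label count; the study's data and labels differ.
        tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
        model = AutoModelForSequenceClassification.from_pretrained(
            "distilbert-base-uncased", num_labels=2)

        inputs = tokenizer("Example sentence to classify.", return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        predicted_label = logits.argmax(dim=-1).item()
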
  3. 1123
  4. 1124

    Using Cuckoo Search Algorithm to Predict Corporate Financial Risks and Alleviate Economic Uncertainty by Muqiao Cai

    Published 2025-08-01
    “…CSA is used to globally tune the network weights and feature parameters to improve the neural model’s convergence speed and predictive ability. …”
    Get full text
    Article
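
    The snippet above only says that CSA tunes the network weights and feature parameters globally; the sketch below is a generic Yang-Deb style cuckoo search over a real-valued parameter vector, with the objective function, search bounds, and hyperparameters (25 nests, abandonment rate 0.25, Lévy exponent 1.5) chosen purely for illustration.

        from math import gamma, pi, sin
        import numpy as np

        def levy_step(dim, beta=1.5, rng=None):
            # Mantegna's algorithm for Lévy-distributed step lengths.
            rng = rng or np.random.default_rng()
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0.0, sigma, dim)
            v = rng.normal(0.0, 1.0, dim)
            return u / np.abs(v) ** (1 / beta)

        def cuckoo_search(objective, dim, n_nests=25, pa=0.25, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            nests = rng.uniform(-1.0, 1.0, (n_nests, dim))      # candidate parameter vectors
            fitness = np.array([objective(x) for x in nests])
            best = nests[fitness.argmin()].copy()
            for _ in range(iters):
                for i in range(n_nests):
                    # New solution via a Lévy flight biased toward the current best nest.
                    candidate = nests[i] + 0.01 * levy_step(dim, rng=rng) * (nests[i] - best)
                    f = objective(candidate)
                    if f < fitness[i]:                           # greedy replacement
                        nests[i], fitness[i] = candidate, f
                # Abandon a fraction pa of the worst nests and rebuild them at random.
                worst = fitness.argsort()[-max(1, int(pa * n_nests)):]
                nests[worst] = rng.uniform(-1.0, 1.0, (len(worst), dim))
                fitness[worst] = [objective(x) for x in nests[worst]]
                best = nests[fitness.argmin()].copy()
            return best, float(fitness.min())

        # Toy usage: a sphere function stands in for the neural model's training loss.
        best_params, best_loss = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=10)
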
  5. 1125
  6. 1126
  7. 1127
  8. 1128

    Application of Gated Recurrent Unit in Electroencephalogram (EEG)-Based Mental State Classification by Gst. Ayu Vida Mastrika Giri, Ngurah Agus Sanjaya ER, I Ketut Gede Suhartana

    Published 2025-01-01
    “…The mean, standard deviation, skewness, kurtosis, power spectral density, zero-crossing rate, and root mean square were extracted as statistical features from the raw EEG data. After parameter tuning, the GRU-based model achieved an excellent average accuracy value of 95.94% and also yielded precision, recall, and F1-scores within the range of 0.95 to 0.97 over 5-fold cross-validation. …”
    Get full text
    Article
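
    The statistical features listed above are simple to compute per window; the sketch below does so with NumPy/SciPy for a single EEG channel, where the sampling rate and the use of mean Welch power as a scalar PSD summary are assumptions rather than the paper's settings.

        import numpy as np
        from scipy.signal import welch
        from scipy.stats import kurtosis, skew

        def eeg_window_features(x, fs=128.0):
            """Features named in the abstract for one signal window (fs is an assumption)."""
            _, psd = welch(x, fs=fs)
            zero_crossings = int(np.sum(np.diff(np.signbit(x))))   # count of sign changes
            return {
                "mean": float(np.mean(x)),
                "std": float(np.std(x)),
                "skewness": float(skew(x)),
                "kurtosis": float(kurtosis(x)),
                "psd_mean": float(np.mean(psd)),                   # scalar PSD summary (assumption)
                "zero_crossing_rate": zero_crossings / len(x),
                "rms": float(np.sqrt(np.mean(x ** 2))),
            }
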
  9. 1129
  10. 1130

    Self‐Supervised Pre‐Training and Few‐Shot Finetuning for Gas‐Bearing Prediction by Long Han, Xinming Wu, Renjie Chen, Yunhua Shi, Zhanxuan Hu, Huijing Fang

    Published 2025-06-01
    “…Based on the geological understanding that different attributes computed from the same sample convey correlated features reflecting geological properties, we pre‐train the network on large‐scale unlabeled data using a self‐supervised learning strategy of attribute masking and recovery. …”
    Get full text
    Article
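
    The masking-and-recovery strategy described above can be written as a very small PyTorch training step; the network, optimizer, tensor layout (batch, attribute channel, spatial dims), and mask ratio below are placeholders, not the authors' implementation.

        import torch
        import torch.nn.functional as F

        def attribute_masking_step(model, attrs, optimizer, mask_ratio=0.5):
            """One self-supervised step (sketch): hide random attribute channels and train
            the network to recover them from the visible ones.
            attrs: float tensor shaped (batch, n_attributes, *spatial_dims)."""
            b, c = attrs.shape[:2]
            mask = torch.rand(b, c, device=attrs.device) < mask_ratio        # True = hidden channel
            mask = mask.view(b, c, *([1] * (attrs.dim() - 2))).expand_as(attrs)
            recon = model(attrs.masked_fill(mask, 0.0))                      # predict the full attribute volume
            loss = F.mse_loss(recon[mask], attrs[mask])                      # penalise only the masked entries
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()
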
  11. 1131

    Enhancing task execution: a dual-layer approach with multi-queue adaptive priority scheduling by Mansoor Iqbal, Muhammad Umar Shafiq, Shouzab Khan, Obaidullah, Saad Alahmari, Zahid Ullah

    Published 2024-12-01
    “…Efficient task execution is critical to optimize the usage of computing resources in process scheduling. Various task scheduling algorithms ensure optimized and efficient use of computing resources. …”
    Get full text
    Article
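
    The quoted passage stays at the motivation level, so as one purely illustrative reading of "multi-queue adaptive priority scheduling", the sketch below keeps several FIFO queues at different priority levels and ages waiting tasks upward so low-priority work is not starved; none of this is taken from the paper's actual dual-layer design, and the level count and patience threshold are invented.

        from collections import deque

        class MultiQueueScheduler:
            """Illustrative multi-queue scheduler with aging (not the paper's algorithm):
            queue 0 is served first, and tasks that wait too long move up one level."""
            def __init__(self, levels=3, patience=3):
                self.queues = [deque() for _ in range(levels)]   # FIFO within each priority level
                self.patience = patience                         # tolerated waits before promotion

            def submit(self, name, level):
                self.queues[level].append([name, 0])             # [task name, waits so far]

            def next_task(self):
                for level, queue in enumerate(self.queues):
                    if queue:
                        name, _ = queue.popleft()
                        self._age(level)
                        return name
                return None

            def _age(self, served_level):
                # Each dispatch counts as a wait for tasks in lower queues; promote overdue ones.
                for level in range(served_level + 1, len(self.queues)):
                    for entry in list(self.queues[level]):
                        entry[1] += 1
                        if entry[1] >= self.patience:
                            self.queues[level].remove(entry)
                            self.queues[level - 1].append([entry[0], 0])

        # Toy usage: six urgent tasks plus one background job; aging promotes "backup"
        # to queue 0 while the urgent work drains, so it is dispatched right afterwards.
        sched = MultiQueueScheduler()
        sched.submit("backup", level=2)
        for i in range(6):
            sched.submit("urgent-%d" % i, level=0)
        print([sched.next_task() for _ in range(7)])
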
  12. 1132

    Enhanced YOLOv8 for Efficient Parcel Identification in Disordered Logistics Environments by Han Yu, Zhang Fengshou, Zhuang Gaoshuai, Qu Yuanhao, He Aohui, Duan Qingyang

    Published 2025-04-01
    “…The proposed algorithm introduces the C2f-OR module to reduce parameters and computation, the Conv-Ghost module for efficient feature extraction, and the HIoU loss function to enhance identification accuracy. …”
    Get full text
    Article
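
    Of the components named above, the Ghost-style convolution is the most standard, so the sketch below shows a GhostNet-style block in PyTorch, in which one ordinary convolution produces a few "intrinsic" maps and a cheap depthwise convolution generates the remaining "ghost" maps; the C2f-OR module and HIoU loss are specific to the paper and are not reproduced, and the kernel sizes and SiLU activation are assumptions.

        import torch
        import torch.nn as nn

        class GhostConv(nn.Module):
            """GhostNet-style convolution sketch: the primary conv yields c_out // ratio
            intrinsic feature maps, a depthwise conv generates the remaining ghost maps,
            and the two are concatenated, cutting parameters and computation versus a
            plain convolution of the same output width."""
            def __init__(self, c_in, c_out, k=1, ratio=2, dw_k=5, act=nn.SiLU):
                super().__init__()
                c_primary = c_out // ratio
                self.primary = nn.Sequential(
                    nn.Conv2d(c_in, c_primary, k, padding=k // 2, bias=False),
                    nn.BatchNorm2d(c_primary), act())
                self.cheap = nn.Sequential(
                    nn.Conv2d(c_primary, c_out - c_primary, dw_k, padding=dw_k // 2,
                              groups=c_primary, bias=False),
                    nn.BatchNorm2d(c_out - c_primary), act())

            def forward(self, x):
                y = self.primary(x)                      # intrinsic maps
                return torch.cat([y, self.cheap(y)], 1)  # intrinsic + ghost maps

        # Quick shape check with an assumed 64 -> 128 channel block.
        print(GhostConv(64, 128)(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 128, 32, 32])
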
  13. 1133

    Advancing breast cancer diagnosis: token vision transformers for faster and accurate classification of histopathology images by Mouhamed Laid Abimouloud, Khaled Bensid, Mohamed Elleuch, Mohamed Ben Ammar, Monji Kherallah

    Published 2025-01-01
    “…This hybrid architecture aims to enhance feature extraction and classification accuracy with shorter training time and fewer parameters by minimizing the number of input patches used during training. Input patches are tokenized with convolutional layers, and encoder transformer layers process the tokens across all network layers for fast and accurate breast cancer tumor subtype classification. …”
    Get full text
    Article
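
    The convolutional tokenization described above (patches embedded by convolution before the transformer encoder) can be illustrated with a single strided Conv2d; the patch size, embedding dimension, and input resolution below are illustrative assumptions, not the paper's settings.

        import torch
        import torch.nn as nn

        class ConvTokenizer(nn.Module):
            """Turns an image into a token sequence: a conv whose stride equals its kernel
            size embeds each non-overlapping patch, and the spatial grid is flattened into
            the tokens consumed by the transformer encoder layers."""
            def __init__(self, in_ch=3, embed_dim=192, patch=16):
                super().__init__()
                self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)

            def forward(self, x):                           # x: (B, C, H, W)
                grid = self.proj(x)                         # (B, D, H / patch, W / patch)
                return grid.flatten(2).transpose(1, 2)      # (B, num_patches, D)

        # An assumed 224x224 histopathology tile becomes 196 tokens of width 192.
        print(ConvTokenizer()(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 196, 192])
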
  14. 1134

    MDMU-Net: 3D multi-dimensional decoupled multi-scale U-Net for pancreatic cancer segmentation by Lian Lu, Miao Wu, Gan Sen, Fei Ren, Tao Hu

    Published 2025-08-01
    “…While maintaining clinically viable precision, the model significantly improves computational efficiency, with parameter count (26.97M) and FLOPs (84.837G) reduced by 65.5% and 71%, respectively, compared to UNETR, providing reliable algorithmic support for precise diagnosis and treatment of pancreatic cancer.…”
    Get full text
    Article
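
    The 26.97M / 84.837G figures quoted above are the standard way lightweight segmentation models are compared; as a small aside, parameter counts can be read off any PyTorch model directly, and the quoted reduction percentages imply the rough size of the UNETR baseline (simple arithmetic on the quoted numbers, not values reported in the snippet).

        import torch.nn as nn

        def params_in_millions(model: nn.Module) -> float:
            """Count trainable parameters, reported in millions as in the 26.97M figure above."""
            return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

        # Back-of-envelope check, assuming the quoted reductions are relative to UNETR:
        # 26.97 / (1 - 0.655) is roughly 78.2M parameters for the baseline, and
        # 84.837 / (1 - 0.71) is roughly 292.5G FLOPs.
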
  15. 1135
  16. 1136
  17. 1137
  18. 1138
  19. 1139
  20. 1140