Showing 1,361 - 1,380 results of 2,900 for search '"(feature OR features) parameters (computation" OR computational")', query time: 0.22s
  1. 1361

    LI-YOLOv8: Lightweight small target detection algorithm for remote sensing images that combines GSConv and PConv. by Pingping Yan, Xiangming Qi, Liang Jiang

    Published 2025-01-01
    “…In the domain of remote sensing image small target detection, challenges such as difficulties in extracting features of small targets, complex backgrounds that easily lead to confusion with targets, and high computational complexity with significant resource consumption are prevalent. …”
  2. 1362

    Finite-size effects in molecular simulations: a physico-mathematical view by Benedikt M. Reible, Carsten Hartmann, Luigi Delle Site

    Published 2025-12-01
    “…Here this feature is treated employing the same statistical mechanics framework developed for the first problem.…”
  3. 1363

    Contour wavelet diffusion – a fast and high-quality facial expression generation model by Chenwei Xu, Yuntao Zou

    Published 2024-12-01
    “…Latent space diffusion models have shown promise in speeding up training by leveraging feature space parameters, but they require additional network structures. …”
  4. 1364

    TFDense-GAN: a generative adversarial network for single-channel speech enhancement by Haoxiang Chen, Jinxiu Zhang, Yaogang Fu, Xintong Zhou, Ruilong Wang, Yanyan Xu, Dengfeng Ke

    Published 2025-03-01
    “…Abstract Research indicates that utilizing the spectrum in the time–frequency domain plays a crucial role in speech enhancement tasks, as it can better extract audio features and reduce computational consumption. For the speech enhancement methods in the time–frequency domain, the introduction of attention mechanisms and the application of DenseBlock have yielded promising results. …”
  5. 1365

    MLHI-Net: multi-level hybrid lightweight water body segmentation network for urban shoreline detection by Jianhua Ye, Pan Li, Yunda Zhang, Ze Guo, Shoujin Zeng, Youji Zhan

    Published 2025-02-01
“…Additionally, the network’s computational cost is 13.45 GFLOPs and its parameter count is 46.92 M, which can meet the requirements for real-time detection. …”
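As an aside, parameter and GFLOPs figures like those quoted above are normally obtained programmatically; below is a minimal, hypothetical PyTorch sketch using a stand-in torchvision segmentation model, since MLHI-Net's own definition is not given in this snippet.

# Hypothetical sketch: reporting a segmentation model's parameter count (in
# millions), the way figures such as "46.92 M parameters" are usually obtained.
# The torchvision model below is a stand-in, not MLHI-Net.
import torch
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights=None, num_classes=2)  # stand-in for MLHI-Net

n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params / 1e6:.2f} M")

# GFLOPs figures (e.g. 13.45 G) are normally measured by instrumenting a
# forward pass at a fixed input resolution with a profiling tool.
x = torch.randn(1, 3, 512, 512)
with torch.no_grad():
    _ = model(x)  # the forward pass a profiler would instrument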
  6. 1366

    YOLO-GCOF: A Lightweight Low-Altitude Drone Detection Model by Wanjun Yu, Kongxin Mo

    Published 2025-01-01
    “…The YOLO-GCOF model outperforms the original YOLOv8n, as demonstrated by a 1.1% improvement in mAP@50, alongside reductions in parameter count, computational overhead, and model size by 60%, 49.4%, and 55.1%, respectively. …”
  7. 1367

    Low-Damage Grasp Method for Plug Seedlings Based on Machine Vision and Deep Learning by Fengwei Yuan, Gengzhen Ren, Zhang Xiao, Erjie Sun, Guoning Ma, Shuaiyin Chen, Zhenlong Li, Zhenhong Zou, Xiangjiang Wang

    Published 2025-06-01
“…The lightweight MobileNet is used as the feature extraction network to reduce the number of network parameters. …”
  8. 1368

    CGDINet: A Deep Learning-Based Salient Object Detection Algorithm by Chengyu Hu, Jianxin Guo, Hanfei Xie, Qing Zhu, Baoxi Yuan, Yujie Gao, Xiangyang Ma, Jialu Chen, Juan Tian

    Published 2025-01-01
“…The results show that CGDINet outperforms other mainstream salient object detection models on evaluation metrics such as $\mathrm{maxF}_{\beta}$, $\mathrm{S}_{\alpha}$, and MAE, with almost no increase in computational cost (FLOPs) or parameters. The experimental results validate that CGDINet can effectively address the issues of incomplete global feature extraction and insufficient attention to key areas, thereby significantly enhancing salient object detection performance.…”
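For reference, the saliency metrics named in this snippet have standard definitions in the salient object detection literature (the paper may use minor variants); a brief sketch:

\[
\mathrm{MAE}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\bigl|S(x,y)-G(x,y)\bigr|,\qquad
F_{\beta}=\frac{(1+\beta^{2})\,\mathrm{Precision}\cdot\mathrm{Recall}}{\beta^{2}\,\mathrm{Precision}+\mathrm{Recall}},
\]

where $S$ is the predicted saliency map, $G$ the ground truth, and $\mathrm{maxF}_{\beta}$ the maximum $F_{\beta}$ over binarization thresholds (with $\beta^{2}$ commonly set to 0.3). $\mathrm{S}_{\alpha}$ usually denotes the structure measure $S_{\alpha}=\alpha S_{o}+(1-\alpha)S_{r}$ with $\alpha=0.5$, combining object-aware and region-aware structural similarity.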
  9. 1369

    Fusion of Multimodal Audio Data for Enhanced Speaker Identification Using Kolmogorov-Arnold Networks by Aryaman Tamotia, Dhruv Shantu Karmokar, Rethi Komal, K. Khadar Nawas, A. Shahina, A. Nayeemulla Khan

    Published 2025-01-01
“…Although classical deep learning methods are effective, they incur a rather high computational cost, which usually leads to cumbersome parameter-tuning processes and hence reduces their applicability to real-world deployments. …”
  10. 1370

    Flat U-Net: An Efficient Ultralightweight Model for Solar Filament Segmentation in Full-disk Hα Images by GaoFei Zhu, GangHua Lin, Xiao Yang, Cheng Zeng

    Published 2025-01-01
    “…Each block effectively optimizes the channel features from the previous layer, significantly reducing parameters. …”
  11. 1371

    Data Integration Based on UAV Multispectra and Proximal Hyperspectra Sensing for Maize Canopy Nitrogen Estimation by Fuhao Lu, Haiming Sun, Lei Tao, Peng Wang

    Published 2025-04-01
“…However, the CE of the integrated model decreased by 1.93% and 1.68%, respectively. Key features, including multispectral red-edge indices (NREI, NDRE, CI) and texture parameters (R1m) alongside hyperspectral indices (SR, PRI) and spectral parameters (SDy, Rg), exhibited varying directional impacts on CNC estimation using RF. …”
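For context, the red-edge and hyperspectral indices named here are usually built from near-infrared (NIR) and red-edge (RE) reflectance; a sketch of the common forms (band choices vary by sensor, and the paper's exact definitions may differ):

\[
\mathrm{NDRE}=\frac{R_{\mathrm{NIR}}-R_{\mathrm{RE}}}{R_{\mathrm{NIR}}+R_{\mathrm{RE}}},\qquad
\mathrm{CI}_{\mathrm{RE}}=\frac{R_{\mathrm{NIR}}}{R_{\mathrm{RE}}}-1,\qquad
\mathrm{PRI}=\frac{R_{531}-R_{570}}{R_{531}+R_{570}}.
\]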
  12. 1372

    Three-Dimensional Object Recognition Using Orthogonal Polynomials: An Embedded Kernel Approach by Aqeel Abdulazeez Mohammed, Ahlam Hanoon Al-sudani, Alaa M. Abdul-Hadi, Almuntadher Alwhelat, Basheera M. Mahmmod, Sadiq H. Abdulhussain, Muntadher Alsabah, Abir Hussain

    Published 2025-02-01
“…Various signal preprocessing operations have been used in computer vision, including smoothing, signal analysis, resizing, sharpening, and enhancement, to reduce unwanted distortions and to improve segmentation and image features. …”
  13. 1373

    Enhancing lung disease diagnosis with deep-learning-based CT scan image segmentation by Rima Tri Wahyuningrum, Achmad Bauravindah, Indah Agustien Siradjuddin, Budi Dwi Satoto, Amillia Kartika Sari, Anggraini Dwi Sensusiati

    Published 2025-09-01
    “…Whereas on the Kaggle dataset it achieved a Dice coefficient of 0.961, IoU of 0.930, computational time of 1.189 s, and 9.16 million trainable parameters. …”
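As a point of reference, the two segmentation overlap metrics quoted here are typically defined as follows, with $P$ the predicted mask and $G$ the ground-truth mask:

\[
\mathrm{Dice}=\frac{2\,|P\cap G|}{|P|+|G|},\qquad
\mathrm{IoU}=\frac{|P\cap G|}{|P\cup G|},\qquad
\mathrm{Dice}=\frac{2\,\mathrm{IoU}}{1+\mathrm{IoU}}.
\]

For a single mask pair, a Dice of 0.961 corresponds to an IoU of about 0.925; dataset-averaged figures such as the 0.930 quoted above need not satisfy the identity exactly.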
  14. 1374

    Multi-Strategy Improvement of Coal Gangue Recognition Method of YOLOv11 by Hongjing Tao, Lei Zhang, Zhipeng Sun, Xinchao Cui, Weixun Yi

    Published 2025-03-01
    “…It exhibits a slight increase in computational load, despite an almost unchanged number of parameters, and demonstrates the best overall detection performance. …”
  15. 1375

    A global object-oriented dynamic network for low-altitude remote sensing object detection by Daoze Tang, Shuyun Tang, Yalin Wang, Shaoyun Guan, Yining Jin

    Published 2025-05-01
    “…This study introduces the Global Object-Oriented Dynamic Network (GOOD-Net) algorithm, comprising three fundamental components: an object-oriented, dynamically adaptive backbone network; a neck network designed to optimize the utilization of global information; and a task-specific processing head augmented for detailed feature refinement. Novel module components, such as the ReSSD Block, GPSA, and DECBS, are integrated to enable fine-grained feature extraction while maintaining computational and parameter efficiency. …”
  16. 1376

    LWheatNet: a lightweight convolutional neural network with mixed attention mechanism for wheat seed classification by Xiaojuan Guo, Jianping Wang, Guohong Gao, Zihao Cheng, Zongjie Qiao, Ranran Zhang, Zhanpeng Ma, Xing Wang

    Published 2025-01-01
“…Each network consists of three core layers, with each core layer comprising one downsampling unit and multiple basic units. To minimize model parameters and computational load without sacrificing performance, each unit utilizes depthwise separable convolutions, channel shuffle, and channel split techniques. Results: To validate the effectiveness of the proposed model, we conducted comparative experiments with five classic network models: AlexNet, VGG16, MobileNet V2, MobileNet V3, and ShuffleNet V2. …”
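The abstract names channel split, depthwise separable convolution, and channel shuffle but gives no layer configuration; below is a minimal, hypothetical PyTorch sketch of such a "basic unit", modeled on the ShuffleNet V2 pattern rather than on LWheatNet's actual design.

# Minimal sketch (assumed layer sizes) of a basic unit built from the named
# techniques: channel split, depthwise separable convolution, channel shuffle.
# Modeled on the ShuffleNet V2 pattern; not LWheatNet's published configuration.
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2)
    return x.reshape(n, c, h, w)

class BasicUnit(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            # depthwise 3x3 convolution
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False), nn.BatchNorm2d(half),
            # pointwise 1x1 convolution
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)                 # channel split
        out = torch.cat((x1, self.branch(x2)), dim=1)
        return channel_shuffle(out, groups=2)      # channel shuffle

x = torch.randn(1, 64, 56, 56)
print(BasicUnit(64)(x).shape)  # torch.Size([1, 64, 56, 56])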
  17. 1377

    YOLOv8-OCHD: A Lightweight Wood Surface Defect Detection Method Based on Improved YOLOv8 by Zuxing Chen, Junjie Feng, Xueyan Zhu, Bin Wang

    Published 2025-01-01
    “…Secondly, a C2f_RVB module is designed, which uses the RepViTBlock technique to optimize feature representation and effectively reduce the number of model parameters. …”
  18. 1378

    Transcriptome Derived Artificial neural networks predict PRRC2A as a potent biomarker for epilepsy by Wayez Naqvi, Prekshi Garg, Prachi Srivastava

    Published 2025-06-01
    “…It aids clinicians in addressing patient parameters and translational research. Artificial neural networks (ANNs) are computer models that attempt to mimic the neurons present in the human brain. …”
  19. 1379

    Research on Lightweight Method of Insulator Target Detection Based on Improved SSD by Bing Zeng, Yu Zhou, Dilin He, Zhihao Zhou, Shitao Hao, Kexin Yi, Zhilong Li, Wenhua Zhang, Yunmin Xie

    Published 2024-09-01
    “…The experimental results show that the parameter number of the proposed model is reduced from 26.15 M to 0.61 M, the computational load is reduced from 118.95 G to 1.49 G, and the mAP is increased from 96.8% to 98%. …”
  20. 1380

    Analysing and Forecasting the Energy Consumption of Healthcare Facilities in the Short and Medium Term. A Case Study by Ali Koç, Serap Ulusam Seçkiner

    Published 2024-01-01
    “…The approach adopted for predicting hospital energy consumption involves five steps: data acquisition, data pre-processing, data prediction, hyper-parameter optimisation and feature analysis. Furthermore, all regression algorithms have undergone hyper-parameter optimisation using random search, grid search and Bayesian optimisation to achieve the minimum prediction errors represented by different metrics. …”
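The hyper-parameter optimisation step described in this abstract (random search, grid search, and Bayesian optimisation over regression models) can be illustrated with a short, hypothetical scikit-learn sketch; the estimators, search spaces, and energy-consumption features below are placeholders, not the paper's.

# Hypothetical sketch of hyper-parameter optimisation for an energy-consumption
# regressor: random search and grid search scored by mean absolute error.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                                   # placeholder features
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=500)    # placeholder consumption

param_space = {"n_estimators": [100, 200, 400],
               "max_depth": [2, 3, 4],
               "learning_rate": [0.01, 0.05, 0.1]}

random_search = RandomizedSearchCV(GradientBoostingRegressor(random_state=0),
                                   param_space, n_iter=10, cv=5,
                                   scoring="neg_mean_absolute_error",
                                   random_state=0).fit(X, y)
grid_search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                           param_space, cv=5,
                           scoring="neg_mean_absolute_error").fit(X, y)

print(random_search.best_params_, -random_search.best_score_)
print(grid_search.best_params_, -grid_search.best_score_)
# Bayesian optimisation would use an additional package (e.g. scikit-optimize).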