Showing 901 - 920 results of 1,673 for search 'forest (errors OR error)', query time: 0.11s
  1. 901

    AI-driven competency recommendations based on attendance patterns and academic performance by Junaidi, Teguh Wahyono, Irwan Sembiring

    Published 2025-06-01
    “…Gradient Boosting (GB) was the most effective model for weighting discipline and learning outcomes (Mean Squared Error [MSE]: 2.9224, Root Mean Squared Error [RMSE]: 1.4252, Coefficient of Determination [R2]: 0.9667), outperforming three alternatives. …”
    Get full text
    Article
  2. 902
  3. 903

    Causal Physics-Infused Hybrid Learning (CPIHL) Framework for Next-Gen Battery Health Forecasting by Sahar Qaadan, Aiman Alshare, Rami Alazrai, Alexander Popp, Benedikt Schmuelling

    Published 2025-01-01
    “…The CPIHL model demonstrates exceptional performance, achieving an R2 score of 0.9994, a mean absolute error of 0.0007, and a root mean square error of 0.0025, outperforming all baseline machine learning and deep learning models, including Random Forest, Artificial Neural Networks, Long Short-Term Memory, and Gated Recurrent Units. …”
    Get full text
    Article
  4. 904

    Machine learning–based feature prediction of convergence zones in ocean front environments by Weishuai Xu, Lei Zhang, Hua Wang

    Published 2024-01-01
    “…The model achieved an accuracy of 82.43% in predicting the convergence zone’s distance with an error of less than 1 km. Additionally, it attained a 77.1% accuracy in predicting the convergence zone’s width within a similar error range. …”
    Get full text
    Article
  5. 905

    Comparison of Trivariate Copula-Based Conditional Quantile Regression Versus Machine Learning Methods for Estimating Copper Recovery by Heber Hernández, Martín Alberto Díaz-Viera, Elisabete Alberdi, Aitor Goti

    Published 2025-02-01
    “…To simulate a high undersampling scenario, only 5% of the copper recovery information was used for training and validation, while the remaining 95% was used for prediction, applying in all these stages error metrics, such as R², MaxRE, MAE, MSE, MedAE, and MAPE. …”
    Get full text
    Article
  6. 906

    Geostatistics and Artificial Intelligence Applications for Spatial Evaluation of Bearing Capacity after Dynamic Compaction by Rodney Ewusi-Wilson, Junghee Park, Boyoung Yoon, Changho Lee

    Published 2022-01-01
    “…The model performance is examined using the correlations between SPT-based and predicted bearing capacity in the context of mean absolute error (MAE), coefficient of determination (r2), and root mean square error (RMSE). …”
    Get full text
    Article
  7. 907

    Optimized Demand Forecasting for Bike-Sharing Stations Through Multi-Method Fusion and Gated Graph Convolutional Neural Networks by Hebin Guo, Kexin Li, Yutong Rou

    Published 2024-01-01
    “…The results demonstrate that the multi-attribute, edge-weighted GGCN outperforms baseline and single-attribute models, achieving a Mean Absolute Error (MAE) of 0.521 and Mean Squared Error (MSE) of 0.918 for spring and autumn, and an MAE of 0.307 and MSE of 0.608 for summer and winter.…”
    Get full text
    Article
  8. 908

    Modeling Soil Temperature with Fuzzy Logic and Supervised Learning Methods by Bilal Cemek, Yunus Kültürel, Emirhan Cemek, Erdem Küçüktopçu, Halis Simsek

    Published 2025-06-01
    “…Performance was evaluated using the root mean square error (RMSE), the mean absolute error (MAE), and the coefficient of determination (R²). …”
    Get full text
    Article
  9. 909

    Advancing sedimentation modeling in large reservoir systems: Insights from multi-scale process coupling and machine learning by Yuning Tan, Huaixiang Liu, Yongjun Lu, Zhili Wang, Wenjie Li

    Published 2025-08-01
    “…New hydrological insights for the region: The proposed framework firstly reduced the total sedimentation error from 53.42% to 3.44% and the maximum group-wise error from 90.88% to 13.46%, highlighting the dominant influence of tributary sediment inputs and flocculation factor on reservoir sedimentation. …”
    Get full text
    Article
  10. 910

    A hybrid approach to financial big data analysis using extended ensemble learning and optimized spark streaming by Muhammad Babar

    Published 2025-09-01
    “…Empirical evaluations using the Portuguese Bank Marketing dataset demonstrate that the proposed architecture achieves a high prediction accuracy of 90.9%, outperforming individual models such as Logistic Regression, SVM, and Random Forest. The ensemble model also reports a mean absolute error (MAE) of 0.023 and a mean squared error (MSE) of 0.0018. …”
    Get full text
    Article
  11. 911

    Blended Ensemble Learning for Robust Normal Behavior Modeling of Wind Turbines by Jianghao Zhu, Tingting Pei, Le Su, Bin Lan, Wei Chen

    Published 2025-05-01
    “…The framework reduced mean absolute error by 25.1% and mean absolute percentage error by 33.4% compared to conventional methods. …”
    Get full text
    Article
  12. 912

    Snow depth estimation in Northeast China based on space-borne scatterometer data and ML model with optimal features by Wenfei Chen, Lingjia Gu, Xiaofeng Li, Xintong Fan

    Published 2025-08-01
    “…In comparison to the public SD product and the ground-based SD measurements, the experimental results using the RF model with optimal features demonstrate superior SD estimation performance, yielding a root mean square error (RMSE) of 3.91 cm, mean absolute error (MAE) of 2.27 cm, and an R2 of 0.80. …”
    Get full text
    Article
  13. 913

    Identifying the Peak Flowering Dates of Winter Rapeseed with a NBYVI Index Using Sentinel-1/2 by Fazhe Wu, Peng Lu, Shengbo Chen, Yucheng Xu, Zibo Wang, Rui Dai, Shuya Zhang

    Published 2025-03-01
    “…It evaluates the effectiveness of crop morphological indices in monitoring growth stages and explores the impacts of elevation and latitude on the peak flowering dates of winter rapeseed. The error ranges for predicting the peak flowering dates with the NDYI (traditional optical index) and the VV (crop morphological index) are generally 2–7 days and 2–6 days, respectively, while the error range for the NBYVI index is generally 0–4 days, demonstrating superior stability and accuracy compared to the NDYI and VV indices.…”
    Get full text
    Article
  14. 914

    Design and Evaluation of a Leader–Follower Isomorphic Vascular Interventional Surgical Robot by Pengfei Chen, Yutang Wang, Dapeng Tian

    Published 2025-01-01
    “…The leader–follower delivery error of the catheter/guidewire is less than 1 mm, and the leader–follower rotation error of the guidewire is less than 0.3° in an actual intervention task based on a human vascular model. …”
    Get full text
    Article
  15. 915

    An interpretable and stacking ensemble model for predicting heat and mass transfer of desiccant wheel by Mengyang Li, Liu Chen

    Published 2025-03-01
    “…Coefficient of Determination (R2), Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE) were used to measure this stacking model: the process side outlet temperature (R2 = 0.9467, RMSE = 1.5239, and MAE = 1.2721), the process side outlet humidity ratio (R2 = 0.9743, RMSE = 0.5728, MAE = 0.4531). …”
    Get full text
    Article
  16. 916

    Comparing the effect of pre-anesthesia clonidine and tranexamic acid on intraoperative bleeding volume in rhinoplasty: a machine learning approach by Zahra Asghari Varzaneh, Akram Hemmatipour, Hadi Kazemi-Arpanahi

    Published 2025-08-01
    “…The results revealed that the Linear and Ridge regression algorithms outperformed all other models based on three evaluation metrics: mean absolute error (MAE), mean square error (MSE), and R-squared. …”
    Get full text
    Article
  17. 917

    Predicting tilling and seeding operation times in grain production: A comparison of machine learning and mechanistic models by Luca Scheurer, Tobias Zimpel, Joerg Leukel

    Published 2025-08-01
    “…Nine ML algorithms and two conventional mechanistic models proposed by the American Society of Agricultural and Biological Engineers (ASAE EP496.3) were evaluated in a temporal external validation. Random forest (RF) models outperformed all other models, achieving a normalized root mean square error (NRMSE) of 0.215 and a coefficient of determination (R2) of 0.910. …”
    Get full text
    Article
  18. 918

    Predictive model to identify multiple synergistic effects of geriatric syndromes on quality of life in older adults: a hospital-based pilot study by Chien-Chou Su, Yung-Chen Yu, Deng-Chi Yang

    Published 2025-04-01
    “…Model performance was evaluated by 5-fold cross-validation with metrics of R-square, the mean square error of estimation and the mean absolute error of estimation. …”
    Get full text
    Article
  19. 919

    Predictive Factors of Length of Stay in Intensive Care Unit after Coronary Artery Bypass Graft Surgery based on Machine Learning Methods by Alireza Jafarkhani, Behzad Imani, Soheila Saeedi, Amir Shams

    Published 2025-02-01
    “…Results: The most important predictors of the LOS of CABG patients in the ICU were the length of intubation, body mass index (BMI), age, duration of surgery, and the number of postoperative transfusions of packed cells. The Random Forest model also performed best in predicting the effective factors (Mean Squared Error = 1.64, Mean Absolute Error = 0.93, and R2 = 0.28). Conclusion: The insights gained from the machine learning model highlight the significance of demographic and clinical variables in predicting LOS in ICU. …”
    Get full text
    Article
  20. 920

    Evaluating the Thermohydraulic Performance of Microchannel Gas Coolers: A Machine Learning Approach by Shehryar Ishaque, Naveed Ullah, Sanghun Choi, Man-Hoe Kim

    Published 2025-06-01
    “…The developed model was validated against a wide range of experimental data and was found to accurately predict the gas cooler capacity (Q) and pressure drop (ΔP) within an acceptable margin of error. Furthermore, advanced machine learning algorithms such as extreme gradient boosting (XGB), random forest (RF), support vector regression (SVR), k-nearest neighbors (KNNs), and artificial neural networks (ANNs) were employed to analyze their predictive capability. …”
    Get full text
    Article