Showing 561 - 580 results of 1,673 for search 'forest (errors OR error)', query time: 0.12s
  561.

    LC oscillator frequency prediction using machine learning linear regression algorithm by Mandar Jatkar, Vasudeva G, Tripti R. Kulkarni, Roopa R. Kulkarni, Aneesh Pandurangi

    Published 2025-07-01
    “…It was found that Linear Regression reaches close to zero RMSE, but SVR and Random Forest show higher errors. The study proves that log-transformed linear regression works as a basic but efficient method for accurate frequency estimation in resonant LC circuits.…”
    Get full text
    Article
  562.

    Simulation of snow accumulation and melting in the Kama river basin using data from global prognostic models by S. V. Pyankov, A. N. Shikhov, P. G. Mikhaylyukova

    Published 2019-12-01
    “…The most important result is that under conditions of 2017/18 the mean square error of calculating the maximum snow storage by the GFS, GEM and PL-AB models was less than 25% of its measured values. …”
    Get full text
    Article
  563.

    Multi-Output Regression for the Prediction of World-Class Performances in Women’s Handball by Rayane Elimam, Nicolas NICOLAS, Jacques Prioux, Jacky Montmain, Stephane Perrey

    Published 2025-01-01
    “…We compared 4 single-output models (kNN, regression tree, random forest and neural networks (NN)), their multi-output counterparts and a dummy baseline (predicting the average performance of each player over the last month) in terms of average root mean squared error (aRMSE) during a chronological evaluation where previous trainings and games data are used to train models to predict the next game performances. …”
    Get full text
    Article
  564.

    Transformers deep learning models for missing data imputation: an application of the ReMasker model on a psychometric scale by Monica Casella, Nicola Milano, Pasquale Dolce, Davide Marocco

    Published 2024-12-01
    “…Various factors contribute to this issue, including participant non-response, dropout, or technical errors during data collection. Traditional methods like mean imputation or regression, commonly used to handle missing data, rely upon assumptions that may not hold for psychological data and can lead to distorted results. Methods: This study aims to evaluate the effectiveness of transformer-based deep learning for missing data imputation, comparing ReMasker, a masking autoencoding transformer model, with conventional imputation techniques (mean and median imputation, Expectation–Maximization algorithm) and machine learning approaches (K-nearest neighbors, MissForest, and an Artificial Neural Network). …”
    Get full text
    Article
  565.

    Comparison of Machine-Learning Algorithms for Near-Surface Air-Temperature Estimation from FY-4A AGRI Data by Ke Zhou, Hailei Liu, Xiaobo Deng, Hao Wang, Shenglan Zhang

    Published 2020-01-01
    “…The performance of each model and the temporal and spatial distribution of the estimated Tair errors were analyzed. The results showed that the XGB model had better overall performance, with R2 of 0.902, bias of −0.087°C, and root-mean-square error of 1.946°C. …”
    Get full text
    Article
  566.
  567.

    The Role of Landscape Metrics and Spatial Processes in Performance Evaluation of GEOMOD (Case Study: Neka River Basin) by Shrif Joorabian Shooshtari, Kamran Shayesteh, Mehdi Gholamalifard, Mahmood Azari, Juan Ignacio López-Moreno

    Published 2017-09-01
    “…According to the modeling results, a decrease of 4225 ha was revealed in this class of land cover. The area under forest showed a decreasing trend from 2001 to 2010, and the model showed a good consistency between the forest areas of reference and simulated maps with a relative error value of zero. …”
    Get full text
    Article
  568.

    Estimating Maize Leaf Water Content Using Machine Learning with Diverse Multispectral Image Features by Yuchen Wang, Jianliang Wang, Jiayue Li, Jiacheng Wang, Hanzeyu Xu, Tao Liu, Juan Wang

    Published 2025-03-01
    “…The results indicate that the RFR model performs optimally during the seedling stage, with a relative root mean square error (RRMSE) of 2.99%, whereas estimation errors are larger during the tasseling stage, with an RRMSE of 4.13%. …”
    Get full text
    Article
  569.

    Reconstructing Evapotranspiration in British Columbia Since 1850 Using Publicly Available Tree-Ring Plots and Climate Data by Hang Li, John Rex

    Published 2025-03-01
    “…ET satellite images from 1982 to 2010 formed our dataset to train models for each vegetated pixel. The random forest regression outperformed the other approaches with lower errors and better robustness (adjusted R2 value = 0.69; root mean square error = 10.72 mm/month). …”
    Get full text
    Article
  570.

    Advanced sentiment analysis in online shopping: Implementing LSTM models analyzing E-commerce user sentiments by Lu Liyuan

    Published 2025-07-01
    “…Sarcasm and irony accounted for 22% of the classification errors, while mixed sentiment accounted for 18%, and implicit sentiment accounted for 15%. …”
    Get full text
    Article
  571.

    Predictive Modeling of River Water Temperatures in Catu River: A Neural Network-Based Approach by Carmen Goncalves de Macedo e Silva, José Roberto de Araújo Fontoura, Alarcon Matos de Oliveira, Thais de Souza Neri, Roberto Luiz Souza Monteiro, Thiago Barros Murari, Alexandre do Nascimento Silva, Leandro Brito Santos, Marcos Batista Figueredo

    Published 2025-01-01
    “…The results show that the BiLSTM model achieved the best performance, with a root mean square error (RMSE) of 0.12°C and R2 = 0.98, followed by BPNN with an RMSE of 0.18°C and R2 = 0.91, and the Random Forest model, which obtained an NSE of 0.95. …”
    Get full text
    Article
  572.

    Elephant Census in the Lac Télé Community Reserve, Republic of Congo by Fortuné Iyenguet, Guy-Aimé Malanda, Bola Madzoke, Hugo Rainey, Catherine Schloeder, Michael Jacobs

    Published 2006-12-01
    “…However, there was a large error in calculating estimates because of the low number of dung piles. …”
    Get full text
    Article
  573.

    Spatially-informed interpolation for reconstructing lake area time series using semantic neighborhood correlation by Chen Liu

    Published 2025-07-01
    “…Compared with polynomial fitting, Random Forest, and Long Short-Term Memory, SNCI consistently achieves lower interpolation errors. …”
    Get full text
    Article
  574.

    Study on the temperature prediction model of residual coal in goaf based on ACO-KELM by ZHAI Xiaowei, WANG Chen, HAO Le, LI Xintian, HOU Qinyuan, MA Teng

    Published 2024-12-01
    “…Compared to the prediction models based on extreme learning machine (ELM) and random forest (RF) algorithms, the ACO-KELM model achieved an average absolute error of 0.0701°C and a root mean square error (RMSE) of 0.0748°C on the test set, reducing these errors by 65% and 195%, respectively, compared to the ELM-based model, and by 53% and 156%, respectively, compared to the RF-based model. …”
    Get full text
    Article
  575.

    Biomass and Volume Models Based on Stump Diameter for Assessing Degradation of Miombo Woodlands in Tanzania by Bernardol John Manyanda, Wilson Ancelm Mugasha, Emannuel F. Nzunda, Rogers Ernest Malimbwi

    Published 2019-01-01
    “…Models to estimate forest degradation in terms of removed volume and biomass from the extraction of wood fuel and logging using stump diameter (SD) are lacking. …”
    Get full text
    Article
  576.
  577.

    Improved estimation of two-phase capillary pressure with nuclear magnetic resonance measurements via machine learning by Oriyomi Raheem, Misael M. Morales, Wen Pan, Carlos Torres-Verdín

    Published 2025-12-01
    “…Extreme Gradient Boosting and Random Forest models performed the best, with average estimation errors of 5% and 10%, respectively, for capillary pressure and pore throat size distribution. …”
    Get full text
    Article
  578.

    Maximizing multi-source data integration and minimizing the parameters for greenhouse tomato crop water requirement prediction by Xinyue Lv, Youli Li, Lili Zhangzhong, Chaoyang Tong, Yibo Wei, Guangwei Li, Yingru Yang

    Published 2025-08-01
    “…The results show that the stacking model has the best prediction effect, and the error is lower than that of RandomForest, LightGBM, CatBoost, Average fusion model and Weighted fusion model. …”
    Get full text
    Article
  579.

    Investigating Tree Family Machine Learning Techniques for a Predictive System to Unveil Software Defects by Rashid Naseem, Bilal Khan, Arshad Ahmad, Ahmad Almogren, Saima Jabeen, Bashir Hayat, Muhammad Arif Shah

    Published 2020-01-01
    “…Performance of each technique is evaluated using different measures, i.e., mean absolute error (MAE), relative absolute error (RAE), root mean squared error (RMSE), root relative squared error (RRSE), specificity, precision, recall, F-measure (FM), G-measure (GM), Matthew’s correlation coefficient (MCC), and accuracy. …”
    Get full text
    Article
  580.