Multi-strategy Horned Lizard Optimization Algorithm for complex optimization and advanced feature selection problems
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | SpringerOpen, 2025-06-01 |
| Series: | Journal of Big Data |
| Subjects: | |
| Online Access: | https://doi.org/10.1186/s40537-025-01205-7 |
| Summary: | Abstract Feature selection (FS) is a critical process in classification tasks within artificial intelligence, aimed at identifying a minimal yet highly informative subset of features to maximize classification accuracy. As a combinatorial NP-hard problem, FS necessitates powerful metaheuristic algorithms that serve as efficient wrapper-based FS strategies. However, when applied to high-dimensional datasets, characterized by a vast number of features and limited samples, these methods often suffer from performance degradation and increased computational costs. The Horned Lizard Optimization Algorithm (HLOA) is a nature-inspired metaheuristic that mathematically mimics the adaptive defense mechanisms of horned lizards, including crypsis, skin color modulation, blood-squirting, and escape movements. While HLOA demonstrates simplicity, flexibility, and a minimal parameter set, it is hindered by slow convergence and a tendency to become trapped in local optima, leading to suboptimal FS performance. To address these limitations, this paper introduces a Multi-Strategy Enhanced HLOA (mHLOA), integrating three novel enhancement strategies: a Local Escaping Operator (LEO), Orthogonal Learning (OL), and a RIME diversification mechanism. LEO enhances population diversity by enabling solutions to escape local optima, while the RIME operator improves exploration capabilities. Additionally, the OL disturbance mechanism refines the search process, ensuring better convergence and preventing premature stagnation. The efficacy of mHLOA is rigorously evaluated on 12 complex benchmark functions from the CEC’22 test suite and 14 standard datasets from the UCI Machine Learning Repository. Comparative analysis against recent state-of-the-art algorithms demonstrates that mHLOA achieves superior classification accuracy, selects a reduced yet highly effective feature subset, and exhibits robustness in high-dimensional FS tasks. These findings affirm the potential of mHLOA as a powerful optimization framework for advanced feature selection and complex optimization problems. (An illustrative sketch of the wrapper-based FS evaluation described here follows the record below.) |
| ISSN: | 2196-1115 |
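
The summary above frames FS as a wrapper-based problem in which a metaheuristic such as mHLOA searches over binary feature masks, trading classification accuracy against subset size. The sketch below shows what such a wrapper fitness evaluation commonly looks like; the KNN classifier, the 0.99/0.01 error-versus-size weighting, the train/test split, and the scikit-learn breast-cancer dataset are illustrative assumptions, not the article's exact configuration.

```python
# Minimal sketch of a wrapper-based feature-selection fitness function.
# The classifier, weighting, and dataset below are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score


def fs_fitness(mask, X_train, X_test, y_train, y_test, alpha=0.99):
    """Lower is better: weighted sum of classification error and subset-size ratio."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:            # an empty subset is penalized outright
        return 1.0
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(X_train[:, selected], y_train)
    error = 1.0 - accuracy_score(y_test, clf.predict(X_test[:, selected]))
    ratio = selected.size / mask.size
    return alpha * error + (1.0 - alpha) * ratio


# Usage: evaluate one random binary feature mask on a UCI-style dataset.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(0)
mask = rng.integers(0, 2, size=X.shape[1])
print(fs_fitness(mask, X_tr, X_te, y_tr, y_te))
```

A metaheuristic such as mHLOA would minimize a fitness of this kind over candidate masks, with operators like LEO, OL, and RIME perturbing the masks between evaluations to balance exploration and convergence.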