A Closest Resemblance Classifier with Feature Interval Learning and Outranking Measures for Improved Performance


Bibliographic Details
Main Author: Nabil Belacel
Format: Article
Language: English
Published: MDPI AG 2024-12-01
Series: Algorithms
Online Access: https://www.mdpi.com/1999-4893/18/1/7
Description
Summary: Classifiers today face numerous challenges, including overfitting, high computational costs, low accuracy, imbalanced datasets, and lack of interpretability. Additionally, traditional methods often struggle with noisy or missing data. To address these issues, we propose novel classification methods based on feature partitioning and outranking measures. Our approach eliminates the need for prior domain knowledge by automatically learning feature intervals directly from the data. These intervals capture key patterns, enhancing adaptability and insight. To improve robustness, we incorporate outranking measures, which reduce the impact of noise and uncertainty through pairwise comparisons of alternatives across features. We evaluate our classifiers on multiple UCI repository datasets and compare them with established methods, including k-Nearest Neighbors (k-NN), Support Vector Machine (SVM), Random Forest (RF), Neural Networks (NNs), Naive Bayes (NB), and Nearest Centroid (NC). The results demonstrate that our methods are robust to imbalanced datasets and irrelevant features, achieving comparable or superior performance in many cases. Furthermore, our classifiers offer enhanced interpretability while maintaining high predictive accuracy.
ISSN: 1999-4893
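The summary describes two ideas: learning per-class feature intervals directly from the data, and classifying by an outranking-style score built from per-feature comparisons. The following is a minimal illustrative sketch of that idea only, not the article's actual algorithm; all function names are hypothetical, and the concordance score here is a simple fraction of features falling inside a class's interval:

```python
# Illustrative sketch of a feature-interval classifier with an
# outranking-style concordance score. This is a simplified
# approximation of the idea in the abstract, not the published method.
from collections import defaultdict

def learn_intervals(X, y):
    """Learn a per-class [min, max] interval for each feature."""
    intervals = defaultdict(list)
    for label in set(y):
        rows = [x for x, c in zip(X, y) if c == label]
        # One (min, max) pair per feature column for this class.
        intervals[label] = [(min(col), max(col)) for col in zip(*rows)]
    return dict(intervals)

def concordance(x, feats):
    """Fraction of features whose value lies inside the class interval."""
    hits = sum(lo <= v <= hi for v, (lo, hi) in zip(x, feats))
    return hits / len(feats)

def classify(x, intervals):
    """Assign the class whose intervals the sample most closely resembles."""
    return max(intervals, key=lambda c: concordance(x, intervals[c]))

# Toy data: two classes separable on both features.
X = [[1.0, 2.0], [1.2, 2.1], [5.0, 8.0], [5.5, 7.5]]
y = ["a", "a", "b", "b"]
iv = learn_intervals(X, y)
print(classify([1.1, 2.05], iv))  # "a"
print(classify([5.2, 7.8], iv))   # "b"
```

Because each decision reduces to interval membership per feature, such a model is readable by inspection, which is one plausible source of the interpretability the abstract claims.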