Sparse ℓ1- and ℓ2-Center Classifiers

In this article, we discuss two novel sparse versions of the classical nearest-centroid classifier. The proposed sparse classifiers are based on ℓ1 and ℓ2 distance criteria, respectively, and perform simultaneous feature selection and classification by detecting the features that are most relevant for the classification task. We formally prove that training the proposed sparse models, under both distance criteria, can be performed exactly (i.e., the globally optimal set of features is selected) at linear computational cost. Specifically, the proposed sparse classifiers are trained in O(mn) + O(m log k) operations, where n is the number of samples, m is the total number of features, and k ≤ m is the number of features retained in the classifier. Furthermore, the complexity of classifying a new sample is only O(k) for both methods. The proposed models can be employed either as stand-alone sparse classifiers or as fast feature-selection techniques for prefiltering the features to be fed to other types of classifiers (e.g., SVMs). The experimental results show that the proposed methods are competitive in accuracy with state-of-the-art feature-selection and classification techniques while having a substantially lower computational cost.
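The general idea described in the abstract — fit per-class centers, keep only the k most discriminative features, and classify in the reduced subspace at O(k) cost — can be illustrated with a minimal sketch. Note this is not the paper's exact training algorithm (the feature-scoring rule below is an assumption for illustration); it only shows the overall structure of a sparse center classifier with an ℓ2 distance criterion:

```python
import numpy as np

def train_sparse_center(X, y, k):
    """Illustrative sketch (not the paper's exact method): fit per-class
    centers on all m features, then retain the k features along which the
    class centers are farthest apart (a top-k selection)."""
    classes = np.unique(y)
    centers = np.array([X[y == c].mean(axis=0) for c in classes])
    # score each feature by the spread of the class centers along it
    score = centers.max(axis=0) - centers.min(axis=0)
    support = np.sort(np.argsort(score)[-k:])  # indices of k retained features
    return classes, centers[:, support], support

def predict(model, x):
    """Classify a new sample with O(k) work: nearest center
    (l2 distance) restricted to the selected feature subspace."""
    classes, centers, support = model
    d = np.linalg.norm(centers - x[support], axis=1)
    return classes[np.argmin(d)]
```

For example, on toy data where only the first feature separates the classes, training with k = 1 recovers that feature and classification uses it alone.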

Media type:

E-article

Year of publication:

2022

Published:

2022

Contained in:

IEEE Transactions on Neural Networks and Learning Systems - 33 (2022), no. 3, 21 March, pages 996-1009

Language:

English

Contributors:

Calafiore, Giuseppe C [Author]
Fracastoro, Giulia [Author]

Links:

Full text

Subjects:

Journal Article

Notes:

Date Revised 01.03.2022

published: Print-Electronic

Citation Status PubMed-not-MEDLINE

DOI:

10.1109/TNNLS.2020.3036838

Funding institution / project title:

PPN (catalogue ID):

NLM317938991