Sparse ℓ1- and ℓ2-Center Classifiers
In this article, we discuss two novel sparse versions of the classical nearest-centroid classifier. The proposed sparse classifiers are based on ℓ1 and ℓ2 distance criteria, respectively, and perform simultaneous feature selection and classification by detecting the features that are most relevant for the classification purpose. We formally prove that the training of the proposed sparse models, with both distance criteria, can be performed exactly (i.e., the globally optimal set of features is selected) at a linear computational cost. Specifically, the proposed sparse classifiers are trained in O(mn) + O(m log k) operations, where n is the number of samples, m is the total number of features, and k ≤ m is the number of features to be retained in the classifier. Furthermore, the complexity of testing and classifying a new sample is simply O(k) for both methods. The proposed models can be employed either as stand-alone sparse classifiers or as fast feature-selection techniques for prefiltering the features to be later fed to other types of classifiers (e.g., SVMs). The experimental results show that the proposed methods are competitive in accuracy with state-of-the-art feature selection and classification techniques while having a substantially lower computational cost.
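The abstract does not reproduce the training procedure itself, but the overall scheme it describes (compute class centers, select the k most relevant features, classify a new sample using only those k features) can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's exact method: the per-feature relevance score (absolute gap between class centroids) and the function names are assumptions. Top-k selection via `np.argpartition` runs in linear time in m, in the spirit of the stated O(mn) + O(m log k) training budget, and prediction touches only the k retained features, i.e., O(k) per sample.

```python
import numpy as np

def train_sparse_center(X, y, k):
    """Sketch of a sparse l2-center classifier for binary labels in {-1, +1}.

    The relevance score used here (absolute centroid gap per feature) is a
    stand-in assumption, not the paper's exact selection criterion.
    Cost: O(mn) for the two class centroids plus linear-time top-k selection.
    """
    mu_p = X[y == 1].mean(axis=0)    # centroid of the positive class
    mu_m = X[y == -1].mean(axis=0)   # centroid of the negative class
    score = np.abs(mu_p - mu_m)      # per-feature relevance (assumption)
    idx = np.argpartition(score, -k)[-k:]  # indices of the top-k features
    return idx, mu_p[idx], mu_m[idx]

def predict(x, idx, cp, cm):
    """Assign x to the nearest retained centroid: O(k) per sample."""
    xs = x[idx]
    d_p = np.sum((xs - cp) ** 2)
    d_m = np.sum((xs - cm) ** 2)
    return 1 if d_p <= d_m else -1
```

On synthetic data where one feature separates the classes, the selected index set contains that feature and the retained-centroid rule classifies the training points correctly, which is the behavior the abstract attributes to the stand-alone sparse classifier.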
Media type: E-Article
Year of publication: 2022
Published: 2022
Contained in: Complete work - volume 33
Contained in: IEEE transactions on neural networks and learning systems - 33(2022), no. 3, 21 March, pages 996-1009
Language: English
Contributors: Calafiore, Giuseppe C [author]; Fracastoro, Giulia [author]
Notes: Date Revised 01.03.2022; published: Print-Electronic; Citation Status: PubMed-not-MEDLINE
DOI: 10.1109/TNNLS.2020.3036838
PPN (catalog ID): NLM317938991
LEADER  01000naa a22002652 4500
001     NLM317938991
003     DE-627
005     20231225164318.0
007     cr uuu---uuuuu
008     231225s2022 xx |||||o 00| ||eng c
024 7   |a 10.1109/TNNLS.2020.3036838 |2 doi
028 5 2 |a pubmed24n1059.xml
035     |a (DE-627)NLM317938991
035     |a (NLM)33226955
040     |a DE-627 |b ger |c DE-627 |e rakwb
041     |a eng
100 1   |a Calafiore, Giuseppe C |e verfasserin |4 aut
245 1 0 |a Sparse ℓ1- and ℓ2-Center Classifiers
264 1   |c 2022
336     |a Text |b txt |2 rdacontent
337     |a Computermedien |b c |2 rdamedia
338     |a Online-Ressource |b cr |2 rdacarrier
500     |a Date Revised 01.03.2022
500     |a published: Print-Electronic
500     |a Citation Status PubMed-not-MEDLINE
520     |a In this article, we discuss two novel sparse versions of the classical nearest-centroid classifier. The proposed sparse classifiers are based on ℓ1 and ℓ2 distance criteria, respectively, and perform simultaneous feature selection and classification by detecting the features that are most relevant for the classification purpose. We formally prove that the training of the proposed sparse models, with both distance criteria, can be performed exactly (i.e., the globally optimal set of features is selected) at a linear computational cost. Specifically, the proposed sparse classifiers are trained in O(mn) + O(m log k) operations, where n is the number of samples, m is the total number of features, and k ≤ m is the number of features to be retained in the classifier. Furthermore, the complexity of testing and classifying a new sample is simply O(k) for both methods. The proposed models can be employed either as stand-alone sparse classifiers or as fast feature-selection techniques for prefiltering the features to be later fed to other types of classifiers (e.g., SVMs). The experimental results show that the proposed methods are competitive in accuracy with state-of-the-art feature selection and classification techniques while having a substantially lower computational cost
650 4   |a Journal Article
700 1   |a Fracastoro, Giulia |e verfasserin |4 aut
773 0 8 |i Enthalten in |t IEEE transactions on neural networks and learning systems |d 2012 |g 33(2022), 3 vom: 21. März, Seite 996-1009 |w (DE-627)NLM23236897X |x 2162-2388 |7 nnns
773 1 8 |g volume:33 |g year:2022 |g number:3 |g day:21 |g month:03 |g pages:996-1009
856 4 0 |u http://dx.doi.org/10.1109/TNNLS.2020.3036838 |3 Volltext
912     |a GBV_USEFLAG_A
912     |a GBV_NLM
951     |a AR
952     |d 33 |j 2022 |e 3 |b 21 |c 03 |h 996-1009