Pruning support vector machines without altering performances
Support vector machines (SV machines, SVMs) have many merits that distinguish them from many other machine-learning algorithms, such as the nonexistence of local minima, the maximal distance from the separating hyperplane to the SVs, and a solid theoretical foundation. However, SVM training algorithms such as the efficient sequential minimal optimization (SMO) often produce many SVs. Some scholars have found that the kernel outputs are frequently of similar levels, which suggests redundancy among the SVs. By analyzing the overlapping information in the kernel outputs, a succinct separating-hyperplane-preserving method for pruning the dispensable SVs, based on crosswise propagation (CP), is systematically developed. The method also circumvents the problem of explicitly discerning SVs in feature space, as the SVM formulation does. Experiments with the well-known SMO-based software LibSVM reveal that all typical kernels with different parameters on the data sets yield dispensable SVs. About 1%-9% (in some scenarios, more than 50%) of the SVs are found to be dispensable. Furthermore, the experimental results verify that the pruning method does not alter the SVMs' performance at all. As a corollary, this paper also contributes, in theory, a new lower upper bound on the number of SVs in the high-dimensional feature space.
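The abstract does not spell out the CP algorithm itself, but the core idea it rests on can be sketched: if the kernel column of one SV is an exact linear combination of the others (e.g., a duplicated SV), its coefficient can be redistributed so the decision function is unchanged. The following is a minimal illustrative sketch, not the paper's method; `prune_dependent_sv`, `rbf_kernel`, and the tolerance are all hypothetical names and choices introduced here for illustration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def prune_dependent_sv(sv, alpha, kernel, tol=1e-10):
    """Try to drop one SV whose kernel column (over the SV set) lies in
    the span of the remaining columns, redistributing its coefficient so
    the decision function f(x) = sum_i alpha_i k(x, sv_i) is preserved.

    Note: exact preservation for ALL inputs requires the dependence to
    hold in feature space (trivially true for duplicated SVs); the
    Gram-matrix check below is only a finite-sample proxy for that.
    """
    K = kernel(sv, sv)
    n = len(sv)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        # Least-squares fit of column i by the remaining columns.
        c, *_ = np.linalg.lstsq(K[:, rest], K[:, i], rcond=None)
        if np.linalg.norm(K[:, rest] @ c - K[:, i]) < tol:
            # Fold alpha_i into the surviving coefficients.
            return sv[rest], alpha[rest] + alpha[i] * c, i
    return sv, alpha, None  # no dispensable SV found
```

With a duplicated support vector, the pruned model evaluates to the same decision values as the original on any test points, which mirrors the paper's claim that pruning need not alter performance.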
Media type: E-Article
Publication year: 2008
Published: 2008
Contained in: To the complete record - volume:19
Contained in: IEEE transactions on neural networks - 19(2008), 10, 15 Oct., pages 1792-803
Language: English
Contributors: Liang, Xun [author]
Links: -
Subjects: -
Notes: Date Completed 29.12.2008; Date Revised 20.10.2016; published: Print; Citation Status MEDLINE
doi: 10.1109/TNN.2008.2002696
Funding: -
Funding institution / project title: -
PPN (catalog ID): NLM182907864
LEADER | 01000naa a22002652 4500 | ||
---|---|---|---|
001 | NLM182907864 | ||
003 | DE-627 | ||
005 | 20231223163651.0 | ||
007 | cr uuu---uuuuu | ||
008 | 231223s2008 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1109/TNN.2008.2002696 |2 doi | |
028 | 5 | 2 | |a pubmed24n0610.xml |
035 | |a (DE-627)NLM182907864 | ||
035 | |a (NLM)18842482 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a Liang, Xun |e verfasserin |4 aut | |
245 | 1 | 0 | |a Pruning support vector machines without altering performances |
264 | 1 | |c 2008 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a Date Completed 29.12.2008 | ||
500 | |a Date Revised 20.10.2016 | ||
500 | |a published: Print | ||
500 | |a Citation Status MEDLINE | ||
520 | |a Support vector machines (SV machines, SVMs) have many merits that distinguish them from many other machine-learning algorithms, such as the nonexistence of local minima, the maximal distance from the separating hyperplane to the SVs, and a solid theoretical foundation. However, SVM training algorithms such as the efficient sequential minimal optimization (SMO) often produce many SVs. Some scholars have found that the kernel outputs are frequently of similar levels, which suggests redundancy among the SVs. By analyzing the overlapping information in the kernel outputs, a succinct separating-hyperplane-preserving method for pruning the dispensable SVs, based on crosswise propagation (CP), is systematically developed. The method also circumvents the problem of explicitly discerning SVs in feature space, as the SVM formulation does. Experiments with the well-known SMO-based software LibSVM reveal that all typical kernels with different parameters on the data sets yield dispensable SVs. About 1%-9% (in some scenarios, more than 50%) of the SVs are found to be dispensable. Furthermore, the experimental results verify that the pruning method does not alter the SVMs' performance at all. As a corollary, this paper also contributes, in theory, a new lower upper bound on the number of SVs in the high-dimensional feature space | ||
650 | 4 | |a Journal Article | |
650 | 4 | |a Research Support, Non-U.S. Gov't | |
700 | 1 | |a Chen, Rong-Chang |e verfasserin |4 aut | |
700 | 1 | |a Guo, Xinyu |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t IEEE transactions on neural networks |d 1990 |g 19(2008), 10 vom: 15. Okt., Seite 1792-803 |w (DE-627)NLM150586590 |x 1045-9227 |7 nnns |
773 | 1 | 8 | |g volume:19 |g year:2008 |g number:10 |g day:15 |g month:10 |g pages:1792-803 |
856 | 4 | 0 | |u http://dx.doi.org/10.1109/TNN.2008.2002696 |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a GBV_NLM | ||
951 | |a AR | ||
952 | |d 19 |j 2008 |e 10 |b 15 |c 10 |h 1792-803 |