Pruning support vector machines without altering performances

Support vector machines (SV machines, SVMs) have many merits that distinguish them from many other machine-learning algorithms, such as the absence of local minima, the maximization of the margin between the separating hyperplane and the SVs, and a solid theoretical foundation. However, SVM training algorithms such as the efficient sequential minimal optimization (SMO) often produce many SVs. Some scholars have found that the kernel outputs are frequently at similar levels, which suggests that some SVs are redundant. By analyzing the overlapping information in the kernel outputs, a succinct method based on crosswise propagation (CP) is systematically developed for pruning the dispensable SVs while preserving the separating hyperplane. The method also circumvents the problem of explicitly discerning SVs in feature space that the SVM formulation entails. Experiments with the well-known SMO-based software LibSVM reveal that all typical kernels, with different parameters and across the data sets, yield dispensable SVs: roughly 1%-9% of the SVs (in some scenarios, more than 50%) are found to be dispensable. Furthermore, the experimental results verify that the pruning method does not alter the SVMs' performance at all. As a corollary, this paper also contributes in theory a new, tighter upper bound on the number of SVs in the high-dimensional feature space.
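
The pruning principle the abstract appeals to can be illustrated in a few lines. The sketch below is not the authors' crosswise-propagation (CP) method; it is a minimal, assumption-laden example of the general idea that an SV whose feature-space image is a linear combination of the other SVs' images can be removed after folding its coefficient into theirs, leaving the decision function f(x) = sum_i beta_i K(x_i, x) + b unchanged. The RBF kernel and all names (rbf_kernel, prune_dependent_svs, tol) are illustrative choices, not taken from the paper.

```python
# Hedged sketch: linear-dependence-based SV pruning (not the paper's CP algorithm).
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)."""
    sq = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

def prune_dependent_svs(SV, beta, gamma=0.5, tol=1e-12):
    """SV: (n, d) support vectors; beta: (n,) coefficients alpha_i * y_i.

    Returns a reduced SV set and adjusted coefficients that give the same
    decision values f(x) = sum_i beta_i * K(x_i, x)."""
    SV, beta = SV.copy(), beta.astype(float).copy()
    j = 0
    while j < len(beta):
        rest = np.delete(np.arange(len(beta)), j)
        K_rr = rbf_kernel(SV[rest], SV[rest], gamma)           # Gram matrix of the others
        k_rj = rbf_kernel(SV[rest], SV[j:j + 1], gamma).ravel()
        # Best least-squares expansion phi(x_j) ~ sum_i c_i * phi(x_i).
        c, *_ = np.linalg.lstsq(K_rr, k_rj, rcond=None)
        # Squared residual of that expansion in feature space.
        resid = (rbf_kernel(SV[j:j + 1], SV[j:j + 1], gamma)[0, 0]
                 - 2.0 * c @ k_rj + c @ K_rr @ c)
        if resid < tol:                           # x_j is (numerically) dependent: prune it
            b_j = beta[j]
            beta = np.delete(beta, j) + b_j * c   # fold its weight into the others
            SV = np.delete(SV, j, axis=0)
        else:
            j += 1
    return SV, beta

# Toy check: duplicated points are exactly dependent, so they get pruned,
# yet decision values on test points are unchanged up to round-off.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
X = np.vstack([X, X[:5]])                         # 5 duplicates -> 5 dispensable "SVs"
beta = rng.normal(size=len(X))
X_test = rng.normal(size=(8, 5))
f_before = rbf_kernel(X_test, X) @ beta
SV_p, beta_p = prune_dependent_svs(X, beta)
f_after = rbf_kernel(X_test, SV_p) @ beta_p
assert len(SV_p) < len(X)
assert np.allclose(f_before, f_after, atol=1e-6)
```

The toy check duplicates a few "support vectors" so that exact dependencies are guaranteed to exist; the pruned model returns the same decision values up to numerical round-off, mirroring the paper's claim that dispensable SVs can be removed without altering performance.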

Media type:

E-article

Year of publication:

2008

Published:

2008

Contained in:

IEEE Transactions on Neural Networks - 19(2008), issue 10, dated 15 Oct., pages 1792-1803

Language:

English

Contributors:

Liang, Xun [Author]
Chen, Rong-Chang [Author]
Guo, Xinyu [Author]

Links:

Full text

Subjects:

Journal Article
Research Support, Non-U.S. Gov't

Notes:

Date Completed 29.12.2008

Date Revised 20.10.2016

published: Print

Citation Status MEDLINE

doi:

10.1109/TNN.2008.2002696

Funding institution / project title:

PPN (catalog ID):

NLM182907864