Playing the Lottery With Concave Regularizers for Sparse Trainable Neural Networks
The design of sparse neural networks, i.e., networks with a reduced number of parameters, has attracted increasing research attention in recent years. The use of sparse models can significantly reduce the computational and storage footprint of the inference phase. In this context, the lottery ticket hypothesis (LTH) constitutes a breakthrough result that addresses not only the performance of the inference phase but also that of the training phase. It states that it is possible to extract effective sparse subnetworks, called winning tickets, that can be trained in isolation. The development of effective methods to play the lottery, i.e., to find winning tickets, is still an open problem. In this article, we propose a novel class of methods to play the lottery. The key point is the use of concave regularization to promote the sparsity of a relaxed binary mask, which represents the network topology. We theoretically analyze the effectiveness of the proposed method in the convex framework. Then, we present extensive numerical tests on various datasets and architectures, which show that the proposed method can improve the performance of state-of-the-art algorithms.
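The abstract above describes promoting sparsity of a relaxed binary mask through a concave regularizer. As a minimal, purely illustrative sketch (not the authors' algorithm or code; the log-type penalty, `eps`, and all names here are assumptions), a concave penalty such as log(1 + |m|/eps) penalizes small nonzero mask entries far more steeply per unit than an l1 penalty would, so a gradient step drives them exactly to zero while leaving large entries nearly untouched:

```python
import math

def concave_reg(m, eps=0.1):
    # Log-penalty: concave in |m|; a closer surrogate of the l0 norm than l1.
    return sum(math.log(1.0 + abs(v) / eps) for v in m)

def concave_reg_grad(m, eps=0.1):
    # Gradient w.r.t. each mask entry (subgradient at 0 taken as 0).
    # Magnitude 1/(eps + |v|) is large for small |v|: small entries shrink fastest.
    return [math.copysign(1.0 / (eps + abs(v)), v) if v != 0 else 0.0 for v in m]

# Relaxed binary mask: values in [0, 1] indicating which weights to keep.
mask = [0.9, 0.05, 0.5, 0.01]

# One projected-gradient step on the regularizer alone (illustrative):
lr = 0.05
step = [min(1.0, max(0.0, v - lr * g)) for v, g in zip(mask, concave_reg_grad(mask))]
# Small entries (0.05, 0.01) are clipped to exactly 0; large ones barely move.
```

In a full method, this penalty term would be added to the training loss of the masked network; here only the sparsity-promoting behavior of the concave term is shown.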
Media type: E-article
Year of publication: 2024
Published: 2024
Contained in: IEEE transactions on neural networks and learning systems - PP(2024), 13 March
Language: English
Contributors: Fracastoro, Giulia [author]; Fosson, Sophie M [author]; Migliorati, Andrea [author]; Calafiore, Giuseppe C [author]
Notes: Date Revised 13.03.2024; published: Print-Electronic; Citation Status: Publisher
DOI: 10.1109/TNNLS.2024.3373609
PPN (catalogue ID): NLM369681614
LEADER 01000naa a22002652 4500
001    NLM369681614
003    DE-627
005    20240315000117.0
007    cr uuu---uuuuu
008    240315s2024 xx |||||o 00| ||eng c
024 7  |a 10.1109/TNNLS.2024.3373609 |2 doi
028 52 |a pubmed24n1329.xml
035    |a (DE-627)NLM369681614
035    |a (NLM)38478446
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Fracastoro, Giulia |e verfasserin |4 aut
245 10 |a Playing the Lottery With Concave Regularizers for Sparse Trainable Neural Networks
264  1 |c 2024
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Revised 13.03.2024
500    |a published: Print-Electronic
500    |a Citation Status Publisher
520    |a The design of sparse neural networks, i.e., of networks with a reduced number of parameters, has been attracting increasing research attention in the last few years. The use of sparse models may significantly reduce the computational and storage footprint in the inference phase. In this context, the lottery ticket hypothesis (LTH) constitutes a breakthrough result, that addresses not only the performance of the inference phase, but also of the training phase. It states that it is possible to extract effective sparse subnetworks, called winning tickets, that can be trained in isolation. The development of effective methods to play the lottery, i.e., to find winning tickets, is still an open problem. In this article, we propose a novel class of methods to play the lottery. The key point is the use of concave regularization to promote the sparsity of a relaxed binary mask, which represents the network topology. We theoretically analyze the effectiveness of the proposed method in the convex framework. Then, we propose extended numerical tests on various datasets and architectures, that show that the proposed method can improve the performance of state-of-the-art algorithms
650  4 |a Journal Article
700 1  |a Fosson, Sophie M |e verfasserin |4 aut
700 1  |a Migliorati, Andrea |e verfasserin |4 aut
700 1  |a Calafiore, Giuseppe C |e verfasserin |4 aut
773 08 |i Enthalten in |t IEEE transactions on neural networks and learning systems |d 2012 |g PP(2024) vom: 13. März |w (DE-627)NLM23236897X |x 2162-2388 |7 nnns
773 18 |g volume:PP |g year:2024 |g day:13 |g month:03
856 40 |u http://dx.doi.org/10.1109/TNNLS.2024.3373609 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a GBV_NLM
951    |a AR
952    |d PP |j 2024 |b 13 |c 03