Toward Scalable, Efficient, and Accurate Deep Spiking Neural Networks With Backward Residual Connections, Stochastic Softmax, and Hybridization
Copyright © 2020 Panda, Aketi and Roy.
Spiking Neural Networks (SNNs) may offer an energy-efficient alternative for implementing deep learning applications. In recent years, several proposals have focused on supervised (conversion, spike-based gradient descent) and unsupervised (spike-timing-dependent plasticity) training methods to improve the accuracy of SNNs on large-scale tasks. However, each of these methods suffers from scalability, latency, and accuracy limitations. In this paper, we propose novel algorithmic techniques that modify the SNN configuration with backward residual connections, stochastic softmax, and hybrid artificial-and-spiking neuronal activations to improve the learning ability of the training methodologies, yielding competitive accuracy alongside large efficiency gains over their artificial counterparts. Note that artificial counterparts refer to conventional deep learning/artificial neural networks. Our techniques apply to VGG/Residual architectures and are compatible with all forms of training methodologies. Our analysis reveals that the proposed solutions achieve near state-of-the-art accuracy with significant energy efficiency and reduced parameter overhead, translating to hardware improvements on complex visual recognition tasks such as the CIFAR-10 and ImageNet datasets.
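The abstract refers to spiking neuronal activations, which SNNs such as those in this article build on; the leaky integrate-and-fire (LIF) model is the standard such activation. The sketch below is a minimal, generic illustration of a LIF neuron, not the authors' implementation; the `leak` and `threshold` values are assumed for illustration:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch. The membrane
# potential leaks over time, integrates input current, and emits a
# binary spike (followed by a hard reset) on crossing the threshold.
# Parameter values are illustrative, not taken from the paper.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the binary spike train: 1 when the membrane potential
    reaches `threshold` (then resets to 0), else 0.
    """
    v = 0.0          # membrane potential
    spikes = []
    for i in inputs:
        v = leak * v + i          # leaky integration of input current
        if v >= threshold:        # threshold crossing -> emit a spike
            spikes.append(1)
            v = 0.0               # hard reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 0.8]))  # -> [0, 0, 1, 0, 0]
```

Because outputs are binary events rather than continuous activations, downstream computation reduces to sparse accumulations, which is the source of the energy-efficiency argument the abstract makes.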
Media type: E-Article
Year of publication: 2020
Published: 2020
Contained in: To the complete record - volume:14
Contained in: Frontiers in neuroscience - 14(2020) from: 21., page 653
Language: English
Contributors: Panda, Priyadarshini [author]
Subjects: Backward residual connection
Notes: Date Revised 03.11.2023; published: Electronic-eCollection; Citation Status PubMed-not-MEDLINE
DOI: 10.3389/fnins.2020.00653
PPN (catalog ID): NLM312711670
LEADER 01000naa a22002652 4500
001 NLM312711670
003 DE-627
005 20231225145014.0
007 cr uuu---uuuuu
008 231225s2020 xx |||||o 00| ||eng c
024 7_ |a 10.3389/fnins.2020.00653 |2 doi
028 52 |a pubmed24n1042.xml
035 __ |a (DE-627)NLM312711670
035 __ |a (NLM)32694977
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
100 1_ |a Panda, Priyadarshini |e verfasserin |4 aut
245 10 |a Toward Scalable, Efficient, and Accurate Deep Spiking Neural Networks With Backward Residual Connections, Stochastic Softmax, and Hybridization
264 _1 |c 2020
336 __ |a Text |b txt |2 rdacontent
337 __ |a Computermedien |b c |2 rdamedia
338 __ |a Online-Ressource |b cr |2 rdacarrier
500 __ |a Date Revised 03.11.2023
500 __ |a published: Electronic-eCollection
500 __ |a Citation Status PubMed-not-MEDLINE
520 __ |a Copyright © 2020 Panda, Aketi and Roy.
520 __ |a Spiking Neural Networks (SNNs) may offer an energy-efficient alternative for implementing deep learning applications. In recent years, several proposals have focused on supervised (conversion, spike-based gradient descent) and unsupervised (spike-timing-dependent plasticity) training methods to improve the accuracy of SNNs on large-scale tasks. However, each of these methods suffers from scalability, latency, and accuracy limitations. In this paper, we propose novel algorithmic techniques that modify the SNN configuration with backward residual connections, stochastic softmax, and hybrid artificial-and-spiking neuronal activations to improve the learning ability of the training methodologies, yielding competitive accuracy alongside large efficiency gains over their artificial counterparts. Note that artificial counterparts refer to conventional deep learning/artificial neural networks. Our techniques apply to VGG/Residual architectures and are compatible with all forms of training methodologies. Our analysis reveals that the proposed solutions achieve near state-of-the-art accuracy with significant energy efficiency and reduced parameter overhead, translating to hardware improvements on complex visual recognition tasks such as the CIFAR-10 and ImageNet datasets
650 _4 |a Journal Article
650 _4 |a backward residual connection
650 _4 |a energy-efficiency
650 _4 |a hybridization
650 _4 |a improved accuracy
650 _4 |a spiking neural networks
650 _4 |a stochastic softmax
700 1_ |a Aketi, Sai Aparna |e verfasserin |4 aut
700 1_ |a Roy, Kaushik |e verfasserin |4 aut
773 08 |i Enthalten in |t Frontiers in neuroscience |d 2007 |g 14(2020) vom: 21., Seite 653 |w (DE-627)NLM184194563 |x 1662-4548 |7 nnns
773 18 |g volume:14 |g year:2020 |g day:21 |g pages:653
856 40 |u http://dx.doi.org/10.3389/fnins.2020.00653 |3 Volltext
912 __ |a GBV_USEFLAG_A
912 __ |a GBV_NLM
951 __ |a AR
952 __ |d 14 |j 2020 |b 21 |h 653