ETA: An Efficient Training Accelerator for DNNs Based on Hardware-Algorithm Co-Optimization
Recently, efficient training of deep neural networks (DNNs) on resource-constrained platforms has attracted increasing attention as a means of protecting user privacy. However, it remains a severe challenge, since DNN training involves intensive computation and a large amount of data access. To address these issues, we implement an efficient training accelerator (ETA) on a field-programmable gate array (FPGA) by adopting a hardware-algorithm co-optimization approach. A novel training scheme is proposed to effectively train DNNs with 8-bit precision and arbitrary batch sizes; it introduces a compact but powerful data format and a hardware-oriented normalization layer, which significantly reduce the computational complexity and the number of memory accesses. In the ETA, a reconfigurable processing element (PE) is designed to support the various computational patterns that arise during training while avoiding the redundant calculations introduced by non-unit-stride convolutional layers. With a flexible network-on-chip (NoC) and a hierarchical PE array, computational parallelism and data reuse are fully exploited, and memory accesses are further reduced. In addition, a unified computing core executes auxiliary layers such as normalization and weight update (WU) in a time-multiplexed manner while consuming only a small amount of hardware resources. Experiments show that our training scheme achieves state-of-the-art accuracy across multiple models, including CIFAR-VGG16, CIFAR-ResNet20, CIFAR-InceptionV3, ResNet18, and ResNet50. Evaluated on three networks (CIFAR-VGG16, CIFAR-ResNet20, and ResNet18), the ETA on a Xilinx VC709 FPGA achieves throughputs of 610.98, 658.64, and 811.24 GOPS, respectively. Compared with the prior art, our design demonstrates a 3.65× speedup and an 8.54× energy-efficiency improvement on CIFAR-ResNet20.
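The abstract names an 8-bit data format and a hardware-oriented normalization layer but, being an abstract, does not define them. The minimal sketch below illustrates the general idea behind such schemes, assuming a generic symmetric per-tensor INT8 quantizer and an L1-statistic normalization that replaces the variance and square root of standard batch normalization with absolute values (a common hardware-friendly substitution). The function names `quantize_int8` and `l1_batch_norm` and the L1 statistic are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def quantize_int8(x: np.ndarray, eps: float = 1e-8):
    """Symmetric per-tensor INT8 quantization (generic stand-in; the
    paper's compact 8-bit data format is not specified in the abstract)."""
    scale = float(np.max(np.abs(x))) / 127.0 + eps  # one step of the int8 grid
    q = np.clip(np.rint(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 codes back to float32, e.g., for error measurement."""
    return q.astype(np.float32) * scale

def l1_batch_norm(x: np.ndarray, gamma: np.ndarray, beta: np.ndarray,
                  eps: float = 1e-5) -> np.ndarray:
    """Batch normalization with an L1 spread statistic: the mean absolute
    deviation scaled by sqrt(pi/2) estimates the standard deviation for
    Gaussian activations, avoiding squares and square roots in hardware.
    Only a plausible example of a 'hardware-oriented normalization layer'."""
    mu = x.mean(axis=0)
    s = np.sqrt(np.pi / 2.0) * np.abs(x - mu).mean(axis=0)
    return gamma * (x - mu) / (s + eps) + beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((64, 16)).astype(np.float32)
    q, s = quantize_int8(x)
    # Rounding error is bounded by half a quantization step.
    print("max |x - deq(q)|:", float(np.abs(x - dequantize(q, s)).max()),
          "<= step/2 =", s / 2)
    y = l1_batch_norm(x, np.ones(16, np.float32), np.zeros(16, np.float32))
    print("per-feature mean after norm ~ 0:",
          bool(np.allclose(y.mean(axis=0), 0.0, atol=1e-4)))
```

The sketch only captures the numerics; the hardware mapping described in the abstract (a unified computing core executing normalization and weight update in a time-multiplexed fashion) is not modeled here.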
| Field | Value |
|---|---|
| Media type | E-article |
| Year of publication | 2023 |
| Published | 2023 |
| Contained in | To the complete record - volume:34 |
| Contained in | IEEE transactions on neural networks and learning systems - 34(2023), 10, 03 Oct., pages 7660-7674 |
| Language | English |
| Contributors | Lu, Jinming [author]; Ni, Chao [author]; Wang, Zhongfeng [author] |
| Links | http://dx.doi.org/10.1109/TNNLS.2022.3145850 (full text) |
| Subjects | |
| Notes | Date Revised 11.10.2023; published: Print-Electronic; Citation Status PubMed-not-MEDLINE |
| DOI | 10.1109/TNNLS.2022.3145850 |
| Funding institution / project title | |
| PPN (catalog ID) | NLM336663366 |
MARC 21 record:

```
LEADER  01000naa a22002652 4500
001     NLM336663366
003     DE-627
005     20231225232644.0
007     cr uuu---uuuuu
008     231225s2023 xx |||||o 00| ||eng c
024 7_  |a 10.1109/TNNLS.2022.3145850 |2 doi
028 52  |a pubmed24n1122.xml
035 __  |a (DE-627)NLM336663366
035 __  |a (NLM)35133969
040 __  |a DE-627 |b ger |c DE-627 |e rakwb
041 __  |a eng
100 1_  |a Lu, Jinming |e verfasserin |4 aut
245 10  |a ETA |b An Efficient Training Accelerator for DNNs Based on Hardware-Algorithm Co-Optimization
264 _1  |c 2023
336 __  |a Text |b txt |2 rdacontent
337 __  |a Computermedien |b c |2 rdamedia
338 __  |a Online-Ressource |b cr |2 rdacarrier
500 __  |a Date Revised 11.10.2023
500 __  |a published: Print-Electronic
500 __  |a Citation Status PubMed-not-MEDLINE
520 __  |a [abstract; identical to the abstract above]
650 _4  |a Journal Article
700 1_  |a Ni, Chao |e verfasserin |4 aut
700 1_  |a Wang, Zhongfeng |e verfasserin |4 aut
773 08  |i Enthalten in |t IEEE transactions on neural networks and learning systems |d 2012 |g 34(2023), 10 vom: 03. Okt., Seite 7660-7674 |w (DE-627)NLM23236897X |x 2162-2388 |7 nnns
773 18  |g volume:34 |g year:2023 |g number:10 |g day:03 |g month:10 |g pages:7660-7674
856 40  |u http://dx.doi.org/10.1109/TNNLS.2022.3145850 |3 Volltext
912 __  |a GBV_USEFLAG_A
912 __  |a GBV_NLM
951 __  |a AR
952 __  |d 34 |j 2023 |e 10 |b 03 |c 10 |h 7660-7674
```