Machine Learning on Mainstream Microcontrollers
This paper presents the Edge Learning Machine (ELM), a machine learning framework for edge devices that manages the training phase on a desktop computer and performs inference on microcontrollers. The framework implements, in platform-independent C, three supervised machine learning algorithms (Support Vector Machine (SVM) with a linear kernel, k-Nearest Neighbors (k-NN), and Decision Tree (DT)), and exploits STM X-Cube-AI to implement Artificial Neural Networks (ANNs) on STM32 Nucleo boards. We investigated the performance of these algorithms on six embedded boards and six datasets (four classification and two regression tasks). Our analysis, which aims to fill a gap in the literature, shows that the target platforms achieve the same performance score as a desktop machine, with similar time latency. ANN performs better than the other algorithms in most cases, with no difference among the target devices. We observed that increasing the depth of an ANN improves performance, up to a saturation level. k-NN performs similarly to ANN and, in one case, even better, but requires the whole training set to be kept available during the inference phase, posing a significant memory demand that only high-end edge devices can afford. DT performance shows a larger variance across datasets. In general, several factors impact performance in different ways across datasets, which highlights the importance of a framework like ELM that can train and compare different algorithms. To support the developer community, ELM is released on an open-source basis.
Media type: E-Article
Year of publication: 2020
Published: 2020
Contained in: See complete work - volume:20
Contained in: Sensors (Basel, Switzerland) - 20(2020), 9, 05 May
Language: English
Contributors: Sakr, Fouad [author]
Topics: ANN
Notes: Date Completed 13.05.2020; Date Revised 11.06.2020; published: Electronic; Citation Status PubMed-not-MEDLINE
DOI: 10.3390/s20092638
PPN (catalog ID): NLM309640431
LEADER | 01000naa a22002652 4500 | ||
---|---|---|---|
001 | NLM309640431 | ||
003 | DE-627 | ||
005 | 20231225134348.0 | ||
007 | cr uuu---uuuuu | ||
008 | 231225s2020 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.3390/s20092638 |2 doi | |
028 | 5 | 2 | |a pubmed24n1032.xml |
035 | |a (DE-627)NLM309640431 | ||
035 | |a (NLM)32380766 | ||
035 | |a (PII)E2638 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a Sakr, Fouad |e verfasserin |4 aut | |
245 | 1 | 0 | |a Machine Learning on Mainstream Microcontrollers |
264 | 1 | |c 2020 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a Date Completed 13.05.2020 | ||
500 | |a Date Revised 11.06.2020 | ||
500 | |a published: Electronic | ||
500 | |a Citation Status PubMed-not-MEDLINE | ||
520 | |a This paper presents the Edge Learning Machine (ELM), a machine learning framework for edge devices that manages the training phase on a desktop computer and performs inference on microcontrollers. The framework implements, in platform-independent C, three supervised machine learning algorithms (Support Vector Machine (SVM) with a linear kernel, k-Nearest Neighbors (k-NN), and Decision Tree (DT)), and exploits STM X-Cube-AI to implement Artificial Neural Networks (ANNs) on STM32 Nucleo boards. We investigated the performance of these algorithms on six embedded boards and six datasets (four classification and two regression tasks). Our analysis, which aims to fill a gap in the literature, shows that the target platforms achieve the same performance score as a desktop machine, with similar time latency. ANN performs better than the other algorithms in most cases, with no difference among the target devices. We observed that increasing the depth of an ANN improves performance, up to a saturation level. k-NN performs similarly to ANN and, in one case, even better, but requires the whole training set to be kept available during the inference phase, posing a significant memory demand that only high-end edge devices can afford. DT performance shows a larger variance across datasets. In general, several factors impact performance in different ways across datasets, which highlights the importance of a framework like ELM that can train and compare different algorithms. To support the developer community, ELM is released on an open-source basis | ||
650 | 4 | |a Journal Article | |
650 | 4 | |a ANN | |
650 | 4 | |a ARM | |
650 | 4 | |a STM32 Nucleo | |
650 | 4 | |a SVM | |
650 | 4 | |a X-Cube-AI | |
650 | 4 | |a decision trees | |
650 | 4 | |a edge analytics | |
650 | 4 | |a edge computing | |
650 | 4 | |a embedded devices | |
650 | 4 | |a k-NN | |
650 | 4 | |a machine learning | |
700 | 1 | |a Bellotti, Francesco |e verfasserin |4 aut | |
700 | 1 | |a Berta, Riccardo |e verfasserin |4 aut | |
700 | 1 | |a De Gloria, Alessandro |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Sensors (Basel, Switzerland) |d 2007 |g 20(2020), 9 vom: 05. Mai |w (DE-627)NLM187985170 |x 1424-8220 |7 nnns |
773 | 1 | 8 | |g volume:20 |g year:2020 |g number:9 |g day:05 |g month:05 |
856 | 4 | 0 | |u http://dx.doi.org/10.3390/s20092638 |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a GBV_NLM | ||
951 | |a AR | ||
952 | |d 20 |j 2020 |e 9 |b 05 |c 05 |