Distributed learning: a reliable privacy-preserving strategy to change multicenter collaborations using AI
© 2021. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
PURPOSE: The present scoping review aims to assess whether distributed learning is non-inferior to centrally and locally trained machine learning (ML) models in medical applications.
METHODS: We performed a literature search using the term "distributed learning" OR "federated learning" in the PubMed/MEDLINE and EMBASE databases. No start date limit was used, and the search was extended until July 21, 2020. We excluded articles outside the field of interest; guidelines or expert opinion, review articles and meta-analyses, editorials, letters or commentaries, and conference abstracts; articles not in the English language; and studies not using medical data. Selected studies were classified and analysed according to their aim(s).
RESULTS: We included 26 papers aimed at predicting one or more outcomes: namely risk, diagnosis, prognosis, and treatment side effect/adverse drug reaction. Distributed learning was compared to centralized or localized training in 21/26 and 14/26 selected papers, respectively. Regardless of the aim, the type of input, the method, and the classifier, distributed learning performed close to centralized training, except in two experiments focused on diagnosis. In all but two cases, distributed learning outperformed locally trained models.
CONCLUSION: Distributed learning proved to be a reliable strategy for model development; indeed, it performed on par with models trained on centralized datasets. Sensitive data remain protected, since they are not shared for model development. Distributed learning constitutes a promising solution for ML-based research and practice, since large, diverse datasets are crucial for success.
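The privacy mechanism the abstract describes, in which each site trains on its own data and only model parameters (never patient records) leave the site, can be sketched as a generic federated-averaging loop. This is an illustrative sketch only, not the method of any paper covered by the review; the function names and the synthetic three-site data are hypothetical.

```python
# Minimal federated-averaging sketch: each site runs gradient descent
# on its private data; a coordinating server averages the resulting
# model weights, weighted by each site's sample count.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's training round: gradient descent for a
    logistic-regression model, using only that site's data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient step
    return w

def federated_average(site_weights, site_sizes):
    """Server step: average the site models, weighted by sample count."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Three hypothetical hospitals holding private datasets of different sizes
sites = [(rng.normal(size=(n, 3)), rng.integers(0, 2, size=n))
         for n in (40, 60, 100)]

global_w = np.zeros(3)
for _ in range(10):                            # communication rounds
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])

# Only global_w (one small weight vector) was ever exchanged; the raw
# patient-level arrays X and y never left their sites.
print(global_w.shape)
```

The weighting by sample count mirrors the standard FedAvg design choice: larger sites contribute proportionally more to the shared model, approximating what centralized training on the pooled data would learn.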
Media type: E-Article
Year of publication: 2021
Published: 2021
Contained in: European journal of nuclear medicine and molecular imaging - 48(2021), issue 12, 08 Nov., pages 3791-3804
Language: English
Contributors: Kirienko, Margarita [author]
Subjects: Clinical trial
Notes: Date Completed 19.10.2021; Date Revised 15.03.2022; published: Print-Electronic; Citation Status MEDLINE
DOI: 10.1007/s00259-021-05339-7
PPN (catalogue ID): NLM324027532
LEADER 01000naa a22002652 4500
001 NLM324027532
003 DE-627
005 20231225185437.0
007 cr uuu---uuuuu
008 231225s2021 xx |||||o 00| ||eng c
024 7 |a 10.1007/s00259-021-05339-7 |2 doi
028 5 2 |a pubmed24n1080.xml
035 |a (DE-627)NLM324027532
035 |a (NLM)33847779
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
100 1 |a Kirienko, Margarita |e verfasserin |4 aut
245 1 0 |a Distributed learning |b a reliable privacy-preserving strategy to change multicenter collaborations using AI
264 1 |c 2021
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
500 |a Date Completed 19.10.2021
500 |a Date Revised 15.03.2022
500 |a published: Print-Electronic
500 |a Citation Status MEDLINE
650 4 |a Journal Article
650 4 |a Review
650 4 |a Clinical trial
650 4 |a Distributed learning
650 4 |a Ethics
650 4 |a Federated learning
650 4 |a Machine learning
650 4 |a Privacy
700 1 |a Sollini, Martina |e verfasserin |4 aut
700 1 |a Ninatti, Gaia |e verfasserin |4 aut
700 1 |a Loiacono, Daniele |e verfasserin |4 aut
700 1 |a Giacomello, Edoardo |e verfasserin |4 aut
700 1 |a Gozzi, Noemi |e verfasserin |4 aut
700 1 |a Amigoni, Francesco |e verfasserin |4 aut
700 1 |a Mainardi, Luca |e verfasserin |4 aut
700 1 |a Lanzi, Pier Luca |e verfasserin |4 aut
700 1 |a Chiti, Arturo |e verfasserin |4 aut
773 0 8 |i Enthalten in |t European journal of nuclear medicine and molecular imaging |d 2002 |g 48(2021), 12 vom: 08. Nov., Seite 3791-3804 |w (DE-627)NLM116957360 |x 1619-7089 |7 nnns
773 1 8 |g volume:48 |g year:2021 |g number:12 |g day:08 |g month:11 |g pages:3791-3804
856 4 0 |u http://dx.doi.org/10.1007/s00259-021-05339-7 |3 Volltext
912 |a GBV_USEFLAG_A
912 |a GBV_NLM
951 |a AR
952 |d 48 |j 2021 |e 12 |b 08 |c 11 |h 3791-3804