Machine Learning-Based Scoring System to Predict the Risk and Severity of Ataxic Speech Using Different Speech Tasks
The assessment of speech in Cerebellar Ataxia (CA) is time-consuming and requires clinical interpretation. In this study, we introduce a fully automated objective algorithm that uses significant acoustic features from time, spectral, cepstral, and non-linear dynamics present in microphone data obtained from different repeated Consonant-Vowel (C-V) syllable paradigms. The algorithm builds machine-learning models to support a 3-tier diagnostic categorisation for distinguishing Ataxic Speech from healthy speech, rating the severity of Ataxic Speech, and nomogram-based supporting scoring charts for Ataxic Speech diagnosis and severity prediction. Feature selection was accomplished using a combination of mass univariate analysis and elastic net regularization for the binary outcome, while Spearman's rank-order correlation criterion was employed for the ordinal outcome. The algorithm was developed and evaluated using recordings from 126 participants: 65 individuals with CA and 61 controls (i.e., neurotypical individuals without ataxia). For Ataxic Speech diagnosis, the reduced feature set yielded an area under the curve (AUC) of 0.97 (95% CI 0.90-1), a sensitivity of 97.43%, a specificity of 85.29%, and a balanced accuracy of 91.2% on the test dataset. The mean AUC for severity estimation was 0.74 for the test set. The high C-indexes of the prediction nomograms for identifying the presence of Ataxic Speech (0.96) and estimating its severity (0.81) in the test set indicate the efficacy of this algorithm. Decision curve analysis demonstrated the value of incorporating acoustic features from two repeated C-V syllable paradigms. The strong classification ability of the specified speech features supports the framework's usefulness for identifying and monitoring Ataxic Speech.
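The nomogram-based scoring charts described in the abstract map each predictor's contribution to a logistic model onto a common 0-100 point scale, so a clinician can sum points and read off a risk estimate. The following is a minimal sketch of that idea; the feature names, value ranges, and coefficients are entirely hypothetical and are not taken from the paper.

```python
import math

# Hypothetical logistic-regression coefficients for three acoustic
# features (illustrative values only, not the paper's model).
coefs = {"syllable_rate": -1.2, "cpp_db": -0.8, "jitter_pct": 2.5}
feature_ranges = {"syllable_rate": (1.0, 7.0),
                  "cpp_db": (5.0, 25.0),
                  "jitter_pct": (0.2, 3.0)}
intercept = 0.5

# A nomogram scales each feature's maximum contribution
# |beta| * (max - min) so the largest one spans 0-100 points.
contrib = {f: abs(b) * (feature_ranges[f][1] - feature_ranges[f][0])
           for f, b in coefs.items()}
max_contrib = max(contrib.values())

def points(feature, value):
    """Nomogram points for one observed feature value."""
    lo, hi = feature_ranges[feature]
    b = coefs[feature]
    # Anchor the zero-point end at whichever extreme minimises beta*x.
    base = lo if b > 0 else hi
    return 100.0 * abs(b * (value - base)) / max_contrib

def risk(values):
    """Predicted probability from the underlying logistic model."""
    lp = intercept + sum(coefs[f] * values[f] for f in coefs)
    return 1.0 / (1.0 + math.exp(-lp))

sample = {"syllable_rate": 2.5, "cpp_db": 12.0, "jitter_pct": 1.8}
total = sum(points(f, v) for f, v in sample.items())
print(f"total points: {total:.1f}, predicted risk: {risk(sample):.4f}")
```

In a published nomogram the total-points axis is aligned with the probability axis once, offline, so the bedside user never evaluates the logistic function; the sketch above just makes that correspondence explicit.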
Media type: E-Article
Year of publication: 2023
Published: 2023
Contained in: Link to the complete record - volume:31
Contained in: IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society - 31(2023), from the 11th, pages 4839-4850
Language: English
Contributors: Kashyap, Bipasha [Author]; Pathirana, Pubudu N [Author]; Horne, Malcolm [Author]; Power, Laura [Author]; Szmulewicz, David J [Author]
Notes: Date Completed 16.12.2023; Date Revised 16.12.2023; published: Print-Electronic; Citation Status MEDLINE
DOI: 10.1109/TNSRE.2023.3334718
PPN (catalogue ID): NLM364746726
LEADER 01000caa a22002652 4500
001 NLM364746726
003 DE-627
005 20231227133740.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7   |a 10.1109/TNSRE.2023.3334718 |2 doi
028 5 2 |a pubmed24n1231.xml
035     |a (DE-627)NLM364746726
035     |a (NLM)37983150
040     |a DE-627 |b ger |c DE-627 |e rakwb
041     |a eng
100 1   |a Kashyap, Bipasha |e verfasserin |4 aut
245 1 0 |a Machine Learning-Based Scoring System to Predict the Risk and Severity of Ataxic Speech Using Different Speech Tasks
264   1 |c 2023
336     |a Text |b txt |2 rdacontent
337     |a Computermedien |b c |2 rdamedia
338     |a Online-Ressource |b cr |2 rdacarrier
500     |a Date Completed 16.12.2023
500     |a Date Revised 16.12.2023
500     |a published: Print-Electronic
500     |a Citation Status MEDLINE
520     |a The assessment of speech in Cerebellar Ataxia (CA) is time-consuming and requires clinical interpretation. In this study, we introduce a fully automated objective algorithm that uses significant acoustic features from time, spectral, cepstral, and non-linear dynamics present in microphone data obtained from different repeated Consonant-Vowel (C-V) syllable paradigms. The algorithm builds machine-learning models to support a 3-tier diagnostic categorisation for distinguishing Ataxic Speech from healthy speech, rating the severity of Ataxic Speech, and nomogram-based supporting scoring charts for Ataxic Speech diagnosis and severity prediction. Feature selection was accomplished using a combination of mass univariate analysis and elastic net regularization for the binary outcome, while Spearman's rank-order correlation criterion was employed for the ordinal outcome. The algorithm was developed and evaluated using recordings from 126 participants: 65 individuals with CA and 61 controls (i.e., neurotypical individuals without ataxia). For Ataxic Speech diagnosis, the reduced feature set yielded an area under the curve (AUC) of 0.97 (95% CI 0.90-1), a sensitivity of 97.43%, a specificity of 85.29%, and a balanced accuracy of 91.2% on the test dataset. The mean AUC for severity estimation was 0.74 for the test set. The high C-indexes of the prediction nomograms for identifying the presence of Ataxic Speech (0.96) and estimating its severity (0.81) in the test set indicate the efficacy of this algorithm. Decision curve analysis demonstrated the value of incorporating acoustic features from two repeated C-V syllable paradigms. The strong classification ability of the specified speech features supports the framework's usefulness for identifying and monitoring Ataxic Speech
650   4 |a Journal Article
700 1   |a Pathirana, Pubudu N |e verfasserin |4 aut
700 1   |a Horne, Malcolm |e verfasserin |4 aut
700 1   |a Power, Laura |e verfasserin |4 aut
700 1   |a Szmulewicz, David J |e verfasserin |4 aut
773 0 8 |i Enthalten in |t IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society |d 2001 |g 31(2023) vom: 11., Seite 4839-4850 |w (DE-627)NLM113763190 |x 1558-0210 |7 nnns
773 1 8 |g volume:31 |g year:2023 |g day:11 |g pages:4839-4850
856 4 0 |u http://dx.doi.org/10.1109/TNSRE.2023.3334718 |3 Volltext
912     |a GBV_USEFLAG_A
912     |a GBV_NLM
951     |a AR
952     |d 31 |j 2023 |b 11 |h 4839-4850