Automated accurate emotion recognition system using rhythm-specific deep convolutional neural network technique with multi-channel EEG signals

Copyright © 2021 Elsevier Ltd. All rights reserved.

Emotion is a psycho-physiological process associated with a person's personality, behavior, motivation, and character. The objective of affective computing is to recognize different types of emotions for human-computer interaction (HCI) applications. The spatiotemporal electrical activity of the brain is measured using multi-channel electroencephalogram (EEG) signals. Automated emotion recognition using multi-channel EEG signals is an exciting research topic in cognitive neuroscience and affective computing. This paper proposes a rhythm-specific, multi-channel convolutional neural network (CNN) based approach for automated emotion recognition using multi-channel EEG signals. The delta (δ), theta (θ), alpha (α), beta (β), and gamma (γ) rhythms of the EEG signal are extracted for each channel using band-pass filters. The EEG rhythms from the selected channels, coupled with a deep CNN, are used for the emotion classification tasks of low-valence (LV) vs. high-valence (HV), low-arousal (LA) vs. high-arousal (HA), and low-dominance (LD) vs. high-dominance (HD). The deep CNN architecture used in this work comprises eight convolution, three average-pooling, four batch-normalization, three spatial-dropout, two dropout, one global-average-pooling, and three dense layers. We validated the developed model using three publicly available databases: DEAP, DREAMER, and DASPS. The results reveal that the proposed multivariate deep CNN approach coupled with the β-rhythm obtained accuracy values of 98.91%, 98.45%, and 98.69% for the LV vs. HV, LA vs. HA, and LD vs. HD classification tasks, respectively, on the DEAP database with a 10-fold cross-validation (CV) scheme. Similarly, accuracy values of 98.56%, 98.82%, and 98.99% were obtained for the LV vs. HV, LA vs. HA, and LD vs. HD classification tasks, respectively, using the deep CNN with the θ-rhythm on the DREAMER database. The proposed multi-channel, rhythm-specific deep CNN classification model obtained an average accuracy of 57.14% using the α-rhythm and a trial-specific CV scheme on the DASPS database. Moreover, for the 8-quadrant emotion classification strategy, the deep CNN classifier obtained an overall accuracy of 24.37% using the γ-rhythm of the multi-channel EEG signals. The developed deep CNN model can be used for real-time automated emotion recognition applications.
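To make the rhythm-extraction step concrete, the sketch below band-pass filters each EEG channel into the δ, θ, α, β, and γ bands. The band edges, filter order, and the 128 Hz sampling rate (that of the preprocessed DEAP recordings) are conventional assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of rhythm extraction via band-pass filtering (assumed band
# edges and sampling rate; not the authors' exact filter design).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # Hz; sampling rate of the preprocessed DEAP data (assumption)

RHYTHM_BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta":  (13.0, 30.0),
    "gamma": (30.0, 45.0),
}

def extract_rhythms(eeg, fs=FS, order=4):
    """Return a dict of rhythm-filtered signals for multi-channel EEG.

    eeg: array of shape (n_channels, n_samples).
    """
    rhythms = {}
    for name, (lo, hi) in RHYTHM_BANDS.items():
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        # Zero-phase filtering applied channel-wise along the time axis.
        rhythms[name] = filtfilt(b, a, eeg, axis=-1)
    return rhythms
```

A hypothetical Keras layer stack matching the layer counts stated in the abstract (eight convolution, three average-pooling, four batch-normalization, three spatial-dropout, two dropout, one global-average-pooling, and three dense layers) might look as follows; the filter sizes, layer ordering, and hyper-parameters are illustrative guesses rather than the authors' exact configuration.

```python
# Hypothetical deep CNN with the layer counts reported in the abstract;
# all hyper-parameters below are assumptions, not values from the paper.
from tensorflow.keras import layers, models

def build_rhythm_cnn(n_channels=32, n_samples=8064, n_classes=2):
    """Binary classifier (e.g. LV vs. HV) on one rhythm of multi-channel EEG."""
    inputs = layers.Input(shape=(n_samples, n_channels))
    x = inputs
    # Four blocks of two convolutions each (8 conv total), with batch-norm
    # after every block (4 total); the first three blocks also apply
    # average pooling and spatial dropout (3 each).
    for i, filters in enumerate([32, 64, 128, 256]):
        x = layers.Conv1D(filters, 5, padding="same", activation="relu")(x)
        x = layers.Conv1D(filters, 5, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        if i < 3:
            x = layers.AveragePooling1D(pool_size=4)(x)
            x = layers.SpatialDropout1D(0.2)(x)
    x = layers.GlobalAveragePooling1D()(x)       # 1 global average pooling
    x = layers.Dense(128, activation="relu")(x)  # dense 1
    x = layers.Dropout(0.3)(x)                   # dropout 1
    x = layers.Dense(64, activation="relu")(x)   # dense 2
    x = layers.Dropout(0.3)(x)                   # dropout 2
    outputs = layers.Dense(n_classes, activation="softmax")(x)  # dense 3
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```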

Media type:

E-article

Year of publication:

2021

Published:

2021

Contained in:

Computers in Biology and Medicine - volume 134 (2021), 20 July, article 104428

Language:

English

Contributors:

Maheshwari, Daksh [Author]
Ghosh, S K [Author]
Tripathy, R K [Author]
Sharma, Manish [Author]
Acharya, U Rajendra [Author]

Links:

Full text

Subjects:

Channel selection
Classification
Deep CNN
Emotion recognition
Journal Article
Multi-channel EEG
Research Support, Non-U.S. Gov't
Rhythms

Notes:

Date Completed 26.07.2021

Date Revised 26.07.2021

published: Print-Electronic

Citation Status MEDLINE

DOI:

10.1016/j.compbiomed.2021.104428

Funding:

Funding institution / project title:

PPN (catalogue ID):

NLM325361312