Discrimination of Genuine and Acted Emotional Expressions Using EEG Signal, Machine Learning and Deep Learning

  • Batta Kumar Babu, Chelikani Sangeetha, Idamakanti Yuva Kiran, Karumanchi Sai Tarun, Parlikar Harshil, Madasu Naga Praneeth, Chintakindi Pruthviraj

Abstract

This paper presents one of the first experiments to use EEG data to distinguish genuine emotional expressions from acted ones. For the first time, we have compiled an EEG dataset (accessible here) that includes recordings of people producing genuine and fake facial expressions. The expressions are divided into three categories: genuine smiles, fake or acted smiles, and neutral faces. The intrinsic characteristics of the EEG signals for these three expressions, a genuine smile, a neutral face, and a fake/acted smile, can be extracted in several ways. We extracted EEG features over three frequency bands using the discrete wavelet transform (DWT), empirical mode decomposition (EMD), and their combination (DWT-EMD). The proposed strategies were then evaluated using a range of classifiers: k-nearest neighbours (k-NN), support vector machines (SVM), and artificial neural networks (ANN). Recordings from our 28 volunteers were labelled into three classes according to the way the emotions were expressed: genuine, neutral, and fake/acted. Combining DWT and EMD produced better results than using DWT or EMD alone. The power spectral features derived from DWT, EMD, and DWT-EMD revealed distinct brain patterns in all frequency bands for each of the three emotional expressions. Binary classification trials using only DWT or only EMD reached an acceptable accuracy of up to 84 percent across all emotion types, classifiers, and bands. Combined with an ANN, DWT-EMD features yielded average classification accuracies of 94.3 percent and 84.1 percent in the alpha and beta bands, respectively, when distinguishing genuine emotional expressions from fake ones. Our findings suggest that future emotion research should consider DWT-EMD feature extraction and highlight the link between the alpha and beta frequency bands.
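To illustrate the kind of DWT band-power feature extraction described above, here is a minimal, self-contained sketch. It is not the authors' pipeline: the wavelet (Haar), the sampling rate (128 Hz), the decomposition depth, and the toy signal are all assumptions for illustration; the paper's actual method cascades DWT with EMD and uses its own recording parameters.

```python
# Hypothetical sketch: per-band mean power from a Haar DWT of one EEG channel.
# Assumes a 128 Hz sampling rate; the paper's wavelet and rate are not stated here.
import math

def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))  # low-pass: pairwise average
        detail.append((a - b) / math.sqrt(2))  # high-pass: pairwise difference
    return approx, detail

def band_powers(signal, levels=4):
    """Decompose `levels` times; each detail band covers roughly half the
    remaining frequency range (at 128 Hz: D1 ~32-64 Hz, D2 ~16-32 Hz,
    D3 ~8-16 Hz (alpha-ish), D4 ~4-8 Hz). Returns mean power per band D1..D4."""
    powers = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        powers.append(sum(d * d for d in detail) / max(len(detail), 1))
    return powers

# Toy usage: a 10 Hz "alpha-like" sine sampled at 128 Hz for 2 seconds.
fs = 128
sig = [math.sin(2 * math.pi * 10 * n / fs) for n in range(fs * 2)]
p = band_powers(sig)  # most energy lands in the 8-16 Hz band (index 2)
```

A feature vector of such band powers per channel could then be fed to a k-NN, SVM, or ANN classifier; in practice a dedicated wavelet library (e.g. PyWavelets) and a proper EMD implementation would replace this hand-rolled transform.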

Published
2021-11-10
How to Cite
Batta Kumar Babu, Chelikani Sangeetha, Idamakanti Yuva Kiran, Karumanchi Sai Tarun, Parlikar Harshil, Madasu Naga Praneeth, & Chintakindi Pruthviraj. (2021). Discrimination of Genuine and Acted Emotional Expressions Using EEG Signal, Machine Learning and Deep Learning. Design Engineering, 11327-11339. Retrieved from http://www.thedesignengineering.com/index.php/DE/article/view/6200
Section
Articles