Room Emotion Recognition using CNN

  • Deevesh Chaudhary, Vijay Prakash Sharma, Siddharth Rana

Abstract

The identification of human emotions is crucial in interpersonal relationships. Automatic emotion recognition is an emerging research topic in computer science, and significant advancements have been made in this field. A neural network model is presented for recognizing facial expressions, sidestepping the laborious process of explicit feature extraction required in traditional facial expression recognition. First, the faces present in the image are detected, and cropped, grayscale face images are provided to the model for classification. The sequential classifier then recognizes the facial expression of each face and classifies it into one of seven classes: angry, fearful, sad, disgusted, neutral, happy, and surprised. It also displays the overall emotion of the room based on the individual results. The FER2013 dataset is used to train and validate the model. Expression recognition has many applications, including human-computer interaction, marketing, entertainment, e-learning, and safety systems. The performance of the proposed architecture supports the usefulness and dependability of the suggested work for real-world applications. The trained model was further deployed to compute the room's overall emotion based on the individual expressions present in the image. The experimental findings reveal that the suggested method achieves a 73.4% average recognition rate, compared to 63.2% for earlier models, and with fewer iterations.
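The pipeline summarized above (face detection, cropping and grayscale conversion, CNN classification into seven expression classes, and aggregation into a room-level emotion) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes OpenCV's Haar cascade face detector, a Keras model loaded from a hypothetical weights file `fer_model.h5` standing in for the paper's trained network, an assumed label ordering, and majority voting as one possible way to derive the room emotion from individual results.

```python
# Minimal sketch of the described pipeline (assumptions: OpenCV Haar cascade
# face detector, a pre-trained Keras CNN saved as "fer_model.h5", an assumed
# label order, and majority vote for the room-level emotion).
import cv2
import numpy as np
from collections import Counter
from tensorflow.keras.models import load_model

# Assumed class-label order; the actual order depends on how the model was trained.
EMOTIONS = ["angry", "disgusted", "fearful", "happy", "neutral", "sad", "surprised"]

# Haar cascade shipped with OpenCV for frontal face detection.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Hypothetical trained CNN expecting 48x48 grayscale inputs (FER2013 format).
model = load_model("fer_model.h5")

def room_emotion(image_path):
    """Detect faces, classify each one, and return per-face and room-level emotions."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    predictions = []
    for (x, y, w, h) in faces:
        # Crop the face region, resize to the 48x48 FER2013 input size,
        # and normalize pixel values to [0, 1].
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        face = face.reshape(1, 48, 48, 1)
        probs = model.predict(face, verbose=0)[0]
        predictions.append(EMOTIONS[int(np.argmax(probs))])

    # Room emotion: the most frequent individual expression (majority vote, assumed).
    room = Counter(predictions).most_common(1)[0][0] if predictions else "no faces"
    return predictions, room

if __name__ == "__main__":
    per_face, overall = room_emotion("room.jpg")
    print("Individual expressions:", per_face)
    print("Overall room emotion:", overall)
```

A usage note: with a group photo as input, the sketch prints one predicted expression per detected face and a single aggregate label for the scene, mirroring the per-person and room-level outputs described in the abstract.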

Published
2021-07-21
How to Cite
Deevesh Chaudhary, Vijay Prakash Sharma, Siddharth Rana. (2021). Room Emotion Recognition using CNN. Design Engineering, 2768 - 2777. Retrieved from http://www.thedesignengineering.com/index.php/DE/article/view/2845
Section
Articles