Abstract:
Recognition of human emotional states from physiological signals has become an increasingly popular area of research. Physiological signals such as the Electrocardiogram (ECG), Electroencephalogram (EEG) and Galvanic Skin Response (GSR) are attractive for brain-computer interfaces because they cannot easily be forged, which enhances the reliability of systems intended for people with motor disabilities. Deep learning models for emotion recognition from EEG signals have proved very effective despite the inherent challenges of EEG, such as its low signal-to-noise ratio (SNR), high randomness and non-stationary nature.
Most emotion classification approaches rely on complex signal processing techniques for feature extraction and therefore underutilize the ability of neural networks to learn meaningful features directly from raw data. Most approaches also do not take the spatial correlation of EEG signals into consideration. Motivated by the remarkable performance of deep learning approaches, this research proposes a deep learning method employing a 2D CNN in which the CNN input is constructed from both the spatial and the temporal information of the EEG signals, and this representation proves very effective for emotion classification. Using a sliding-window approach, emotions are classified according to the valence-arousal model into 2 classes (high, low), 3 classes (high, neutral, low) and 4 classes (LVLA, LVHA, HVLA and HVHA). The proposed approach achieves accuracies of 89.1% and 90.4% for the two-class valence and arousal dimensions, respectively, and 88.2% for the four-class task on the publicly available AMIGOS dataset. The four-class classification task is also tested using the pretrained AlexNet network, and the accuracy in this case is
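As a rough illustration of the sliding-window, spatio-temporal input formation summarized above, the sketch below shows one plausible way to segment a multichannel EEG trial into 2D (channels x time) frames for a 2D CNN. The channel count, sampling rate, window length and overlap used here are assumptions chosen for the example and are not taken from the paper.

# Illustrative sketch (not the authors' released code): building 2D
# spatio-temporal CNN inputs from multichannel EEG with a sliding window.
# Channel count (14), sampling rate (128 Hz), window length and overlap
# are assumptions for illustration only.
import numpy as np

FS = 128            # assumed sampling rate (Hz)
WINDOW_SEC = 2      # assumed sliding-window length in seconds
OVERLAP = 0.5       # assumed 50% overlap between consecutive windows

def sliding_windows(eeg, fs=FS, win_sec=WINDOW_SEC, overlap=OVERLAP):
    """Split an (n_channels, n_samples) EEG trial into overlapping windows.

    Each window is a 2D array (channels x time), so one axis carries the
    spatial (electrode) dimension and the other the temporal dimension,
    which is the kind of frame a 2D CNN can consume directly.
    """
    win_len = int(win_sec * fs)
    step = int(win_len * (1 - overlap))
    n_channels, n_samples = eeg.shape
    windows = [eeg[:, start:start + win_len]
               for start in range(0, n_samples - win_len + 1, step)]
    # Shape (n_windows, 1, n_channels, win_len): the singleton axis is the
    # image-channel dimension expected by typical 2D CNN layers.
    return np.stack(windows)[:, np.newaxis, :, :]

# Usage: one 20-second, 14-channel trial (random data, purely illustrative)
trial = np.random.randn(14, 20 * FS)
frames = sliding_windows(trial)
print(frames.shape)   # (19, 1, 14, 256)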