Abstract:
EEG-based emotion classification is a rapidly developing endeavor within affective computing, with potential applications in mental health diagnostics, human-computer interaction, and adaptive systems. Nevertheless, because emotional states are complex and non-linear, identifying specific emotions remains a difficult problem that demands sophisticated methods for extracting informative features from EEG patterns.
Though Convolutional Neural Networks (CNNs) have proven effective for EEG-based classification, their success depends heavily on the quality and diversity of the input features. In this thesis, we propose a new multi-feature CNN model that combines differential entropy (DE) and power spectral density (PSD) with Mel-frequency cepstral coefficients (MFCCs), an audio-signal feature conventionally unused for EEG classification. By fusing these feature sets, the model strengthens the recognition of emotion from EEG signals.
The model was evaluated on both the SEED-IV and SEED datasets, where it outperformed the baselines. It attained an accuracy of 77.51% on SEED-IV, surpassing the CGCNN model, which achieves 75.48% using DE features alone, thereby setting a new state of the art for emotion classification on this dataset. In addition, it achieved even stronger validation results on SEED, a less complex three-class dataset, reaching an accuracy of 93.87% against CGCNN's 93.36% and demonstrating the robustness of the model.
The incorporation of MFCCs alongside DE and PSD substantially improves the performance and stability of CNN-based emotion recognition, making this work a promising foundation for real-time adaptive emotion classification systems.