Abstract:
About 1.3 billion people worldwide, roughly 16% of the human population, live with some form of motor disability. These conditions present daily challenges to those who live with them. Motor Imagery Electroencephalography (MI-EEG) signals are widely used in Brain-Computer Interface (BCI) systems and have greatly benefited assistive robots and other devices that improve the lives of people with motor impairments. Despite extensive research into EEG, the low decoding accuracy of brain signals remains a hurdle for the BCI field and limits its practical usefulness. In this study, we propose a deep learning-based 2-D convolutional neural network (CNN) for classifying motor imagery (MI). The network not only extracts EEG-specific features efficiently from raw EEG data, but is also compact, with a small computational footprint suited to lightweight deployment. We used BCI Competition IV dataset 2a, a publicly accessible MI-EEG dataset, as our benchmark. The results show that the proposed model outperforms the baseline algorithms in its ability to generalize to new settings, and that it maintains strong performance even when only a small amount of training data is available. Furthermore, a data augmentation strategy is proposed to improve training and testing. Before any augmentation is performed, the dataset is thoroughly examined using a variety of visualization methods. The proposed model is then trained and evaluated on both the un-augmented and augmented datasets, and the results are compared. The model achieves an average classification accuracy of 82.13% on the four-class classification task, which compares favorably with other state-of-the-art methods.
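The abstract does not specify the network architecture. Purely as an illustration of what a compact 2-D CNN for four-class MI classification on BCI Competition IV dataset 2a might look like, the following PyTorch sketch pairs a temporal convolution with a spatial convolution over the electrode axis. All layer sizes, kernel shapes, and hyperparameters here are assumptions for illustration, not the paper's reported model; only the input dimensions (22 EEG channels, 250 Hz sampling, four classes) follow from the dataset's published specification.

```python
import torch
import torch.nn as nn

class CompactMICNN(nn.Module):
    # Illustrative compact 2-D CNN; all layer sizes are assumptions,
    # not the architecture reported in the paper.
    # BCI Competition IV 2a trials: 22 EEG channels sampled at 250 Hz,
    # so a 4 s trial yields 1000 time samples.
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution along the time axis of each channel
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(16),
            # spatial convolution across all electrodes at once
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),  # downsample in time
            nn.Dropout(0.5),
        )
        # infer the flattened feature size with a dummy forward pass
        with torch.no_grad():
            n_flat = self.features(
                torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = CompactMICNN()
logits = model(torch.randn(8, 1, 22, 1000))  # batch of 8 raw trials
print(logits.shape)  # torch.Size([8, 4]) -> one logit per MI class
```

Splitting feature extraction into a temporal filter bank followed by a spatial filter over the full electrode montage is a common design for keeping MI-EEG CNNs small; the dummy forward pass avoids hand-computing the flattened size when kernel or pooling widths change.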