Abstract:
Motor imagery electroencephalography (MI-EEG) data are employed in brain-computer
interface (BCI) systems to identify the intentions of participants. Several factors, such as
poor signal-to-noise ratios and a scarcity of high-quality samples, complicate the
classification of MI-EEG signals, yet reliable analysis of these signals is necessary for BCI
systems to operate well.
Deep learning methods have recently achieved remarkable success in pattern recognition
and many other domains; however, they have made only limited progress in the analysis of
electroencephalogram (EEG) data, and few deep learning algorithms have been successfully
deployed in BCI systems. This matters because BCIs can be crucial in enabling individuals
with movement impairments to communicate with the external environment. The present
study proposes a novel approach that addresses this problem by integrating the Continuous
Wavelet Transform (CWT) with a deep learning-based transfer learning approach. The CWT
converts one-dimensional EEG signals into two-dimensional images encoding time,
frequency, and amplitude, which allows existing pretrained deep networks to be exploited
via transfer learning.
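As an illustration of this first step, the sketch below converts a single-channel EEG trial into a 2-D scalogram; it is a minimal example that assumes the PyWavelets library, a 250 Hz sampling rate, and a Morlet mother wavelet, none of which are specified here.

```python
import numpy as np
import pywt

# Minimal sketch (not the exact pipeline of this work): map a 1-D MI-EEG trial
# to a 2-D time-frequency image with the Continuous Wavelet Transform.
fs = 250                                  # assumed sampling rate in Hz
t = np.arange(0, 4, 1 / fs)               # a 4-second trial
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # surrogate signal

scales = np.arange(1, 64)                 # wavelet scales spanning the mu/beta range
coeffs, freqs = pywt.cwt(eeg, scales, 'morl', sampling_period=1 / fs)

scalogram = np.abs(coeffs)                # 2-D image: scales (frequency) x time
print(scalogram.shape)                    # (63, 1000)
```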
The present work assesses the efficacy of the proposed methodology on a publicly accessible
dataset from BCI Competition IV-2b, on which it attains a promising validation accuracy of
81.72%. A comparative analysis of the proposed algorithm with existing algorithms on the
same dataset demonstrates its superior classification performance. The approach can thus
enhance the classification accuracy of MI-based BCI systems designed for individuals with
movement impairments.
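For the transfer learning stage, a hypothetical sketch is given below; it assumes a PyTorch/torchvision ResNet-18 backbone and the two classes (left- versus right-hand imagery) of the IV-2b dataset, since the specific pretrained network used in this work is not named here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch, not the authors' exact network: fine-tune a pretrained
# CNN on CWT scalogram images for two-class motor imagery classification.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():          # freeze the pretrained feature extractor
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 2)  # new head for the 2 MI classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of scalogram "images"
# resized to 224x224 and replicated to 3 channels to match the backbone input.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```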