dc.description.abstract |
Human activity recognition (HAR) plays a vital role in various fields, including healthcare, sports, and human-computer interaction. In recent years, deep learning models have achieved remarkable success in HAR tasks; however, most existing studies focus on traditional architectures such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs), overlooking the potential of transformer models in this domain. This research explores the application of transformer models, specifically the Vision Transformer, to HAR using the KU-HAR dataset, which consists of multi-modal sensor data collected from wearable devices and provides rich and diverse information about human activities. The proposed approach leverages the self-attention mechanism of transformers to capture temporal dependencies and spatial interactions in the activity data, and the model is trained with a variable learning-rate scheduler, which yields improved results. Extensive experiments are conducted to evaluate the performance of the transformer model against traditional deep learning architectures such as LSTM-CNN, RNN, and DBN. The results show that the transformer model achieves a competitive accuracy of 94% on the KU-HAR dataset, exhibits robustness to variations in activity patterns, and outperforms existing methods on this dataset. These findings highlight the potential of transformer models in HAR and pave the way for future advances in activity recognition research using deep learning. |
en_US |