Abstract:
The Internet of Things (IoT) envisions a future in which everyday products are embedded with sensors. These sensors can perceive both the living environment and the human body, enabling easier interaction with the physical world. Inertial sensors in particular are found in devices such as smartphones, smartwatches, and fitness bands. There is a need for deep learning models that can classify data from inertial sensors embedded in smart wearables across a range of applications. In this thesis, we present an Inception-ResNet-inspired model that can be tuned to work for various applications. We began by experimenting with Emotion Recognition before fine-tuning the model for Human Activity Recognition. The proposed model is evaluated with five different input features, ranging from 1D to 6D: 'magnitude of 3D accelerations', i.e. $mag^w_a$; 'magnitude of 3D angular velocities', i.e. $mag^w_\omega$; 'both 3D accelerations and 3D angular velocities', i.e. $(a^w_x, a^w_y, a^w_z, \omega^w_x, \omega^w_y, \omega^w_z)$; '3D accelerations', i.e. $(a^w_x, a^w_y, a^w_z)$; and '3D angular velocities', i.e. $(\omega^w_x, \omega^w_y, \omega^w_z)$. Emotions are an integral part of our daily lives; if computers can sense emotions, they will be able to interact more effectively and humanely. For this work we used the SEECS Emotion Recognition Dataset, which contains human gait data for six distinct emotions: 'happy', 'fear', 'sad', 'disgust', 'anger', and 'surprise'. Human activity recognition (HAR) is another popular topic in the IoT paradigm; its applications include fitness tracking, entertainment, childcare, security, driver-behavior monitoring, ambient assisted living, and others. We used two datasets from the Wireless Sensor Data Mining (WISDM) lab in this study: WISDM 2011 and WISDM 2019. For Emotion Recognition, we achieved 95.23% accuracy across all 6 classes with $mag^w_a$ and 95.01% accuracy with $(a^w_x, a^w_y, a^w_z, \omega^w_x, \omega^w_y, \omega^w_z)$. For Human Activity Recognition, we achieved 97.29% accuracy for all 6 activities in the WISDM 2011 dataset with $mag^w_a$ and 98.81% accuracy with $(a^w_x, a^w_y, a^w_z, \omega^w_x, \omega^w_y, \omega^w_z)$. For all 18 activities in WISDM 2019, we achieved 97.5% accuracy with $mag^w_a$ and 98.4% accuracy with $(a^w_x, a^w_y, a^w_z, \omega^w_x, \omega^w_y, \omega^w_z)$ using only the smartwatch data. We outperformed the state-of-the-art.
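To make the five input representations concrete, the following is a minimal sketch (not part of the thesis itself) of how they could be derived from raw inertial samples. The arrays acc and gyro and the function build_features are hypothetical names, assuming NumPy arrays of shape (N, 3) holding world-frame accelerometer and gyroscope readings.

    import numpy as np

    def build_features(acc: np.ndarray, gyro: np.ndarray) -> dict:
        """Derive the five feature representations (1D to 6D) from raw IMU data.

        acc  : (N, 3) world-frame accelerations       (a_x, a_y, a_z)
        gyro : (N, 3) world-frame angular velocities  (w_x, w_y, w_z)
        """
        mag_a = np.linalg.norm(acc, axis=1)   # 1D: magnitude of 3D accelerations
        mag_w = np.linalg.norm(gyro, axis=1)  # 1D: magnitude of 3D angular velocities
        return {
            "mag_a": mag_a,                         # shape (N,)
            "mag_w": mag_w,                         # shape (N,)
            "acc_3d": acc,                          # shape (N, 3): 3D accelerations
            "gyro_3d": gyro,                        # shape (N, 3): 3D angular velocities
            "acc_gyro_6d": np.hstack([acc, gyro])   # shape (N, 6): both combined
        }

Each returned array would then be windowed and fed to the classifier as a 1D, 3D, or 6D input channel, matching the feature variants compared above.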