Abstract:
This project aims to develop an advanced human-computer interaction system designed to
seamlessly control holographic 3D visuals using real-time dynamic hand gestures. The primary
goal is to enhance the learning experience in the educational sector by effectively displaying
and controlling complex 3D concepts such as models of human anatomy and various
engineering problems. This innovative approach leads to improved academic performances and
enhances the engagement and participation of students. The core of the system's functionality
lies in detecting hand gestures and translating them into real-time computer cursor control,
thereby enabling interactive control of holographic displays.
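As a minimal, purely illustrative sketch of this gesture-to-cursor translation, the snippet below maps hypothetical gesture labels to mouse commands; the gesture names, the pyautogui backend, and the smoothing factor are assumptions rather than the project's actual implementation:

```python
import pyautogui  # assumed cross-platform mouse-control backend

SMOOTHING = 0.3  # assumed low-pass factor to reduce cursor jitter
_prev_x, _prev_y = pyautogui.position()

def apply_gesture(gesture: str, hand_x: float, hand_y: float) -> None:
    """Translate one detected gesture into a mouse command.

    hand_x, hand_y are normalised hand coordinates in [0, 1].
    """
    global _prev_x, _prev_y
    screen_w, screen_h = pyautogui.size()
    # Exponential smoothing of the target cursor position (assumed design choice).
    target_x = _prev_x + SMOOTHING * (hand_x * screen_w - _prev_x)
    target_y = _prev_y + SMOOTHING * (hand_y * screen_h - _prev_y)

    if gesture == "MOVE":
        pyautogui.moveTo(target_x, target_y)
    elif gesture == "LEFT_CLICK":
        pyautogui.click(button="left")
    elif gesture == "RIGHT_CLICK":
        pyautogui.click(button="right")
    elif gesture == "SCROLL_UP":
        pyautogui.scroll(120)   # positive value scrolls up
    elif gesture == "SCROLL_DOWN":
        pyautogui.scroll(-120)  # negative value scrolls down

    _prev_x, _prev_y = target_x, target_y
```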
To achieve accurate hand gesture detection, two distinct approaches are employed: a vision-based approach and a sensor-based approach. In the vision-based approach, a camera
continuously captures hand movements. This visual data is then used to detect and locate
different gestures of the hand by employing a learning-based gesture-detecting algorithm.
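A minimal sketch of this vision-based capture stage is given below; it assumes MediaPipe Hands as the learning-based landmark detector and a standard webcam, since the abstract does not name the specific model used:

```python
import cv2
import mediapipe as mp

# MediaPipe Hands is assumed here as the learning-based hand-landmark detector.
hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.5,
                                 min_tracking_confidence=0.5)

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB images; OpenCV delivers BGR frames.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        landmarks = results.multi_hand_landmarks[0].landmark
        # 21 normalised (x, y, z) landmarks; a downstream classifier would
        # map this 63-value feature vector to a gesture label.
        feature_vector = [c for lm in landmarks for c in (lm.x, lm.y, lm.z)]
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```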
Conversely, the sensor-based approach involves the design of hardware (a sensor-based motion-tracking
glove) to detect the positions of the fingers. An ANN model, capable of detecting the same
hand gestures as the vision-based approach from the sensor data, is developed. The primary purpose
of utilising these two different approaches is to enhance gesture recognition accuracy.
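For illustration, the sensor-side ANN could resemble the sketch below, which assumes a hypothetical eight-value glove reading (five flex sensors plus three IMU angles) and six gesture classes; the layer sizes are illustrative only:

```python
import numpy as np
from tensorflow import keras

NUM_SENSOR_FEATURES = 8   # e.g. 5 flex sensors + 3 IMU angles (assumed)
NUM_GESTURES = 6          # assumed number of gesture classes

# Small fully connected ANN mapping one glove reading to gesture probabilities.
model = keras.Sequential([
    keras.layers.Input(shape=(NUM_SENSOR_FEATURES,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data standing in for recorded, labelled glove samples.
X = np.random.rand(256, NUM_SENSOR_FEATURES).astype("float32")
y = np.random.randint(0, NUM_GESTURES, size=256)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# One sensor frame -> per-gesture probabilities, later used in the fusion step.
probs = model.predict(X[:1], verbose=0)[0]
```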
Therefore, the outputs from both methods are fused using stacked ensemble learning,
with an additional ANN serving as a meta-model to further improve accuracy. These detected
gestures are then translated into mouse commands. Furthermore, to facilitate user interaction,
a desktop application is designed to initialise and manage the gesture-controlled mouse system.
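Returning to the fusion step described above, the following sketch illustrates the stacked-ensemble idea: the per-class probability vectors produced by the vision-based and sensor-based models are concatenated and passed to a small ANN meta-model. The feature layout, class count, and layer sizes are assumptions for illustration:

```python
import numpy as np
from tensorflow import keras

NUM_GESTURES = 6  # assumed number of gesture classes (as above)

# Meta-model: consumes the concatenated probability vectors of the two base
# models (vision-based and sensor-based) and outputs a fused gesture label.
meta_model = keras.Sequential([
    keras.layers.Input(shape=(2 * NUM_GESTURES,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
meta_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def fuse(vision_probs: np.ndarray, sensor_probs: np.ndarray) -> int:
    """Return the fused gesture label for one frame."""
    stacked = np.concatenate([vision_probs, sensor_probs])[None, :]
    return int(np.argmax(meta_model.predict(stacked, verbose=0), axis=1)[0])

# In standard stacking, the meta-model is trained on held-out predictions of
# the base models rather than on raw inputs, e.g.:
# meta_model.fit(np.hstack([P_vision, P_sensor]), y_true, epochs=20)
```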
Additionally, another Windows application is developed to upload and control various 3D
models displayed on a holographic interface.