NUST Institutional Repository

Enhanced Drone Control Using Reinforcement Learning


dc.contributor.author Hassan Moin, supervised by Dr Muhammad Jawad Khan
dc.date.accessioned 2022-07-25T07:07:49Z
dc.date.available 2022-07-25T07:07:49Z
dc.date.issued 2022
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/29934
dc.description.abstract Quadcopters have already proven their effectiveness in both civilian and military applications. Their control, however, is a difficult task due to their under-actuated, highly nonlinear, and coupled dynamics. Most quadcopter autopilot systems utilize cascaded control schemes, where the outer loop handles mission-level objectives in 3D Euclidean space, and the inner loop is responsible for stability and control. Such complex systems are generally operated using PID controllers, which have demonstrated exceptional performance in multiple scenarios, such as obstacle avoidance, trajectory tracking, and path planning. However, tuning their gains for nonlinear systems using heuristics or rule-based methods is a tedious, time-consuming, and difficult task. Rapid advances in the field of computational engineering, on the other hand, have paved the way for intelligent flight control systems, which have become an important area of study addressing the limits of PID control, most recently through the application of reinforcement learning (RL). In this dissertation, an optimal gain auto-tuning strategy is implemented for the altitude, attitude, and position controllers of a 6-DoF nonlinear drone system using a deep actor-critic RL algorithm with continuous observation and action spaces. The state equations are derived using Lagrange's (energy-based) method, while the drone's aerodynamic coefficients are estimated numerically using blade element momentum theory. Furthermore, the asymptotic stability of the cascaded closed-loop system is studied using Lyapunov theory. Finally, the proposed strategy is validated through simulation, where the gains learned by the RL agents allow the quadcopter to track a given trajectory accurately. Moreover, these optimal gains satisfy the conditions obtained through Lyapunov stability analysis, indicating that RL is a powerful tool for handling the uncertainties present in complex nonlinear systems. en_US
dc.language.iso en en_US
dc.publisher SMME en_US
dc.subject Enhanced Drone Control, Reinforcement Learning en_US
dc.title Enhanced Drone Control Using Reinforcement Learning en_US
dc.type Thesis en_US
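
The abstract describes two technical ideas that a short, illustrative sketch can make concrete. First, the closed-loop stability argument: as a hedged example only, not the thesis's full 6-DoF derivation, the textbook single-axis case below shows how a Lyapunov analysis typically constrains controller gains. The simplified altitude channel, the PD law, and the symbols z, z_d, U_1, K_p, K_d are assumptions for this illustration.

```latex
% Illustrative sketch only: simplified altitude subsystem and a Lyapunov-based gain
% condition. The thesis treats the coupled 6-DoF cascaded system; this is the
% standard single-axis case with constant reference altitude z_d.
\begin{align}
  m\ddot{z} &= U_1\cos\phi\cos\theta - mg
    && \text{vertical translational dynamics} \\
  U_1 &= \frac{m}{\cos\phi\cos\theta}\bigl(g + K_p e + K_d\dot{e}\bigr),
    \quad e = z_d - z
    && \text{PD altitude law with gravity compensation} \\
  \ddot{e} + K_d\dot{e} + K_p e &= 0
    && \text{resulting error dynamics}
\end{align}
With the Lyapunov candidate $V = \tfrac{1}{2}K_p e^2 + \tfrac{1}{2}\dot{e}^2$,
\begin{equation}
  \dot{V} = K_p e\dot{e} + \dot{e}\bigl(-K_d\dot{e} - K_p e\bigr) = -K_d\dot{e}^2 \le 0,
\end{equation}
so $K_p > 0$ and $K_d > 0$ give asymptotic convergence of $(e,\dot{e})$ to the origin by
LaSalle's invariance principle. Gains proposed by an RL agent must lie inside such an
admissible region, which is the kind of condition the abstract says the learned gains satisfy.
```

Second, the gain auto-tuning loop: a deep actor-critic agent with continuous observation and action spaces can be given the tuning problem as an RL environment whose action is a gain vector and whose reward penalizes tracking error. The Python sketch below is a minimal, hypothetical setup for the altitude loop only; the class and parameter names (AltitudeGainTuningEnv, z_ref, horizon, the reward weights) and the single-step episode structure are assumptions, not the thesis's implementation, and any continuous-action actor-critic algorithm (e.g. DDPG, TD3, or SAC) could be plugged in.

```python
# Hypothetical sketch: RL-based PID gain auto-tuning for a quadcopter altitude loop.
# All names and numerical values are illustrative assumptions, not the thesis's code.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class AltitudeGainTuningEnv(gym.Env):
    """One episode = one candidate (Kp, Kd) pair evaluated on a short altitude-tracking run.

    Action (continuous):      gains [Kp, Kd] for the altitude loop.
    Observation (continuous): [altitude error, vertical-rate error].
    Reward:                   negative integrated squared tracking error
                              plus a small thrust-effort penalty.
    """

    def __init__(self, mass=1.0, g=9.81, z_ref=5.0, dt=0.01, horizon=500):
        super().__init__()
        self.m, self.g, self.z_ref, self.dt, self.horizon = mass, g, z_ref, dt, horizon
        self.action_space = spaces.Box(low=0.1, high=20.0, shape=(2,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.z, self.z_dot = 0.0, 0.0  # start on the ground, at rest
        obs = np.array([self.z_ref - self.z, -self.z_dot], dtype=np.float32)
        return obs, {}

    def step(self, action):
        kp, kd = float(action[0]), float(action[1])
        z, z_dot, cost = self.z, self.z_dot, 0.0
        for _ in range(self.horizon):
            e, e_dot = self.z_ref - z, -z_dot
            thrust = self.m * (self.g + kp * e + kd * e_dot)  # PD law + gravity feedforward
            z_ddot = thrust / self.m - self.g                 # simplified vertical dynamics
            z_dot += z_ddot * self.dt
            z += z_dot * self.dt
            cost += (e ** 2 + 1e-4 * thrust ** 2) * self.dt   # tracking + effort penalty
        self.z, self.z_dot = z, z_dot
        obs = np.array([self.z_ref - z, -z_dot], dtype=np.float32)
        return obs, -cost, True, False, {}                    # single-step episode


if __name__ == "__main__":
    # The gains could be learned with any continuous-action actor-critic agent,
    # e.g. DDPG/TD3/SAC from stable-baselines3 (assumed available here):
    #   from stable_baselines3 import DDPG
    #   model = DDPG("MlpPolicy", AltitudeGainTuningEnv(), verbose=1)
    #   model.learn(total_timesteps=20_000)
    env = AltitudeGainTuningEnv()
    obs, _ = env.reset(seed=0)
    _, episodic_return, *_ = env.step(np.array([6.0, 4.0], dtype=np.float32))
    print("return for Kp=6, Kd=4:", episodic_return)
```

Each episode evaluates one candidate gain pair against a short simulated climb; an actor-critic agent then searches the continuous gain space by maximizing this episodic return, mirroring the continuous observation/action formulation described in the abstract.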



This item appears in the following Collection(s)

  • MS [342]
