NUST Institutional Repository

MOLPPO - Molecular Optimization through Deep Reinforcement Learning


dc.contributor.author Navid, Ahmad
dc.date.accessioned 2023-08-09T05:33:11Z
dc.date.available 2023-08-09T05:33:11Z
dc.date.issued 2022-07-01
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/35858
dc.description.abstract With the rise of computational methods in medicine, there have been several important breakthroughs. Virtual screening, molecular docking and molecular dynamics simulations have revolutionized the field of drug design over the decades. More recently, artificial intelligence has also made major contributions to drug design. However, the problem of computational chemistry is a combinatorial one: molecular function is nonlinear and combinatorial in nature, and finding a relationship between chemical space and functional space has been quite challenging. Fortunately, deep reinforcement learning offers a promising way to approach this problem. Earlier work, MOLDQN, used a discrete deterministic approach to building molecules from scratch; this thesis uses a more generalized probabilistic approach. It applies the Actor-Critic formulation of reinforcement learning to explore chemical space in terms of the quantitative estimate of drug-likeness (QED), the Tanimoto index, and a newly designed diversity score which penalizes highly similar molecules. Results show that the system learns to model chemical bonds better than earlier work; however, it cannot model aromatic rings accurately. This may be because the three-dimensional nature of resonance structures is not captured by the Morgan fingerprint which the algorithm uses. en_US
dc.description.sponsorship Dr. Muhammad Tariq Saeed en_US
dc.language.iso en_US en_US
dc.publisher SINES NUST. en_US
dc.subject Reinforcement learning, Actor Critic, Q-function approximation, molecular optimization, Gleevec, Imatinib, tyrosine kinases en_US
dc.title MOLPPO - Molecular Optimization through Deep Reinforcement Learning en_US
dc.type Thesis en_US
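
The abstract describes a reward built from the quantitative estimate of drug-likeness (QED), the Tanimoto index over Morgan fingerprints, and a diversity score that penalizes highly similar molecules. Below is a minimal Python sketch of such a reward using RDKit; the weights, the similarity threshold, and the exact form of the diversity penalty are illustrative assumptions and are not taken from the thesis itself.

# A minimal sketch (not the thesis implementation) of the kind of reward the
# abstract describes: QED plus Tanimoto similarity to a reference molecule,
# with a penalty when a candidate is too similar to previously generated ones.
# The weighting scheme and the diversity term are illustrative assumptions.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, QED


def morgan_fp(mol, radius=2, n_bits=2048):
    """Morgan (ECFP-like) bit fingerprint, as mentioned in the abstract."""
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)


def reward(smiles, reference_smiles, history_smiles,
           w_qed=1.0, w_sim=0.5, w_div=0.5, sim_threshold=0.8):
    """Illustrative scalar reward for one generated SMILES string.

    reference_smiles: target molecule for the Tanimoto term (e.g. Imatinib).
    history_smiles:   previously generated molecules, used for the diversity
                      penalty that discourages highly similar outputs.
    All weights and the threshold are hypothetical.
    """
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                      # invalid molecule -> strong penalty
        return -1.0

    fp = morgan_fp(mol)
    ref_fp = morgan_fp(Chem.MolFromSmiles(reference_smiles))

    qed_score = QED.qed(mol)                             # drug-likeness in [0, 1]
    sim = DataStructs.TanimotoSimilarity(fp, ref_fp)     # similarity to reference

    # Diversity penalty: count how many earlier molecules are nearly identical.
    penalty = 0.0
    for prev in history_smiles:
        prev_mol = Chem.MolFromSmiles(prev)
        if prev_mol is None:
            continue
        if DataStructs.TanimotoSimilarity(fp, morgan_fp(prev_mol)) > sim_threshold:
            penalty += 1.0

    return w_qed * qed_score + w_sim * sim - w_div * penalty


if __name__ == "__main__":
    # Toy usage: score aspirin against caffeine with an empty history.
    print(reward("CC(=O)OC1=CC=CC=C1C(=O)O", "Cn1cnc2c1c(=O)n(C)c(=O)n2C", []))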



This item appears in the following Collection(s)

  • MS [159]
