NUST Institutional Repository

HINT Initial and Optimized: Employing Hyperparameter Optimization and Transfer Learning

Show simple item record

dc.contributor.author Asghar, Sobia
dc.date.accessioned 2024-12-06T12:00:17Z
dc.date.available 2024-12-06T12:00:17Z
dc.date.issued 2024
dc.identifier.other 400570
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/48226
dc.description Supervisor: Dr Khuram Shahzad en_US
dc.description.abstract Employing transformer-based architectures in image inpainting has significantly advanced the quality of generated results. By leveraging self-attention mechanisms, transformers can capture long-range dependencies within an image, making them particularly effective at restoring missing regions coherently. Recent developments, such as the HINT framework, have introduced enhanced attention mechanisms that further improve inpainting performance by incorporating mask-aware encoding. However, transformer models often struggle to process high-resolution images due to their significant hardware requirements, which limits their usability in broader applications and real-time scenarios. Reducing image resolution leads to information loss, which harms inpainting by causing blurred artifacts and vague structures in the reconstructed output. We propose two models, HINT Initial and HINT Optimized. HINT Initial employs transfer learning. HINT Optimized leverages advanced hyperparameter tuning (using Keras Tuner to optimize model parameters for performance) and architectural refinements (an MPD module and SCAL, which enhance image inpainting by improving attention and downsampling). Our methods are evaluated on two benchmark datasets, Places2 and CelebA-HQ. Simulation experiments validated the proposed methods, which showed significant improvements over state-of-the-art image inpainting models. Notably, HINT Optimized effectively captured the complex relationships between pixels on both datasets.
On the CelebA-HQ dataset, HINT Initial improved L1↓ (loss), FID↓ (Fréchet Inception Distance), and LPIPS↓ (Learned Perceptual Image Patch Similarity), whereas HINT Optimized improved PSNR↑ (Peak Signal-to-Noise Ratio) and SSIM↑ (Structural Similarity Index Measure). On the Places2 dataset, HINT Initial improved L1↓ and LPIPS↓, while HINT Optimized improved PSNR↑, FID↓, and SSIM↑. The models demonstrated significant improvement in both accuracy and loss metrics, reflecting enhanced performance and a more efficient learning process. en_US
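The abstract describes tuning model hyperparameters (via Keras Tuner) to find the configuration that best optimizes performance. As a minimal, dependency-free sketch of the underlying idea only, the grid search below exhaustively scores every configuration of a toy search space; the objective function, parameter names, and value ranges are illustrative assumptions, not taken from the thesis:

```python
import itertools

def grid_search(objective, space):
    """Evaluate every configuration in the search space; keep the lowest score."""
    names = list(space)
    best_cfg, best_score = None, float("inf")
    for values in itertools.product(*(space[n] for n in names)):
        cfg = dict(zip(names, values))
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical objective: pretend validation loss is lowest at lr=1e-3, batch=16.
def toy_val_loss(cfg):
    return abs(cfg["lr"] - 1e-3) * 100 + abs(cfg["batch"] - 16) / 16

space = {"lr": [1e-2, 1e-3, 1e-4], "batch": [8, 16, 32]}
best, score = grid_search(toy_val_loss, space)
print(best, score)  # the best configuration found and its toy loss
```

In practice a tuner such as Keras Tuner wraps this loop around model construction and training, and typically uses smarter search strategies (random search, Bayesian optimization, Hyperband) rather than exhaustive enumeration.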
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Science (SEECS-NUST) en_US
dc.title HINT Initial and Optimized: Employing Hyperparameter Optimization and Transfer Learning en_US
dc.type Thesis en_US



This item appears in the following Collection(s)

  • MS [375]

