dc.description.abstract |
Owing to the efficiency of deep learning algorithms and the availability of vast amounts of data, the tasks of analyzing and understanding digital images for various computer vision applications have received considerable attention in recent years.
Image-to-image translation (I2I) is an important and challenging problem in computer vision. It seeks to learn the mapping between two domains, with applications ranging from data augmentation and style transfer to super-resolution. Because of the effectiveness of deep learning approaches in visual generative tasks, researchers have applied deep generative models, particularly generative adversarial networks (GANs), to I2I translation since 2016 and have made significant progress.
Steganography is a security technique that allows hidden information to be embedded within a cover medium. This research aims to utilize the generative adversarial approach for I2I translation and to evaluate the performance of various I2I architectures on a single common benchmark dataset. We propose a new framework for steganography by image-to-image translation using generative adversarial networks. We also use different image quality metrics to compare image quality with existing methods. |
en_US |