Generative models for multi-modality image inpainting and resolution enhancement
Abed jooy Divshali, Aref
Recently, deep learning methods, specifically generative adversarial networks (GANs), have rapidly improved a wide range of image-enhancement tasks, including image inpainting and image resolution enhancement, also known as super-resolution. Image-to-image translation methods convert an image provided in a source modality (e.g., a nighttime image) to an image of a target modality (e.g., a daytime image) by learning an image generation function. These methods can be applied to a wide variety of problems in image processing and computer vision, and the use of GANs for image-to-image translation has been studied extensively. We pose the problem of combining image-enhancement tasks (e.g., image inpainting or super-resolution) with the image-to-image translation task in a joint formulation: given a distorted nighttime image of a scene, can one recover a restored daytime image of the same scene? We present two models to address this joint problem. Our models are validated on joint night-to-day image translation and enhancement for both super-resolution and inpainting, and promising qualitative and quantitative results are reported.
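As a minimal illustration of the joint setup (a sketch only, not the thesis's actual pipeline or data), the following NumPy snippet composes two degradations on a hypothetical clean daytime image: a synthetic nighttime darkening and an inpainting-style hole. A joint model g would then be trained so that g(distorted) approximates the clean daytime target, i.e., it must translate night-to-day and fill the hole simultaneously.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean "daytime" target image (H x W x 3, values in [0, 1]).
day = rng.random((64, 64, 3))

# Synthetic nighttime degradation: global darkening plus a slight blue cast.
# (Illustrative only; real night/day pairs would come from captured data.)
night = np.clip(day * 0.25 + np.array([0.0, 0.0, 0.05]), 0.0, 1.0)

# Inpainting-style distortion: zero out a rectangular hole.
distorted = night.copy()
distorted[16:40, 20:48, :] = 0.0

# A joint model g would be trained so that g(distorted) ~ day.
# Here we only measure how far the distorted input is from the target.
mse = float(np.mean((distorted - day) ** 2))
print(f"input-vs-target MSE: {mse:.4f}")
```

The point of the sketch is that the two corruptions compose into a single input, so a single generator must undo both at once rather than solving translation and enhancement in separate stages.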