
Random-Shaped Image Inpainting using Dilated Convolution
Nermin M. Salem1, Hani M. K. Mahdi2, and Hazem M. Abbas3

1Nermin M. Salem, Computer and Systems Engineering Department, Ain Shams University, Cairo, Egypt.
2Hani M. K. Mahdi, Electrical Engineering Department, Future University, Cairo, Egypt; Computer and Systems Engineering Department, Ain Shams University, Cairo, Egypt.
3Hazem M. Abbas, Computer and Systems Engineering Department, Ain Shams University, Cairo, Egypt.
Manuscript received on July 20, 2019. | Revised Manuscript received on August 10, 2019. | Manuscript published on August 30, 2019. | PP: 641-647 | Volume-8 Issue-6, August 2019. | Retrieval Number: F8089088619/2019©BEIESP | DOI: 10.35940/ijeat.F8089.088619
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Over the past few years, deep learning-based methods have shown encouraging and inspiring results on one of the most challenging tasks in computer vision and image processing: image inpainting. The difficulty of image inpainting stems from the need for a full and deep understanding of image structure and texture in order to produce accurate and visually plausible results, especially when inpainting relatively large regions. Deep learning methods usually employ convolutional neural networks (CNNs) to process and analyze images with filters that treat all image pixels as valid and typically substitute missing pixels with a mean value. This results in artifacts and blurry inpainted regions that are inconsistent with the rest of the image. In this paper, a novel method is proposed for inpainting random-shaped missing regions of variable size at arbitrary locations across the image. We employ dilated convolutions to aggregate multiscale contextual information without any loss of resolution, and add a mask-modification step after each convolution operation. The proposed method also includes a global discriminator that considers both the scale of patches and the whole image; it is responsible for capturing the local continuity of image texture as well as overall global image features. The performance of the proposed method is evaluated on two datasets (Places2 and Paris Street View). A comparison with recent state-of-the-art methods is also performed to demonstrate the effectiveness of our model in both qualitative and quantitative evaluations.
Keywords: Image inpainting, GAN, L1, dilated convolution.
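
To make the abstract's two key components concrete, below is a minimal PyTorch sketch of (a) a dilated convolution followed by a mask-modification step and (b) a discriminator that scores both local patches and the whole image. All names here (`MaskedDilatedConv`, `GlobalDiscriminator`), the partial-convolution-style mask-update rule, and the layer hyperparameters are illustrative assumptions; the paper's actual architecture and mask rule are not specified in this abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedDilatedConv(nn.Module):
    """Dilated convolution with a mask-modification step (hypothetical sketch).

    The exact mask-update rule is not given in the abstract; a
    partial-convolution-style rule is assumed: a pixel becomes valid once
    any valid pixel falls inside its dilated receptive field.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=2):
        super().__init__()
        # Padding chosen so the spatial resolution is preserved,
        # matching the abstract's "without any loss of resolution".
        padding = dilation * (kernel_size - 1) // 2
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=padding, dilation=dilation)
        # Fixed all-ones kernel used only to propagate the validity mask.
        self.register_buffer(
            "mask_kernel", torch.ones(1, 1, kernel_size, kernel_size))
        self.padding = padding
        self.dilation = dilation

    def forward(self, x, mask):
        # Convolve only on pixels marked valid by the binary mask.
        out = self.conv(x * mask)
        # Mask-modification step: positions whose dilated receptive field
        # touched at least one valid pixel become valid for the next layer.
        with torch.no_grad():
            mask = F.conv2d(mask, self.mask_kernel,
                            padding=self.padding, dilation=self.dilation)
            mask = torch.clamp(mask, 0.0, 1.0)
        return out, mask


class GlobalDiscriminator(nn.Module):
    """Discriminator scoring both patches and the whole image (hypothetical).

    A PatchGAN-style trunk produces a grid of per-patch real/fake logits
    (local texture continuity); averaging them yields a single global
    logit (overall image features).
    """

    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, 1, 4, stride=1, padding=1),  # per-patch logits
        )

    def forward(self, img):
        patch_logits = self.trunk(img)                 # (N, 1, H', W')
        global_logit = patch_logits.mean(dim=(2, 3))   # (N, 1)
        return patch_logits, global_logit


# Usage sketch: a rectangular hole stands in for a random-shaped mask.
layer = MaskedDilatedConv(3, 64)
x = torch.randn(1, 3, 256, 256)
mask = torch.ones(1, 1, 256, 256)
mask[:, :, 96:160, 64:192] = 0.0       # missing region
feat, mask = layer(x, mask)            # feat: (1, 64, 256, 256)
```

Stacking several such layers with increasing dilation rates grows the receptive field at full resolution while the mask progressively "heals", which is one plausible reading of how the method fills arbitrarily located holes of variable size.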