An Intelligent Image Inpainting Autoencoder Model for Irregular Mask
Image inpainting is a method of filling the missing, damaged, or blurred portions of a digital image with the most plausible content, or of filling the holes left after removing objects from the image, using information from the regions neighbouring the holes. Existing inpainting methods have produced remarkable results in reconstructing damaged areas of a picture. However, filling missing regions that contain complex structure and texture remains a challenge. Deep learning, which mimics the human brain, is well suited to learning the missing content by extracting rich features and optimising a suitable loss function.
In this study, an autoencoder with partial convolutional layers is used to handle irregular masks. High-level feature losses are used in addition to the l1 and l2 losses to reconstruct the damaged image. A Dice coefficient loss function is used to train the model so that the inpainted image is as close as possible to the ground-truth image.
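To make the two ingredients above concrete, the following is a minimal pure-Python sketch of a single partial-convolution step on a 2D grayscale image and of the Dice coefficient loss. The function names, the binary mask convention (1 = valid pixel, 0 = hole), and the simple coverage renormalisation are illustrative assumptions, not the exact implementation used in this work.

```python
def partial_conv2d(image, mask, kernel):
    """One partial-convolution pass: convolve only over valid pixels,
    renormalise by mask coverage, and shrink the hole in the mask."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out_h, out_w = h - kh + 1, w - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    new_mask = [[0] * out_w for _ in range(out_h)]
    total_w = kh * kw
    for i in range(out_h):
        for j in range(out_w):
            acc, valid = 0.0, 0
            for di in range(kh):
                for dj in range(kw):
                    m = mask[i + di][j + dj]
                    acc += kernel[di][dj] * image[i + di][j + dj] * m
                    valid += m
            if valid > 0:
                # Scale by (window size / valid pixels) to compensate
                # for the masked-out contributions.
                out[i][j] = acc * total_w / valid
                # The output pixel is valid if any input pixel was valid,
                # so the hole shrinks layer by layer.
                new_mask[i][j] = 1
    return out, new_mask


def dice_loss(pred, target, eps=1e-7):
    """1 - Dice coefficient between two flattened images in [0, 1]."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)
```

Stacking such layers lets the encoder progressively fill the hole: each layer produces valid values at positions where at least one input pixel was known, and the updated mask records that progress for the next layer.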
The performance of the trained model is validated and tested on the CelebA dataset with irregular masks, and image quality is assessed using the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM). Our model achieves a mean PSNR of 24.82 dB and a mean SSIM of 91.86%. The results show that the proposed intelligent image inpainting model for irregular masks outperforms existing image inpainting techniques.
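For reference, a minimal sketch of the PSNR metric used in the evaluation, assuming 8-bit images flattened to equal-length lists of pixel values in [0, 255]. SSIM involves local means, variances, and covariances over sliding windows and is omitted here for brevity.

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two pixel lists."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise, infinite PSNR
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR indicates a reconstruction closer to the ground truth; values in the mid-20s dB, as reported above, are typical for inpainting with large irregular holes.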