Contextual Based Image Inpainting: Infer, Match and Translate (bibtex)
by Yuhang Song, Chao Yang, Zhe Lin, Xiaofeng Liu, Hao Li, Qin Huang
Abstract:
We study the task of image inpainting, which is to fill in the missing regions of an incomplete image with plausible content. To this end, we propose a learning-based approach to generate a visually coherent completion given a high-resolution image with missing components. To overcome the difficulty of directly learning the distribution of high-dimensional image data, we divide the task into two separate steps, inference and translation, and model each step with a deep neural network. We also use simple heuristics to guide the propagation of local textures from the boundary to the hole. We show that, by using such techniques, inpainting reduces to the problem of learning two image-feature translation functions in a much smaller space, which is hence easier to train. We evaluate our method on several public datasets and show that we generate results of better visual quality than previous state-of-the-art methods.
Reference:
Contextual Based Image Inpainting: Infer, Match and Translate (Yuhang Song, Chao Yang, Zhe Lin, Xiaofeng Liu, Hao Li, Qin Huang), In Proceedings of the 15th European Conference on Computer Vision, Computer Vision Foundation, 2018.
Bibtex Entry:
@inproceedings{song_contextual_2018,
	address = {Munich, Germany},
	title = {Contextual {Based} {Image} {Inpainting}: {Infer}, {Match} and {Translate}},
	url = {http://openaccess.thecvf.com/content_ECCV_2018/papers/Yuhang_Song_Contextual_Based_Image_ECCV_2018_paper.pdf},
	abstract = {We study the task of image inpainting, which is to fill in the missing regions of an incomplete image with plausible content. To this end, we propose a learning-based approach to generate a visually coherent completion given a high-resolution image with missing components. To overcome the difficulty of directly learning the distribution of high-dimensional image data, we divide the task into two separate steps, inference and translation, and model each step with a deep neural network. We also use simple heuristics to guide the propagation of local textures from the boundary to the hole. We show that, by using such techniques, inpainting reduces to the problem of learning two image-feature translation functions in a much smaller space, which is hence easier to train. We evaluate our method on several public datasets and show that we generate results of better visual quality than previous state-of-the-art methods.},
	booktitle = {Proceedings of the 15th {European} {Conference} on {Computer} {Vision}},
	publisher = {Computer Vision Foundation},
	author = {Song, Yuhang and Yang, Chao and Lin, Zhe and Liu, Xiaofeng and Li, Hao and Huang, Qin},
	month = sep,
	year = {2018},
	keywords = {Graphics, UARC}
}