Transformable Bottleneck Networks (bibtex)
by Kyle Olszewski, Sergey Tulyakov, Oliver Woodford, Hao Li, Linjie Luo
Abstract:
We propose a novel approach to performing fine-grained 3D manipulation of image content via a convolutional neural network, which we call the Transformable Bottleneck Network (TBN). It applies given spatial transformations directly to a volumetric bottleneck within our encoder-bottleneck-decoder architecture. Multi-view supervision encourages the network to learn to spatially disentangle the feature space within the bottleneck. The resulting spatial structure can be manipulated with arbitrary spatial transformations. We demonstrate the efficacy of TBNs for novel view synthesis, achieving state-of-the-art results on a challenging benchmark. We demonstrate that the bottlenecks produced by networks trained for this task contain meaningful spatial structure that allows us to intuitively perform a variety of image manipulations in 3D, well beyond the rigid transformations seen during training. These manipulations include non-uniform scaling, non-rigid warping, and combining content from different images. Finally, we extract explicit 3D structure from the bottleneck, performing impressive 3D reconstruction from a single input image.
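The core operation the abstract describes, resampling a volumetric feature bottleneck under a spatial transformation before decoding, can be sketched in a few lines. The function below is an illustrative nearest-neighbour backward warp over a (C, D, H, W) feature volume; the names and the resampling scheme are assumptions for exposition, not the paper's actual implementation (which would typically use differentiable trilinear sampling inside the network).

```python
import numpy as np

def transform_bottleneck(vol, matrix):
    """Illustrative sketch: resample a (C, D, H, W) feature volume under a
    3x3 linear map applied about the volume centre, using nearest-neighbour
    backward warping. Out-of-bounds voxels are filled with zeros."""
    C, D, H, W = vol.shape
    out = np.zeros_like(vol)
    centre = np.array([D, H, W]) / 2.0 - 0.5
    inv = np.linalg.inv(matrix)
    # Enumerate all output voxel coordinates (z, y, x).
    zs, ys, xs = np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                             indexing="ij")
    dst = np.stack([zs, ys, xs], axis=-1).reshape(-1, 3)
    # Backward warp: each output voxel samples the source voxel it maps from.
    src = (dst - centre) @ inv.T + centre
    src = np.rint(src).astype(int)
    valid = np.all((src >= 0) & (src < [D, H, W]), axis=1)
    d, s = dst[valid], src[valid]
    out[:, d[:, 0], d[:, 1], d[:, 2]] = vol[:, s[:, 0], s[:, 1], s[:, 2]]
    return out

# Example: a 90-degree rotation of the bottleneck about the depth axis,
# i.e. the kind of rigid transform used for novel view synthesis.
rot90 = np.array([[1., 0., 0.],
                  [0., 0., -1.],
                  [0., 1., 0.]])
bottleneck = np.arange(2 * 4 * 4 * 4, dtype=float).reshape(2, 4, 4, 4)
rotated = transform_bottleneck(bottleneck, rot90)
```

In the full architecture this resampling sits between the encoder and decoder, so gradients from the multi-view reconstruction loss flow through it; that is what pushes the network to organise the bottleneck spatially rather than as an arbitrary latent code.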
Reference:
Transformable Bottleneck Networks (Kyle Olszewski, Sergey Tulyakov, Oliver Woodford, Hao Li, Linjie Luo), arXiv:1904.06458 [cs], 2019.
Bibtex Entry:
@article{olszewski_transformable_2019,
	title = {Transformable {Bottleneck} {Networks}},
	url = {http://arxiv.org/abs/1904.06458},
	abstract = {We propose a novel approach to performing fine-grained 3D manipulation of image content via a convolutional neural network, which we call the Transformable Bottleneck Network (TBN). It applies given spatial transformations directly to a volumetric bottleneck within our encoder-bottleneck-decoder architecture. Multi-view supervision encourages the network to learn to spatially disentangle the feature space within the bottleneck. The resulting spatial structure can be manipulated with arbitrary spatial transformations. We demonstrate the efficacy of TBNs for novel view synthesis, achieving state-of-the-art results on a challenging benchmark. We demonstrate that the bottlenecks produced by networks trained for this task contain meaningful spatial structure that allows us to intuitively perform a variety of image manipulations in 3D, well beyond the rigid transformations seen during training. These manipulations include non-uniform scaling, non-rigid warping, and combining content from different images. Finally, we extract explicit 3D structure from the bottleneck, performing impressive 3D reconstruction from a single input image.},
	journal = {arXiv:1904.06458 [cs]},
	author = {Olszewski, Kyle and Tulyakov, Sergey and Woodford, Oliver and Li, Hao and Luo, Linjie},
	month = aug,
	year = {2019},
	note = {arXiv: 1904.06458},
	keywords = {Graphics}
}