High-Resolution Stereo Matching based on Sampled Photoconsistency Computation (bibtex)
by Chloe LeGendre, Konstantinos Batsos, Philippos Mordohai
Abstract:
We propose an approach to binocular stereo that avoids exhaustive photoconsistency computations at every pixel, since they are redundant and computationally expensive, especially for high-resolution images. We argue that developing scalable stereo algorithms is critical as image resolution is expected to continue increasing rapidly. Our approach relies on oversegmentation of the images into superpixels, followed by photoconsistency computation for only a random subset of the pixels of each superpixel. This generates sparse reconstructed points which are used to fit planes. Plane hypotheses are propagated among neighboring superpixels, and they are evaluated at each superpixel by selecting a random subset of pixels on which to aggregate photoconsistency scores for the competing planes. We performed extensive tests to characterize the performance of this algorithm in terms of accuracy and speed on the full-resolution stereo pairs of the 2014 Middlebury benchmark, which contains up to 6-megapixel images. Our results show that very large computational savings can be achieved at a small loss of accuracy. A multi-threaded implementation of our method is faster than other methods that achieve similar accuracy, and thus it provides a useful accuracy-speed tradeoff.
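The pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the sum-of-absolute-differences photoconsistency cost, and the least-squares plane fit are illustrative assumptions, shown only to make the three core steps concrete: sampling a pixel subset per superpixel, fitting a disparity plane to sparse points, and scoring a candidate plane on a sampled subset.

```python
import numpy as np


def sample_pixels(labels, n_samples, rng):
    """Step 1: for each superpixel, pick a random subset of its pixels.

    labels: (H, W) integer superpixel label map.
    Returns {label: (k, 2) array of (y, x) coordinates}.
    """
    samples = {}
    for sp in np.unique(labels):
        ys, xs = np.nonzero(labels == sp)
        k = min(n_samples, len(ys))
        idx = rng.choice(len(ys), size=k, replace=False)
        samples[sp] = np.stack([ys[idx], xs[idx]], axis=1)
    return samples


def fit_plane(points, disparities):
    """Step 2: least-squares fit of d = a*x + b*y + c to sparse samples.

    points: (n, 2) array of (y, x); disparities: (n,) sparse estimates.
    Returns the plane coefficients (a, b, c).
    """
    A = np.column_stack([points[:, 1], points[:, 0], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, disparities, rcond=None)
    return coeffs


def score_plane(left, right, pixels, plane):
    """Step 3: aggregate a photoconsistency cost (here, SAD) for one
    candidate plane over a sampled pixel subset; lower is better."""
    a, b, c = plane
    cost = 0.0
    for y, x in pixels:
        d = a * x + b * y + c
        xr = int(round(x - d))  # corresponding column in the right image
        if 0 <= xr < right.shape[1]:
            cost += abs(float(left[y, x]) - float(right[y, xr]))
        else:
            cost += 255.0  # penalty for samples that project out of view
    return cost
```

In a full system, competing plane hypotheses propagated from neighboring superpixels would each be passed through `score_plane` on the same sampled subset, and the lowest-cost plane would be kept; sampling is what keeps the cost sublinear in superpixel size.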
Reference:
High-Resolution Stereo Matching based on Sampled Photoconsistency Computation (Chloe LeGendre, Konstantinos Batsos, Philippos Mordohai), In Proceedings of the British Machine Vision Conference 2017, 2017.
Bibtex Entry:
@inproceedings{legendre_high-resolution_2017,
	address = {London, UK},
	title = {High-{Resolution} {Stereo} {Matching} based on {Sampled} {Photoconsistency} {Computation}},
	url = {http://ict.usc.edu/pubs/High-Resolution%20Stereo%20Matching%20based%20on%20Sampled%20Photoconsistency%20Computation.pdf},
	abstract = {We propose an approach to binocular stereo that avoids exhaustive photoconsistency computations at every pixel, since they are redundant and computationally expensive, especially for high-resolution images. We argue that developing scalable stereo algorithms is critical as image resolution is expected to continue increasing rapidly. Our approach relies on oversegmentation of the images into superpixels, followed by photoconsistency computation for only a random subset of the pixels of each superpixel. This generates sparse reconstructed points which are used to fit planes. Plane hypotheses are propagated among neighboring superpixels, and they are evaluated at each superpixel by selecting a random subset of pixels on which to aggregate photoconsistency scores for the competing planes. We performed extensive tests to characterize the performance of this algorithm in terms of accuracy and speed on the full-resolution stereo pairs of the 2014 Middlebury benchmark, which contains up to 6-megapixel images. Our results show that very large computational savings can be achieved at a small loss of accuracy. A multi-threaded implementation of our method is faster than other methods that achieve similar accuracy, and thus it provides a useful accuracy-speed tradeoff.},
	booktitle = {Proceedings of the {British} {Machine} {Vision} {Conference} 2017},
	author = {LeGendre, Chloe and Batsos, Konstantinos and Mordohai, Philippos},
	month = sep,
	year = {2017},
	keywords = {Graphics}
}