Photogrammetric Point Cloud Segmentation and Object Information Extraction for Creating Virtual Environments and Simulations
by Chen, Meida, Feng, Andrew, McAlinden, Ryan and Soibelman, Lucio
Abstract:
Photogrammetric techniques have dramatically improved over the last few years, enabling the creation of visually compelling three-dimensional (3D) meshes using unmanned aerial vehicle imagery. These high-quality 3D meshes have attracted notice from both academicians and industry practitioners in developing virtual environments and simulations. However, photogrammetrically generated point clouds and meshes do not allow both user-level and system-level interaction because they do not contain the semantic information to distinguish between objects. Thus, segmenting generated point clouds and meshes and extracting the associated object information is a necessary step. A framework for point cloud and mesh classification and segmentation is presented in this paper. The proposed framework was designed considering photogrammetric data-quality issues and provides a novel way of extracting object information, including (1) individual tree locations and related features and (2) building footprints. Experiments were conducted to rank different point descriptors and evaluate supervised machine-learning algorithms for segmenting photogrammetrically generated point clouds. The proposed framework was validated using data collected at the University of Southern California (USC) and the Muscatatuck Urban Training Center (MUTC). DOI: 10.1061/(ASCE)ME.1943-5479.0000737. © 2019 American Society of Civil Engineers.
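The abstract mentions ranking point descriptors and evaluating supervised machine-learning algorithms for point-cloud segmentation. As a rough illustration of that kind of pipeline (not the authors' actual implementation), the sketch below computes standard covariance-based (eigenvalue) descriptors over k-nearest-neighbor neighborhoods and trains a random-forest classifier; the descriptor set, the classifier choice, and the synthetic data are all assumptions made for illustration.

    # Illustrative sketch only: eigenvalue-based point descriptors plus a
    # supervised classifier, a common recipe for point-cloud segmentation.
    # This is NOT the paper's feature set or method.
    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def covariance_descriptors(points, k=20):
        """Per-point geometric descriptors from k-NN covariance eigenvalues."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        feats = np.zeros((len(points), 4))
        for i, nbrs in enumerate(idx):
            cov = np.cov(points[nbrs].T)
            # eigvalsh returns ascending eigenvalues; reverse so l1 >= l2 >= l3.
            l1, l2, l3 = np.linalg.eigvalsh(cov)[::-1]
            s = l1 + l2 + l3 + 1e-12
            feats[i] = [
                (l1 - l2) / (l1 + 1e-12),  # linearity
                (l2 - l3) / (l1 + 1e-12),  # planarity
                l3 / (l1 + 1e-12),         # sphericity
                l3 / s,                    # change of curvature
            ]
        return feats

    # Synthetic stand-in data; a real experiment would use labeled
    # photogrammetric point clouds (e.g., ground / building / tree classes).
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 10, size=(2000, 3))
    labels = (pts[:, 2] > 5).astype(int)  # placeholder binary labels

    X = np.hstack([covariance_descriptors(pts), pts[:, 2:3]])  # add height
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    # Forest feature importances give one crude way to "rank" descriptors.
    print("descriptor importances:", clf.feature_importances_)

Feature importances from the forest are only a crude proxy for the descriptor ranking the paper reports; the actual study evaluates multiple descriptors and learners on real photogrammetric data from USC and MUTC.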
Reference:
Photogrammetric Point Cloud Segmentation and Object Information Extraction for Creating Virtual Environments and Simulations (Chen, Meida, Feng, Andrew, McAlinden, Ryan and Soibelman, Lucio), In Journal of Management in Engineering, volume 36, number 2, pages 04019046, 2019.
Bibtex Entry:
@article{chen_photogrammetric_2019,
	title = {Photogrammetric {Point} {Cloud} {Segmentation} and {Object} {Information} {Extraction} for {Creating} {Virtual} {Environments} and {Simulations}},
	volume = {36},
	issn = {0742-597X, 1943-5479},
	url = {http://ascelibrary.org/doi/10.1061/%28ASCE%29ME.1943-5479.0000737},
	doi = {10.1061/(ASCE)ME.1943-5479.0000737},
	abstract = {Photogrammetric techniques have dramatically improved over the last few years, enabling the creation of visually compelling three-dimensional (3D) meshes using unmanned aerial vehicle imagery. These high-quality 3D meshes have attracted notice from both academicians and industry practitioners in developing virtual environments and simulations. However, photogrammetrically generated point clouds and meshes do not allow both user-level and system-level interaction because they do not contain the semantic information to distinguish between objects. Thus, segmenting generated point clouds and meshes and extracting the associated object information is a necessary step. A framework for point cloud and mesh classification and segmentation is presented in this paper. The proposed framework was designed considering photogrammetric data-quality issues and provides a novel way of extracting object information, including (1) individual tree locations and related features and (2) building footprints. Experiments were conducted to rank different point descriptors and evaluate supervised machine-learning algorithms for segmenting photogrammetrically generated point clouds. The proposed framework was validated using data collected at the University of Southern California (USC) and the Muscatatuck Urban Training Center (MUTC). DOI: 10.1061/(ASCE)ME.1943-5479.0000737. © 2019 American Society of Civil Engineers.},
	number = {2},
	journal = {Journal of Management in Engineering},
	author = {Chen, Meida and Feng, Andrew and McAlinden, Ryan and Soibelman, Lucio},
	month = nov,
	year = {2019},
	keywords = {STG, UARC},
	pages = {04019046}
}