Fully Automated Photogrammetric Data Segmentation and Object Information Extraction Approach for Creating Simulation Terrain (bibtex)
by Chen, Meida, Feng, Andrew, Prasad, Pratusha Bhuvana, McAlinden, Ryan, Soibelman, Lucio and Enloe, Mike
Abstract:
Our previous work has demonstrated that visually realistic 3D meshes can be automatically reconstructed with low-cost, off-the-shelf unmanned aerial systems (UAS) equipped with capable cameras and efficient photogrammetric software techniques (McAlinden, Suma, Grechkin, & Enloe, 2015; Spicer, McAlinden, Conover, & Adelphi, 2016). However, the generated data contain no semantic information about objects (e.g., man-made structures, vegetation, ground, object materials) and therefore do not support sophisticated user-level and system-level interaction. Given the use of these data in creating realistic virtual environments for training and simulation (e.g., mission planning, rehearsal, threat detection), segmenting the data and extracting object information are essential tasks. Previous studies have made valuable contributions to segmenting 3D point clouds generated by Light Detection and Ranging (LIDAR) and to classifying ground materials from real-world images, but only a few have addressed data created with photogrammetric techniques.
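To make the segmentation task described in the abstract concrete, the following is a minimal illustrative sketch, not the method from the paper, of assigning coarse ground/vegetation/man-made labels to a photogrammetric point cloud. The synthetic input, the height and greenness thresholds, and the three-class scheme are all assumptions for demonstration only.

    import numpy as np

    # Hypothetical illustration only -- NOT the pipeline described in the paper.
    # Input: an (N, 6) array of photogrammetric points [x, y, z, r, g, b],
    # synthesized here for demonstration.
    rng = np.random.default_rng(0)
    n = 1000
    xyz = rng.uniform(0, 50, size=(n, 3))
    xyz[:, 2] = rng.uniform(0, 15, size=n)          # heights 0-15 m
    rgb = rng.uniform(0, 1, size=(n, 3))
    points = np.hstack([xyz, rgb])

    # 1. Estimate a flat ground level from the lowest height percentile.
    #    (Real terrain would need a proper ground-filtering algorithm.)
    ground_z = np.percentile(points[:, 2], 5)
    height = points[:, 2] - ground_z

    # 2. Assign coarse semantic labels with assumed thresholds:
    #    0 = ground, 1 = vegetation (elevated and green-dominant), 2 = man-made.
    greenness = points[:, 4] - 0.5 * (points[:, 3] + points[:, 5])
    labels = np.full(n, 2)                           # default: man-made
    labels[height < 0.3] = 0                         # near ground level
    labels[(height >= 0.3) & (greenness > 0.1)] = 1  # elevated and green

    for cls, name in enumerate(["ground", "vegetation", "man-made"]):
        print(f"{name}: {np.sum(labels == cls)} points")

A production pipeline would replace these hand-tuned heuristics with learned geometric and radiometric features, which is the kind of automation the paper is concerned with.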
Reference:
Fully Automated Photogrammetric Data Segmentation and Object Information Extraction Approach for Creating Simulation Terrain (Chen, Meida, Feng, Andrew, Prasad, Pratusha Bhuvana, McAlinden, Ryan, Soibelman, Lucio and Enloe, Mike), In Proceedings of the Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), ResearchGate, 2020.
Bibtex Entry:
@inproceedings{chen_fully_2020,
	address = {Orlando, FL},
	title = {Fully {Automated} {Photogrammetric} {Data} {Segmentation} and {Object} {Information} {Extraction} {Approach} for {Creating} {Simulation} {Terrain}},
	url = {https://www.researchgate.net/publication/338557943_Fully_Automated_Photogrammetric_Data_Segmentation_and_Object_Information_Extraction_Approach_for_Creating_Simulation_Terrain},
	abstract = {Our previous work has demonstrated that visually realistic 3D meshes can be automatically reconstructed with low-cost, off-the-shelf unmanned aerial systems (UAS) equipped with capable cameras and efficient photogrammetric software techniques (McAlinden, Suma, Grechkin, \& Enloe, 2015; Spicer, McAlinden, Conover, \& Adelphi, 2016). However, the generated data contain no semantic information about objects (e.g., man-made structures, vegetation, ground, object materials) and therefore do not support sophisticated user-level and system-level interaction. Given the use of these data in creating realistic virtual environments for training and simulation (e.g., mission planning, rehearsal, threat detection), segmenting the data and extracting object information are essential tasks. Previous studies have made valuable contributions to segmenting 3D point clouds generated by Light Detection and Ranging (LIDAR) and to classifying ground materials from real-world images, but only a few have addressed data created with photogrammetric techniques.},
	booktitle = {Proceedings of the {Interservice}/{Industry} {Training}, {Simulation}, and {Education} {Conference} ({I}/{ITSEC})},
	publisher = {ResearchGate},
	author = {Chen, Meida and Feng, Andrew and Prasad, Pratusha Bhuvana and McAlinden, Ryan and Soibelman, Lucio and Enloe, Mike},
	month = jan,
	year = {2020},
	keywords = {Graphics, Narrative, STG, UARC},
	pages = {13}
}