A Framework for Action Detection in Virtual Training Simulations using Synthetic Training Data (bibtex)
by Feng, Andrew and Gordon, Andrew S
Abstract:
In virtual military training, tracking and evaluating trainee behavior throughout a simulation exercise helps address specific training needs, improve the realism of simulations, and customize the training experience. While it is straightforward to parse the event log of a simulation to identify atomic behaviors such as unit movements or attacks, it remains difficult to fuse these events into higher-level actions that better characterize trainees’ intentions and tactics. For example, if each unit is controlled by an individual trainee, how should the movement information from all units be aggregated to determine what formation the group is moving in? Similarly, how can all of the information from nearby terrain environments be combined with kinetic actions to determine whether the trainees are executing an ambush attack or simply engaging the enemy group? While an experienced human observer-controller can quickly assess the battle map to provide an appropriate interpretation for such events, it remains a challenging task for computers to automatically detect such high-level behaviors when performed by human trainees. In this work, we propose a machine-learning (ML) framework for recognizing tactical events in virtual training environments. In our approach, unit movements, surrounding environments, and other atomic events are represented as a 2D image, allowing us to solve the action detection problem as image classification and video temporal segmentation tasks. In order to bootstrap ML models for these tasks, we utilize synthetic training data to procedurally generate a large amount of annotated data. We demonstrate the effectiveness of this framework in the context of a virtual military training prototype, detecting troop formations and other tactical events such as ambush and patrolling.
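The core representational idea in the abstract — rasterizing unit positions and surrounding terrain into a 2D image so that formation detection becomes an image-classification problem — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, channel layout, coordinate convention, and grid size are all hypothetical assumptions for illustration only.

```python
import numpy as np

def render_state_image(unit_positions, terrain_mask, grid_size=64):
    """Rasterize a simulation snapshot into a 2-channel 2D image.

    unit_positions: iterable of (x, y) map coordinates in [0, 1)
                    (a hypothetical normalized format).
    terrain_mask:   (grid_size, grid_size) array marking terrain features
                    (e.g. impassable cells) in {0, 1}.
    Returns a float32 array of shape (grid_size, grid_size, 2):
    channel 0 holds unit occupancy, channel 1 holds terrain.
    """
    img = np.zeros((grid_size, grid_size, 2), dtype=np.float32)
    for x, y in unit_positions:
        col = min(int(x * grid_size), grid_size - 1)
        row = min(int(y * grid_size), grid_size - 1)
        img[row, col, 0] = 1.0  # mark the cell occupied by a unit
    img[:, :, 1] = terrain_mask  # terrain context as a second channel
    return img

# Five units arranged in a rough wedge, rasterized onto a 64x64 grid;
# the resulting image could be fed to a CNN formation classifier.
units = [(0.50, 0.20), (0.45, 0.30), (0.55, 0.30), (0.40, 0.40), (0.60, 0.40)]
frame = render_state_image(units, np.zeros((64, 64), dtype=np.float32))
```

Stacking such frames over time would yield the video-like sequence on which temporal segmentation of tactical events (ambush, patrolling) could then operate, and synthetic annotated data can be generated by procedurally sampling unit trajectories and rendering them the same way.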
Reference:
A Framework for Action Detection in Virtual Training Simulations using Synthetic Training Data (Feng, Andrew and Gordon, Andrew S), In Proceedings of the I/ITSEC 2020 Conference, 2020.
Bibtex Entry:
@inproceedings{feng_framework_2020,
	address = {Orlando, FL},
	title = {A {Framework} for {Action} {Detection} in {Virtual} {Training} {Simulations} using {Synthetic} {Training} {Data}},
	url = {https://www.xcdsystem.com/iitsec/proceedings/index.cfm?Year=2020&AbID=79448&CID=572#View},
	abstract = {In virtual military training, tracking and evaluating trainee behavior throughout a simulation exercise helps address specific training needs, improve the realism of simulations, and customize the training experience. While it is straightforward to parse the event log of a simulation to identify atomic behaviors such as unit movements or attacks, it remains difficult to fuse these events into higher-level actions that better characterize trainees’ intentions and tactics. For example, if each unit is controlled by an individual trainee, how should the movement information from all units be aggregated to determine what formation the group is moving in? Similarly, how can all of the information from nearby terrain environments be combined with kinetic actions to determine whether the trainees are executing an ambush attack or simply engaging the enemy group? While an experienced human observer-controller can quickly assess the battle map to provide an appropriate interpretation for such events, it remains a challenging task for computers to automatically detect such high-level behaviors when performed by human trainees. 

In this work, we propose a machine-learning (ML) framework for recognizing tactical events in virtual training environments. In our approach, unit movements, surrounding environments, and other atomic events are represented as a 2D image, allowing us to solve the action detection problem as image classification and video temporal segmentation tasks. In order to bootstrap ML models for these tasks, we utilize synthetic training data to procedurally generate a large amount of annotated data. We demonstrate the effectiveness of this framework in the context of a virtual military training prototype, detecting troop formations and other tactical events such as ambush and patrolling.},
	booktitle = {Proceedings of the {I}/{ITSEC} 2020 {Conference}},
	author = {Feng, Andrew and Gordon, Andrew S},
	month = nov,
	year = {2020},
	keywords = {Narrative}
}