Military Terrain for Games Pipeline

Project Leader: Chirag Merchant

Commercial gaming technology has advanced dramatically over the past decades, pushing the fidelity of virtual environments to new heights. Alongside these growing capabilities has come a growing need for rapidly generated military terrain for use in virtual training environments, simulations, and exercises. However, a leading AAA gaming title typically takes around five years and $50 million to produce, with roughly 70% of those resources devoted to creating the virtual environments. This heavy art investment is required because the work is predominantly manual, making the process extremely labor intensive and time consuming. The US military desires immersive and realistic virtual terrain but lacks those lofty resources, so the work must be accomplished at a fraction of the cost and in a fraction of the time. Further challenges arise because these datasets must be current, accurate, relevant, and correlated with other military simulations such as OneSAF.

In conjunction with the US Army Simulation and Training Technology Center (STTC), the University of Southern California Institute for Creative Technologies (USC ICT) has developed the Military Terrain for Games Pipeline (MTGP): a set of automated processes and tools that procedurally analyze military terrain data derived from various sources (LIDAR, DTED, etc.) and delivered in COLLADA format, correct any issues, enhance the aesthetics of the scene, and then export it for use with the systems the military uses most, such as Virtual Battlespace 2 (VBS2) and OneSAF, as well as other widely used game engines like Gamebryo and Source. Through this pipeline, the MTGP produces immersive, game-engine-ready environments with little to no human intervention and in a fraction of the time it would take to perform these tasks manually.

The MTGP optimizes and enhances incoming datasets through the following processes:

  • Pre-processing
  • COLLADA import
  • Procedural texturing
  • Augmentation with geo-typical objects and clutter
  • Virtual environment generation for game/simulation engines
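
The stage ordering above can be pictured as a simple sequential pipeline. The sketch below is purely illustrative: the function names, the `Terrain` container, and the `engine` parameter are hypothetical placeholders, not the actual MTGP tool APIs.

```python
# Illustrative sketch of the MTGP stage ordering. All names here
# (Terrain, preprocess, import_collada, etc.) are hypothetical,
# not the real ICT tool interfaces.
from dataclasses import dataclass, field

@dataclass
class Terrain:
    """Stand-in for a terrain dataset moving through the pipeline."""
    source: str
    log: list = field(default_factory=list)

def preprocess(t: Terrain) -> Terrain:
    t.log.append("pre-process")            # clean/normalize incoming data
    return t

def import_collada(t: Terrain) -> Terrain:
    t.log.append("import COLLADA")         # load the COLLADA scene
    return t

def texture(t: Terrain) -> Terrain:
    t.log.append("procedural texturing")   # apply procedural textures
    return t

def augment(t: Terrain) -> Terrain:
    t.log.append("augment geo-typical clutter")  # add objects/clutter
    return t

def export(t: Terrain, engine: str) -> Terrain:
    t.log.append(f"export for {engine}")   # emit engine-ready output
    return t

PIPELINE = [preprocess, import_collada, texture, augment]

def run_pipeline(source: str, engine: str = "VBS2") -> Terrain:
    """Run every stage in order, then export for the target engine."""
    t = Terrain(source)
    for stage in PIPELINE:
        t = stage(t)
    return export(t, engine)
```

The sequential list of stage functions makes it easy to see how a fully automated run would chain the steps, e.g. `run_pipeline("dted_tile", engine="VBS2")`.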


Goals
  • Create geo-typical, immersive virtual environments that tap into the capabilities of today’s gaming technologies, at a fraction of the time, manpower, and cost
  • Research and adapt existing photogrammetry, image processing, and computer vision methods to incorporate real-world information about the area of interest, making the generated environments increasingly “geo-specific”
  • Develop the user interfaces and tools needed to enable training developers with no programming background to supply the necessary configuration information and create these virtual environments at the push of a button

External Collaborators

  • Simulation and Training Technology Center
  • TRADOC Capabilities Manager (TCM) Gaming
  • Applied Research Associates, Inc.
  • UCF Institute for Simulation and Training