Human-inspired Adaptive Teaming Systems (HATS)

Research Lead: Volkan Ustun

Interactive simulation environments that incorporate human behavior, as well as similar applications such as virtual worlds and video games, require computational models of intelligence that enable synthetic characters to produce authentic, convincing human-like behavior: coherent sequences of believable actions sustained across different tasks and environments. Achieving credible behavior in these computational intelligence models relies on several crucial factors, including:

    1. Leveraging their perceptual abilities to observe their surroundings, including the synthetic characters and real humans within them.
    2. Autonomously responding to their environment based on their knowledge and perception, including reacting and appropriately adapting to the unfolding events.
    3. Engaging in natural interactions with both real and virtual humans through verbal and nonverbal communication.
    4. Possessing a Theory of Mind (ToM) to model their own mental states and those of others.
    5. Demonstrating an understanding of and appropriately displaying emotions and related behaviors.
    6. Adapting their behavior through experience.

Experiential learning is essential for developing synthetic characters with credible behavior, because attempting to achieve realistic and convincing behavior solely through scripted behavioral control would be highly impractical: it would require authoring rules for every unique task, environment, and potential interaction.

Our group has historically leveraged cognitive architectures, developing the Sigma Cognitive Architecture to devise computational minds for synthetic characters. More recently, we have broadened our focus to generating courses of action and proposing tactics and behaviors for operational planners using interactive simulation environments. In this shift, we have heavily incorporated machine learning approaches, primarily multi-agent reinforcement learning.

Our primary application domain is new-generation simulated training environments for the military, such as the Synthetic Training Environment (STE), which demand intelligent behavior models for synthetic characters. However, these environments present significant challenges:

    1. Complexity: An agent must perform a collection of interrelated tasks rather than just a single task.
    2. Stochasticity: The processes and decisions in these environments are non-deterministic.
    3. Multi-agent: Several agents collaborate and compete in the environment.
    4. Partially observable and continuous: The state and action spaces are continuous, many moves are available at any given simulation state, and the moves of one agent may not always be observable to other agents.
    5. Non-stationary: The processes used by the agents and the environment itself may change during a simulation.
    6. Doctrine-based: Conforming to military hierarchies and policies is essential.

Together, these challenges make machine learning of behavior in military simulations a formidable task. Our research augments Multi-agent Reinforcement Learning (MARL) models, drawing inspiration from operations research, human judgment and decision-making, game theory, graph theory, and cognitive architectures, to better address the challenges in military training simulations. We have successfully created proof-of-concept behavior models for various military training scenarios leveraging our enhanced MARL framework. We utilize both geo-specific terrains in Unity and abstractions derived from these terrains as simulation environments. Our sample simulation environments and enabling libraries for running MARL experiments can be found in our group’s public repository.
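As a point of reference for the MARL and game-theory ingredients mentioned above, the simplest MARL baseline is independent Q-learning, where each agent learns as if the others were part of the environment. The sketch below applies it to a two-player coordination game; it is a minimal, hypothetical illustration of the baseline idea, not our enhanced framework.

```python
import random

def independent_q_learning(payoff, episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Two independent epsilon-greedy Q-learners on a cooperative
    2-player matrix game (minimal MARL baseline, for illustration)."""
    rng = random.Random(seed)
    n = len(payoff)
    q = [[0.0] * n, [0.0] * n]  # one Q-table per agent (stateless game)
    for _ in range(episodes):
        acts = []
        for i in range(2):
            if rng.random() < eps:                           # explore
                acts.append(rng.randrange(n))
            else:                                            # exploit
                acts.append(max(range(n), key=lambda a: q[i][a]))
        r = payoff[acts[0]][acts[1]]      # shared (cooperative) reward
        for i in range(2):                # each agent updates independently
            q[i][acts[i]] += alpha * (r - q[i][acts[i]])
    return q

# Coordination game: the agents are rewarded only for matching actions,
# and coordinating on action 0 pays the most.
payoff = [[2.0, 0.0], [0.0, 1.0]]
q_tables = independent_q_learning(payoff)
```

Because each learner treats the other as a fixed part of a non-stationary environment, naive independent learning degrades as scenarios grow in complexity, which is one motivation for the augmented MARL models described above.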

In summary, our group’s research objectives are:

    • Enhance the quality and complexity of non-player characters in training simulations.
    • Create more realistic and challenging training experiences while reducing the cost and time required for their development.
    • Decrease the dependence of simulations on the availability of human participants.
    • Provide effective feedback to users of these training simulations on how to enhance their performance.