By Dr. Nik Gurney, (Interim) Research Lead, Social Simulation Lab, USC ICT
My research exists at the intersection of information, communication, computation, and economics. I combine human-subject experiments with computational methods to answer questions like:
- Can a machine represent the mental states of a human?
- How do people ‘fill in the blanks’ when a better-informed party, such as a conversational AI, strategically withholds information?
- Is a conversational format of disclosure better than the standard, tabular format?
- How do complex preferences emerge in an adaptive system, whether that system is a human, a machine intelligence, a social group, or a society?
Essentially, I research how humans think about what machines know and how machines can think about what humans know.
I completed a Ph.D. in Carnegie Mellon University’s Department of Social and Decision Sciences in May 2020, then joined USC ICT as a postdoctoral researcher and eventually became a research scientist there. In late 2024, I took on the role of (interim) Research Lead for the Social Simulation Lab, where I continue this work.
Social Simulation Lab at ICT
ICT’s Social Simulation Lab models and simulates human social interaction within AI systems. Research includes both descriptive models for simulating human-like decision-making and prescriptive models for human-machine teaming with autonomous agents.
Core Models and Techniques
The Social Simulation Lab has long relied on a decision-theoretic AI modeling approach as the foundation for its core work: developing a theory-of-mind system that is both accurate and portable. Accuracy stems from the decision-theoretic models’ ability to capture realistic human decision-making processes; portability comes from the approach’s domain-agnostic nature, which lets the same models be applied across domains and applications. Data-driven algorithms provide an automated mechanism for building and validating social simulations, and abstraction and approximation methods allow the models to scale to larger and more complex social decision-making.
In less technical terms, the Social Simulation Lab builds computer models that help us understand how people make decisions and interact with each other. The approach combines psychology and mathematics to create accurate representations of human behavior that can be applied across many different situations. By analyzing real-world data, the lab can automatically build and test social simulations, and simplified, approximate models let it study larger and more complex social interactions. This helps the lab develop systems that better understand human thinking and behavior in a variety of contexts.
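To make the idea concrete, here is a minimal, self-contained sketch of decision-theoretic theory-of-mind reasoning in plain Python. It is an illustration only, not the lab’s models or the PsychSim API: the actions, payoffs, and the one-level mental model of the human are all invented for the example.

```python
# Conceptual sketch of decision-theoretic theory-of-mind reasoning.
# A hypothetical toy example, not the PsychSim API or the lab's models.

AI_ACTIONS = ["share_info", "withhold_info"]
HUMAN_ACTIONS = ["cooperate", "defect"]

def human_reward(human_action, ai_action):
    """Toy reward the AI *believes* the human is optimizing."""
    payoff = {
        ("cooperate", "share_info"): 2.0,
        ("cooperate", "withhold_info"): -1.0,
        ("defect", "share_info"): 0.5,
        ("defect", "withhold_info"): 0.0,
    }
    return payoff[(human_action, ai_action)]

def predicted_human_action(ai_action):
    """One-level mental model: assume the human best-responds to the AI."""
    return max(HUMAN_ACTIONS, key=lambda h: human_reward(h, ai_action))

def ai_reward(ai_action, human_action):
    """The AI's own objective: it wants the human to cooperate."""
    return 1.0 if human_action == "cooperate" else -1.0

def choose_ai_action():
    """Decision-theoretic choice: maximize reward under the AI's model
    of the human's decision making."""
    return max(AI_ACTIONS,
               key=lambda a: ai_reward(a, predicted_human_action(a)))

print(choose_ai_action())  # -> "share_info"
```

The key pattern is that the machine’s choice is an expected-utility maximization over what it predicts the person will do; the lab’s actual agents apply the same structure at much greater fidelity and with learned, domain-agnostic models.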
PsychSim: An Open-Source Library
The social-simulation architecture, PsychSim, provides an open-source library of these algorithms. It has been used for:
- Large-scale simulations of urban populations (e.g., hurricane response, patterns of life, terrorist attacks).
- Small-scale human-machine teaming (e.g., search-and-rescue).
- Interactive training games for urban stabilization (UrbanSim), cross-cultural negotiation (BiLAT), foreign language and culture (TacLang), and avoiding risky behavior (SOLVE).
Advancing Human-Machine Teaming
A new investigation into human-machine teaming seeks to build an AI model of teamwork into PsychSim that can give autonomous systems the social skills to be good teammates when working with people. This work builds on our past research under the DARPA Artificial Social Intelligence in Support of Teams (ASIST) program. The basic modeling approach is to train machine learning models on communication data from teaming experiments and then use the insights those models produce to enable an AI to give meaningful teaming feedback.
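As a rough illustration of that pipeline, the sketch below trains a simple classifier on synthetic communication features and converts its prediction into feedback. The feature names, thresholds, and data are hypothetical; the actual ASIST work uses richer models and real experimental data.

```python
# Hedged sketch (not the ASIST pipeline): learn a simple model that maps
# team-communication features to a team-performance label, then turn the
# model's signal into feedback. Features and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for experiment data: one row per team episode.
# Columns: [messages_per_minute, share_of_acknowledgments, plan_updates]
X = rng.normal(loc=[4.0, 0.5, 2.0], scale=[1.0, 0.15, 0.8], size=(200, 3))
# Toy ground truth: teams that acknowledge and re-plan more succeed more often.
y = (0.6 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.2, 200) > 0.7).astype(int)

model = LogisticRegression().fit(X, y)

def teaming_feedback(features):
    """Map the model's predicted success probability to a feedback message."""
    p_success = model.predict_proba([features])[0, 1]
    if p_success < 0.4:
        return f"Predicted success {p_success:.2f}: acknowledge teammates' messages more often."
    return f"Predicted success {p_success:.2f}: keep up the current communication pattern."

print(teaming_feedback([3.5, 0.3, 1.5]))
```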
As part of the IARPA ReSCIND (Reimagining Security with Cyberpsychology-Informed Network Defenses) program, we are building statistical models of hackers’ cognitive biases that can inform a PsychSim-based approach to cyber defense. In essence, giving PsychSim agents descriptive models of attackers and prescriptive models of defenses against them lets us simulate and predict normatively correct responses to different cyberattacks.
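The sketch below illustrates that descriptive/prescriptive split in miniature: a toy model of an attacker’s sunk-cost bias (descriptive) feeds a choice of where to place a decoy (prescriptive). The bias strength, time costs, and network paths are hypothetical stand-ins for the statistical models being built under ReSCIND.

```python
# Illustrative sketch only (not the ReSCIND models): a biased attacker model
# informs a simple defensive decision. All numbers are hypothetical.

def persistence_probability(invested_hours, bias_strength=0.15):
    """Descriptive model: the attacker's probability of continuing on a path
    grows with prior investment (a sunk-cost bias)."""
    return min(0.95, 0.3 + bias_strength * invested_hours)

# Hours the attacker has already spent probing each path (e.g., from telemetry).
INVESTED = {"web_server": 5.0, "vpn_gateway": 1.0}

def expected_wasted_effort(decoy_path, decoy_hours=6.0):
    """Prescriptive model: expected attacker hours burned if a decoy is placed
    on `decoy_path`, under the biased persistence model above."""
    return persistence_probability(INVESTED[decoy_path]) * decoy_hours

best = max(INVESTED, key=expected_wasted_effort)
print(best, round(expected_wasted_effort(best), 2))  # -> web_server 5.7
```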
Decision-Making and Conversational Technology
People frequently face decisions that require making inferences about withheld information. The advent of large language models coupled with conversational technologies such as Alexa, Siri, Cortana, and the Google Assistant is changing the way people make these inferences.
Experimental Findings
We demonstrate that conversational modes of information provision, relative to traditional digital media, result in more critical responses to withheld information, including:
- A reduction in evaluations of a product or service for which information is withheld.
- An increased likelihood of recalling that information was withheld.
These effects are robust across multiple conversational modes: a recorded phone conversation, an unfolding chat conversation, and a conversation script. We provide further evidence that these effects hold for conversations with the Google Assistant, a prominent conversational technology. The experimental results point to participants’ intuitions about why the information was withheld as the driver of the effect.
Collective Intelligence and Aggregative Crowdsourced Forecasting
We explore the use of aggregative crowdsourced forecasting (ACF) as a mechanism to help operationalize “collective intelligence” of human-machine teams for coordinated actions.
Defining Collective Intelligence
We adopt the definition for Collective Intelligence as: “A property of groups that emerges from synergies among data-information-knowledge, software-hardware, and individuals (those with new insights as well as recognized authorities) that enables just-in-time knowledge for better decisions than these three elements acting alone.”
Advancing Operational Capabilities
Aggregative crowdsourced forecasting (ACF) is a key recent advance toward Collective Intelligence in which predictions (e.g., “X% probability that Y will happen”) and rationales (why I believe Y will happen with that probability) are elicited independently from a diverse crowd, aggregated, and then used to inform higher-level decision-making. This research asks whether ACF, as a key way to enable Operational Collective Intelligence, could be brought to bear on operational scenarios (i.e., sequences of events with defined agents, components, and interactions) and decision-making, and whether such a capability could provide novel operational capabilities and new forms of decision advantage.
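The aggregation step can be sketched in a few lines. Log-odds averaging, shown below, is just one common pooling rule and is not necessarily the rule used in this research; forecaster weighting and the use of rationales are omitted.

```python
# Minimal sketch of the aggregation step in ACF: independent probability
# forecasts are pooled into a single estimate via log-odds averaging.
import math

def pool_forecasts(probabilities, weights=None):
    """Aggregate independent forecasts (each the probability that Y happens)
    by averaging their log-odds, then mapping back to a probability."""
    if weights is None:
        weights = [1.0] * len(probabilities)
    total = sum(weights)
    mean_logit = sum(w * math.log(p / (1 - p))
                     for p, w in zip(probabilities, weights)) / total
    return 1.0 / (1.0 + math.exp(-mean_logit))

# Three forecasters: "60% probability that Y will happen", 75%, and 40%.
print(round(pool_forecasts([0.60, 0.75, 0.40]), 3))  # -> 0.59 (approx.)
```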
Theory of Mind in Artificial Intelligence
Existing approaches to Theory of Mind (ToM) in Artificial Intelligence (AI) overemphasize prompted, or cue-based, ToM, which may limit our collective ability to develop Artificial Social Intelligence (ASI).
Spontaneous ToM vs. Prompted ToM
Drawing from research in computer science, cognitive science, and related disciplines, we contrast prompted ToM with what we call spontaneous ToM – reasoning about others’ mental states that is grounded in unintentional, possibly uncontrollable cognitive functions. We argue for a principled approach to studying and developing AI ToM and suggest that a robust, or general, ASI will respond to prompts and spontaneously engage in social reasoning.
Causal Models of ToM
This new research program investigates the causal nature of Theory of Mind (ToM) in social interactions, particularly focusing on its implications for machine intelligence systems. The project hypothesizes that ToM’s causal necessity exists primarily when analytical solutions (e.g., game theoretic models) are unavailable, while its causal sufficiency manifests in scenarios where deriving analytical solutions is computationally prohibitive. The research posits that ToM’s causal contribution requires either computational complexity that precludes analytical solutions or involuntary mentalization, as observed in human cognition. The study aims to validate these hypotheses and evaluate their potential application in optimizing social reasoning capabilities of artificial intelligence systems, where, unlike humans, the engagement of higher-order cognitive functions can be controlled.
In more general terms, we are studying when and why humans automatically think about other people’s thoughts and feelings (Theory of Mind) during social interactions. While humans cannot choose when to turn this ability on or off, artificial intelligence systems could potentially control when they use similar capabilities. This research aims to figure out exactly when this mind-reading ability is truly necessary or helpful in social situations. We think it might only be essential when there is no straightforward way to analyze a social situation, and particularly useful when a situation is too complex to figure out through pure logic. Understanding these patterns could help create more efficient AI systems that know exactly when to use their social reasoning abilities.
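A toy sketch of the underlying idea, with an invented tractability threshold and placeholder solvers: an artificial agent that can control its mentalizing falls back to theory-of-mind simulation only when no analytical solution is within reach. This is a conceptual illustration, not the project’s implementation.

```python
# Conceptual sketch of controllable mentalizing in an artificial agent.

def analytical_solution(game):
    """Return an exact best response when the game is small enough to solve
    analytically, else None. The size threshold is an arbitrary stand-in
    for computational tractability."""
    if game["num_strategies"] <= 10:
        return 0  # placeholder for an exact game-theoretic solver
    return None

def tom_simulation(game):
    """Placeholder for simulating the other agent's beliefs and desires when
    exact analysis is out of reach."""
    return game["default_action"]

def choose_action(game):
    action = analytical_solution(game)
    if action is not None:
        return action            # ToM is not causally necessary here
    return tom_simulation(game)  # ToM engaged only when analysis fails

print(choose_action({"num_strategies": 3, "default_action": 0}))
print(choose_action({"num_strategies": 10_000, "default_action": 1}))
```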
Adaptive AI and Trust
Optimization of human-AI teams hinges on the AI’s ability to tailor its interaction to individual human teammates.
Behavioral Predictors of Compliance
A common hypothesis in adaptive AI research is that minor differences in people’s predisposition to trust can significantly impact their likelihood of complying with recommendations from the AI. Predisposition to trust is often measured with self-report inventories that are administered before interactions. We benchmark a popular measure of this kind against behavioral predictors of compliance.
Key Findings
We find that the inventory is a less effective predictor of compliance than the behavioral measures in datasets from three previous research projects. This suggests a general property: individual differences in initial behavior are more predictive than differences in self-reported trust attitudes. The result also points to the potential for easily accessible behavioral measures to give an AI more accurate models of its teammates without the use of (often costly) survey instruments.
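The benchmarking logic can be sketched on synthetic data (the real analysis uses the three projects’ datasets and their actual measures): fit one model on the survey score and one on an early behavioral signal, then compare cross-validated predictive performance.

```python
# Illustrative sketch with synthetic data, not the study's datasets or results:
# compare a pre-interaction trust survey score and an early behavioral signal
# as predictors of later compliance with AI recommendations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300

survey_trust = rng.normal(0, 1, n)       # self-reported predisposition to trust
early_compliance = rng.normal(0, 1, n)   # compliance on the first few trials
# Toy ground truth: later compliance tracks early behavior more than the survey.
later_compliance = (0.2 * survey_trust + 0.9 * early_compliance
                    + rng.normal(0, 1, n) > 0).astype(int)

for name, feature in [("survey", survey_trust), ("behavior", early_compliance)]:
    auc = cross_val_score(LogisticRegression(), feature.reshape(-1, 1),
                          later_compliance, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")
```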
Human-Machine Interaction and Understanding
In conclusion, the research conducted at the Social Simulation Lab at USC ICT highlights the critical intersections of human cognition, decision-making, and artificial intelligence. By combining theoretical frameworks with experimental data, the lab’s efforts focus on understanding and improving human-machine interactions, especially within the context of decision-making and collective intelligence.
The exploration of concepts like Theory of Mind, adaptive AI, and conversational technologies not only broadens our understanding of how machines can be better integrated into human decision-making processes but also points to practical applications that can enhance teamwork, cybersecurity, and forecasting.
Ultimately, this research paves the way for the development of more socially aware and effective AI systems that can adapt to individual human needs, building trust and fostering collaboration. As we continue to navigate this evolving landscape, the collaboration between human and machine intelligence holds the potential to redefine problem-solving in increasingly complex environments.