BYLINE: Andrew Leeds, Technical Support Lead, USC Institute for Creative Technologies
Last month at the 38th International FLAIRS Conference (FLAIRS-38), I had the opportunity to present our study, “Usability and Preferences for a Personalized Adaptive Learning System for AI Upskilling,” on behalf of my co-authors: Mark G. Core, Benjamin Nye, Kayla Carr, Shirley Li, Aaron Shiel, Daniel Auerbach, and William Swartout. Representing our team at USC’s Institute for Creative Technologies, I shared our ongoing work on Game-if-AI, a personalized adaptive learning system designed to help learners upskill in artificial intelligence by using AI itself. We’re passionate about the idea that learning AI shouldn’t be confined to computer science majors or technical elites: with the right tools, anyone can, and should, learn to understand and work with AI.
Why Upskilling in AI Matters Now
From predictive analytics in logistics to LLM-based assistants in customer service, AI tools are becoming essential across industries. Employers, including the U.S. Department of Defense, are prioritizing upskilling (enhancing current skills) and reskilling (training for new roles), especially among technical and non-CS professionals.
In both the public and private sectors, organizations are asking: How do we train people to use AI effectively, ethically, and with confidence?
This is especially challenging for learners who aren’t traditional software engineers but still need to understand how to configure, interpret, and improve AI systems. We call them “technician-level” learners: people who need AI literacy and practical skills, but not necessarily deep algorithmic knowledge.
The Game-if-AI System: Built on PAL3
To meet this need, we adapted our existing PAL3 platform (Personal Assistant for Lifelong Learning) into something tailored for AI: Game-if-AI. PAL3 was originally used for military technician training and resilience education. Game-if-AI reimagines that same adaptive infrastructure for AI concepts and coding tasks.
It includes:
- A pedagogical agent (“Pal”) that gives personalized topic and activity recommendations (see the sketch after this list).
- A personal roadmap to help learners track mastery across topics.
- Dialog-based tutoring using OpenTutor for conceptual understanding.
- GAI notebooks, gamified programming environments modeled after Jupyter, which support hands-on practice.
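To make the adaptive piece concrete, here is a minimal, hypothetical sketch of mastery-driven recommendation in Python. The `Topic` class, the `recommend_next` function, and the 0.6 mastery threshold are all illustrative assumptions for this post, not the actual PAL3 or Game-if-AI implementation: the idea is simply to surface the weakest topic whose prerequisites are already adequately mastered.

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    """Illustrative stand-in for a roadmap entry; not the real PAL3 data model."""
    name: str
    mastery: float                          # estimated mastery in [0, 1]
    prerequisites: list[str] = field(default_factory=list)

def recommend_next(topics: list[Topic], threshold: float = 0.6) -> Topic | None:
    """Recommend the weakest topic whose prerequisites are adequately mastered."""
    mastery_by_name = {t.name: t.mastery for t in topics}
    unlocked = [
        t for t in topics
        if t.mastery < threshold
        and all(mastery_by_name.get(p, 0.0) >= threshold for p in t.prerequisites)
    ]
    # The lowest-mastery unlocked topic is the most useful next step.
    return min(unlocked, key=lambda t: t.mastery, default=None)

roadmap = [
    Topic("Python Basics", mastery=0.9),
    Topic("Neural Networks", mastery=0.3, prerequisites=["Python Basics"]),
    Topic("LLM Prompting", mastery=0.5),
]
print(recommend_next(roadmap).name)         # -> "Neural Networks"
```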
What We Studied
We evaluated Game-if-AI over four semesters in a new undergraduate AI minor. Participation was optional but incentivized through small amounts of extra credit. Our key research questions were:
- Topic Engagement – Which topics do students choose, and why?
- User Acceptance – How do students rate usability and perceived learning value?
- Design Preferences – What features support (or hinder) learning?
What We Learned: Engagement & Usability
Topic Preferences
Students gravitated toward:
- Topics aligned with class material (e.g., Neural Networks).
- Easier, high-interest topics (like LLM Prompting, which students pursued even though it wasn’t graded).
- Modules with games or interactive evaluation screens.
This suggests students are strategic: they engage more with content that’s relevant, digestible, and enjoyable.
Usability Ratings
Across semesters, overall reception was strong:
- 93% said using an adaptive system for AI learning was a good idea.
- Traditional content (readings and quizzes) received the highest ease-of-use and usefulness ratings.
- More complex activities (tutoring dialogs and programming) were rated slightly lower but still positive.
Notably, students reported increased expectations for AI assistants to behave more like LLMs (e.g., “Why can’t Pal help fix my error?”). This is a big shift from older tutoring systems, where learners rarely used help buttons. Today, they want chat-style explanations and just-in-time support.
Programming with AI: GAI Notebooks
One of the most innovative parts of Game-if-AI is our GAI notebooks. These web-based programming tasks guide students to:
- Modify Python code within structured cells.
- Receive interactive feedback and hints.
- Play mini-games that visualize AI model decisions (like categorizing images or reviews).
We carefully designed these activities to simulate real-world AI development, not just “fill-in-the-blank” coding. Yet, we found a clear tension: students appreciated the feedback and visualizations, but still found the interfaces demanding. This reflects the cognitive load of programming, especially for non-CS learners.
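To give a feel for this format, here is a minimal, hypothetical sketch of a structured cell with automated feedback. The `student_cell` and `check` functions, the word-list task, and the test reviews are all illustrative assumptions, not the actual GAI notebook engine; the point is that the checker returns a targeted hint rather than a bare pass/fail.

```python
# Hypothetical sketch of a structured notebook cell: the student edits only
# the marked region; the checker returns hints instead of a bare pass/fail.

def student_cell(review: str) -> str:
    """Classify a review as 'positive' or 'negative'."""
    # --- student-editable region ---
    positive_words = {"great", "good"}      # students extend this word list
    score = sum(word in positive_words for word in review.lower().split())
    return "positive" if score > 0 else "negative"
    # --- end editable region ---

def check(cell) -> str:
    """Run the student's cell on held-out examples and return a hint."""
    tests = [
        ("great movie, good acting", "positive"),
        ("terrible plot and awful pacing", "negative"),
        ("what an excellent film", "positive"),
    ]
    for text, expected in tests:
        if cell(text) != expected:
            return (f"Hint: '{text}' was misclassified. "
                    "Does your word list cover this vocabulary?")
    return "All tests passed, nice work!"

print(check(student_cell))
```

The real notebooks build their gamified visualizations on top of this kind of feedback loop.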
Designing for the Modern AI Learner
Over time, we’ve observed how student expectations are shaped by the wider tech ecosystem:
- They expect LLM-style interactions, even in educational tools.
- They value personalization and pacing, especially in optional learning environments.
- They want AI systems to explain why a recommendation or correction was made.
All of this challenges how we build tutoring systems in the LLM era. The bar is higher—but so is the opportunity.
Where We’re Going Next
What excites me most is how this research opens up new possibilities for AI education at scale:
- Expanding beyond university classrooms to corporate training, K-12, and public sector upskilling.
- Exploring how much AI assistance is too much—balancing guidance with exploration.
- Making passive resources interactive using generative AI, helping learners move beyond the illusion of understanding.
The key question remains: How do we teach people not just to use AI, but to think critically about its outputs, limitations, and societal impact?
Final Thoughts
Presenting our research at FLAIRS-38 was an enriching experience that yielded valuable feedback from peers and experts in the field. The real potential of AI in education isn’t just in automating content; it’s in transforming how we learn. By making AI a co-pilot in the learning process, we empower a broader range of learners to confidently navigate an AI-rich world.
For a more detailed understanding of our study, you can access the full paper here.