By Dr. Randall W. Hill, Jr. & S.C. Stuart
The AI Paradox
McKinsey recently reported that half of consumers now use AI-powered search, with the technology poised to impact $750 billion in revenue by 2028. The question for research institutions became urgent: what is your strategy for generative AI engine optimization?
At the USC Institute for Creative Technologies, a DoD University Affiliated Research Center sponsored by the US Army, this created a unique irony. A research institute conducting work in human-centered artificial intelligence (as well as modeling & simulation, learning sciences, mixed reality, and more) needed to make that very research discoverable by AI-powered search engines.
The solution: Build an AI-powered editorial system to communicate AI research to AI search algorithms.
The meta-recursion was not lost on the team.
The Challenge: Excellence Without Visibility
Research excellence alone is no longer sufficient. When groundbreaking work remains confined to specialist language and academic journals, it becomes invisible to funders, collaborators, employers, and the broader public who need to discover it.
ICT recognized a gap between the caliber of research being conducted and its reach. Researchers (as well as graduate students and interns) were producing sophisticated work in theory-of-mind modeling, humble AI systems, emotional reasoning in large language models, and multi-agent coordination.
But few had the time or training to translate that work into compelling public narratives.
In today’s competitive landscape, researchers need professional backstories they can carry forward. Projects expressed in plain language have a better chance of attracting interest and funding. Clear communication helps researchers stand out to reviewers, employers, and collaborators.
The question became operational: how could one communications professional support 18 labs across multiple projects without sacrificing quality or overstretching limited resources?
ICT’s AI Research Foundation
The answer to this communications challenge emerged directly from ICT’s research expertise. Since its founding in 1999, the institute has explored fundamental questions about human-AI interaction: cognition, collaboration, context, and above all, trust.
Today, ICT remains mission-driven, focused on supporting warfighters, analysts, instructors, and decision-makers with systems that are both intelligent and intelligible. Current research spans critical domains:
Dr. Nik Gurney’s team develops theory-of-mind modeling for agents in mission rehearsal and adversarial games, equipping AI with social reasoning capabilities. Dr. Ning Wang’s Human-Centered AI Lab builds “humble” agents that disclose prediction uncertainty, restoring transparency in high-stakes environments. Ala Tak’s research, supervised by Dr. Jonathan Gratch, maps emotional reasoning inside large language models, revealing how biases in emotional appraisal shape coaching and decision support.
Bin Han’s Unity-based simulations demonstrate that personality traits like introversion and extraversion can emerge naturally in synthetic agents through language and movement, enabling psychologically coherent roleplayers for virtual training. Dr. Gale Lucas’s lab evaluates how people disclose information and calibrate trust when interacting with AI, particularly in scenarios shaped by stigma, trauma, or authority. Dr. Volkan Ustun and Soham Hans build multi-agent systems using LLMs to plan, reason, and coordinate like human teams.
ICT’s Learning Sciences Lab, led by Dr. Benjamin Nye, has deployed PAL3, a mobile AI tutor for coding and reasoning, alongside ARC, which helps Army instructors revise curriculum with AI assistance. The Vision and Graphics Lab (Dr. Yajie Zhao) and MxR Lab (David Nelson) push boundaries in computer vision, embodied simulation, and immersive training environments.
This research foundation informed a practical question: if ICT understands how to build trustworthy, transparent, role-specific AI systems for military and educational contexts, could those same principles solve the institute’s communications challenge?
The 5-AI Framework: Reimagining the Newsroom
The solution emerged from an unexpected source: traditional newspaper editorial workflow, reimagined for the age of large language models.
ICT developed what became known as the 5-AI Framework: a compact, disciplined editorial system designed to increase output without lowering standards. Rather than treating AI as a monolithic tool, the framework assigns specific editorial roles to different systems, creating checks and balances similar to a traditional newsroom hierarchy.
AI #1 (The Rookie Reporter): Handles rapid first drafts, headlines, and lead paragraphs. Trained through conversational interaction rather than rigid prompts, this system learns to write for engagement while avoiding common pitfalls like burying the lede.
AI #2 (The Senior Editor): Subjects drafts to editorial scrutiny, challenging structure, accuracy, and clarity. Running two AI systems in productive tension creates an internal quality check, similar to adversarial networks in machine learning.
AI #3 (The Research Desk): Builds searchable knowledge bases from multiple publications, interviews, and research materials. Particularly valuable when researchers have extensive publication records requiring synthesis.
AI #4 (The Visual Studio): Generates concept visuals aligned with editorial content. Interestingly, the framework evolved to let AI #1 and #2 write the image generation prompts, as they produced better results than human-written ones.
AI #5 (The Production Suite): Handles image refinement, format optimization, and final web-ready preparation.
Process: From Raw Research to Published Essay
The workflow begins with structured conversation. Researchers receive ten targeted questions designed to distill their work into digestible sections and are encouraged to respond conversationally rather than academically: one to two lines per question, using voice-to-text where possible.
Meanwhile, supporting materials are gathered: publications, pre-press papers, conference abstracts, lab notes, code repositories, and professional profiles. This raw data gets poured into the working document, deliberately unorganized, because AI excels at structuring information.
The editorial process then unfolds across multiple browser tabs. AI #1 receives a brief biographical sketch of the researcher, explicit instructions to avoid bias, and notes on writing style preferences. It produces a first draft in seconds. After human refinement, the draft moves to AI #2 for editorial review. If needed, AI #3 queries the knowledge base for fact-checking and context. AI #4 generates visuals, and AI #5 handles final production.
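For readers who build software, the multi-tab, role-based process above can be made concrete with a short sketch. This is an illustrative reconstruction, not ICT's actual tooling: the role names, prompts, and the `call_llm` stub are all assumptions, standing in for whichever model occupies each editorial seat.

```python
# Minimal sketch of a role-based editorial pipeline, loosely modeled on
# the 5-AI Framework described above. Every name and prompt here is
# hypothetical; `call_llm` is a stub where a real model client would go.

def call_llm(role: str, prompt: str) -> str:
    """Stub for a per-role model call; swap in a real LLM client."""
    return f"[{role} output for: {prompt[:40]}...]"

def rookie_reporter(materials: str) -> str:
    # AI #1: rapid first draft with headline and lead paragraph.
    return call_llm("reporter", f"Draft an engaging article from:\n{materials}")

def senior_editor(draft: str) -> str:
    # AI #2: challenge structure, accuracy, and clarity.
    return call_llm("editor", f"Critique and tighten this draft:\n{draft}")

def research_desk(draft: str, knowledge_base: list[str]) -> str:
    # AI #3: fact-check against a searchable knowledge base.
    context = "\n".join(knowledge_base)
    return call_llm("research", f"Fact-check:\n{draft}\nAgainst:\n{context}")

def visual_studio(article: str) -> str:
    # AI #4: write the image-generation prompt (the article notes the
    # writing AIs produced better image prompts than humans did).
    return call_llm("visuals", f"Write an image prompt for:\n{article}")

def produce_essay(materials: str, knowledge_base: list[str]) -> dict:
    """Chain draft -> edit -> fact-check -> visuals.

    In the real workflow a human reviews between every step; that
    judgment is the one stage this sketch cannot automate.
    """
    draft = rookie_reporter(materials)
    edited = senior_editor(draft)
    checked = research_desk(edited, knowledge_base)
    image_prompt = visual_studio(checked)
    return {"article": checked, "image_prompt": image_prompt}
```

In practice each stage runs in its own session or tab with human approval before handoff; the chain here simply makes the checks-and-balances structure explicit.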
Human judgment remains central throughout, particularly for ethics, narrative coherence, tone, and credibility. The AI systems build the container; human expertise ensures what fills it serves the researcher’s goals.
Results: 10X Content Production and Amplified Impact
Over twelve months, ICT produced more than 135 editorial assets using the 5-AI Framework, spanning research profiles, project summaries, conference previews, and technical explainers. Content ranged from undergraduate research experiences through senior researcher publications and international conference preparations.
The quantitative results were significant. LinkedIn engagement rose 51 percent year-over-year. Academics began arriving at major conferences (NeurIPS, SIGGRAPH, IEEE, ICML, ACL, EMAS, IVA) with articles already published and amplified through institutional channels. Media training sessions now build on published work, helping researchers practice pitching to funders, employers, and journalists.
Production efficiency improved dramatically. The framework reduced editorial cycles by approximately 90 percent compared to traditional manual processes: reading, abstracting, drafting, fact-checking, editing, and finalizing. This enabled single-person output at multi-person scale while maintaining editorial standards. Peak monthly output reached 25 editorial assets, a 10X increase that would typically require significantly larger teams.
Perhaps most importantly, the systematic content production created new pathways for media engagement. The consistent flow of accessible, well-crafted stories about ICT’s research made it substantially easier for journalists and editors to discover, understand, and cover the institute’s work. This translated directly into elevated media presence: cover stories in Army AL&T and National Defense magazines, plus coverage in Deadline, Wired, The Wall Street Journal, The Atlantic, and other major outlets.
The qualitative impact proved equally valuable. Researchers leave ICT with deeper expertise, public-facing articles, and communication confidence. The framework creates a pipeline from lab to industry and academia while strengthening recruitment, alumni relations, and institutional reputation.
The Discipline Behind the Speed
The framework’s effectiveness stems less from raw speed than from systematic discipline. Training AI systems for specific editorial roles enforces tighter briefs, cleaner structure, and more economical writing. Each system’s constraints sharpen the overall process.
This approach proved adaptable across content types: technical research summaries, personal career narratives, conference previews, and collaborative project updates. The framework scales from a few hundred words to several thousand without requiring process redesign.
Ethics and Authenticity
The framework raises legitimate questions about authenticity and labor displacement. ICT’s position is straightforward: these AI systems generate first drafts based on researchers’ own materials (their responses, publications, lab notes, and documented work). Researchers retain full editorial control, refining and approving every element before publication.
The AI systems serve as structured translation tools, helping researchers communicate complex findings to broader audiences. This differs fundamentally from content fabrication or misrepresentation. Published essays become foundations for media training, ensuring researchers can discuss their work confidently and accurately.
Regarding labor concerns, the landscape remains complex. Entry-level creative positions face genuine disruption. Simultaneously, new distribution models and creation tools enable entrepreneurial approaches previously unavailable to individual creators. The framework exists within this tension, attempting to enhance human capacity rather than replace human judgment.
Implications and Adaptability
The 5-AI Framework demonstrates that systematic AI integration can deliver significant operational value in research communications. The approach could adapt across academic institutions, research labs, and organizations facing similar challenges: excellent work requiring translation for broader audiences, limited communications resources, and increasing pressure for visibility in AI-dominated search environments.
Key principles enable adaptation: assign specific roles rather than treating AI as monolithic, create productive tension between systems, maintain human oversight at critical decision points, and train systems through iteration rather than expecting immediate results.
Through this approach, every researcher (whether intern, graduate student, or doctoral candidate) gains the opportunity to develop both expertise and communication skills. By embedding storytelling into research training, institutions cultivate scientists who can articulate their work clearly, engage broader audiences, and build professional narratives supporting long-term career success.
The 5-AI Framework represents a practical response to a practical problem: helping researchers become visible in an age of AI-powered discovery. It augments human capacity, allowing small communications teams to achieve results once requiring extensive editorial departments.
The question facing research institutions is not whether AI will change how they communicate; it already has. The question is whether they will use it thoughtfully, systematically, and in service of the mission. If AI is now the world's new front door, research must be written to meet it: clearly, consistently, and with purpose. ICT's 5-AI Framework ensures that research communications are optimized for AI discoverability, so the work reaches the humans who need it.
Dr. Randall W. Hill, Jr. is Executive Director, USC Institute for Creative Technologies. S.C. Stuart is Head of Communications at ICT and Contributing Writer for Ziff Davis (PCMag).
