At 8:15 a.m. on Wednesday, January 17th, Dr. Wolfgang Hürst, Co-Chair of the 6th IEEE AIxVR, looked up from the podium in the USC Institute for Creative Technologies’ theater and said: “It’s good to be back, in-person again!”
This was the first time since the pandemic that IEEE AIxVR could gather its international attendees in one physical location, in the heart of USC’s Silicon Beach Campus. For those who couldn’t make it to Southern California, the event was live-streamed on a closed network, and Local Chair Dr. Yajie Zhao tasked her Vision & Graphics Lab researchers with building out a full virtual experience, in keeping with the three-day event’s focus on emerging technologies.
The opening day’s keynote, How We Built the Holodeck (minus the parts that break the laws of physics), was delivered by ICT’s Executive Director Dr. Randall Hill, who told conference attendees: “We haven’t figured out quantum teleportation, and I’m not sure we ever will, at least not safely…but, in the past almost 25 years, through both basic and applied research, we have compiled almost all of the components of a holodeck, including photo-real computer graphics and virtual environments with intelligent agents, Virtual Humans, natural language generation, speech synthesis, modeling emotion and multi-modal sensing, drawing on learning sciences to reflect how people attain and retain knowledge.”
During the conference, attendees got to see some of those holodeck elements from ICT on display, including demos of the latest 3D Geospatial Terrain research; conversational testimony using natural language technologies in New Dimensions in Testimony (NDT); the Digital Interactive Victim Intake Simulator (DIVIS) in the Mixed Reality (MxR) Lab; and the Vision & Graphics Lab’s (VGL) Academy Award-winning Light Stage.
Throughout this year’s IEEE AIxVR, every large meeting space across the institute was taken up with researchers presenting posters and academic papers, taking notes at the various talks, gathering to share their latest insights, grabbing lunch, coffee (lots of coffee) and snacks, splitting into smaller meet-ups, then reconvening in the theater for the panel discussions.
On day two, everyone returned early to catch the keynote from Caltech’s Dr. Georgia Gkioxari, focusing on the Evolution of Machine Vision through AI – namely, teaching machines to see.
“Computational models get to observe this world from imagery, but only partially,” explained Dr. Gkioxari, “as visual data does not completely capture the richness of the world we live in.” She went on to explain that the goal of her research is to design visual perception models that bridge the gap between 2D imagery and our 4D world.
Dr. Gkioxari, who was a research scientist at Meta AI before taking up her current post at Caltech, also highlighted the key technological milestones that have driven progress in visual perception, particularly the ability to associate images with objects and scenes.
That evening, the conference moved to the nearby Hilton hotel for a banquet and the Best Paper Awards, aka the “IEEE AIxVR 2024 Hall of Fame.” The Best Paper Award went to “Using Motion Forecasting for Behavior-Based Virtual Reality (VR) Authentication” by Mingjun Li, Natasha Kholgade Banerjee and Sean Banerjee. The Best Poster Award was given to “Minimal Latency Speech-Driven Gesture Generation for Continuous Interaction in Social XR” by Niklas Krome and Stefan Kopp. The Best Demo Award was won by “VR-Hand-in-Hand: Using Virtual Reality (VR) Hand Tracking For Hand-Object Data Annotation” by Matt Duver, Noah Wiederhold, Nikolas Lamb, Maria Kyrarini, Sean Banerjee and Natasha Kholgade Banerjee.
On the final day, Google Research’s Dr. Chloe LeGendre gave the keynote, Remain in Light: Realistic Augmented Imagery in the AI Era, drawing on her current work in computational photography and high dynamic range imaging.
Dr. LeGendre, who did her graduate research at ICT’s Vision & Graphics Lab, described advances toward crafting AR imagery that seamlessly blends the real and the virtual, with a focus on matching scene lighting. With a nod to recent developments in image generation models, she also shared her perspective on generative models as applied to AR experiences.

As the event closed, ICT handed the IEEE AIxVR hosting baton to the Instituto Superior Técnico in Lisbon, Portugal, which will preside over next year’s event in January 2025. Like all technology conferences, it will offer remote participation. But, just as with this year, everyone is looking forward to gathering in corporeal form again, as there’s nothing quite like the in-person experience, especially when your research is focused on the unreal, the virtual and the augmented versions of reality.