July 25, 2013

ICT graphics innovations in display and rendering technologies will be featured throughout the upcoming ACM SIGGRAPH conference July 21-25 in Anaheim, Calif. SIGGRAPH is the premier international forum for disseminating new scholarly work and emerging trends and techniques in computer graphics and interactive techniques.

Technical Paper

Acquiring Reflectance and Shape from Continuous Spherical Harmonic Illumination
Authors: Borom Tunwattanapong, Graham Fyffe, Paul Graham, Jay Busch, Xueming Yu, Abhijeet Ghosh, Paul Debevec
Abstract: A new reflectance measurement and 3D scanning technique uses a rotating arc of light and long-exposure photography to illuminate an object with continuous spherical harmonic illumination and estimate reflectance maps from multiple viewpoints. The responses yield estimates of diffuse and specular reflectance parameters and 3D geometry.
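As a point of reference (an illustrative sketch, not the authors' code), spherical harmonic illumination patterns are built from the real spherical harmonic basis; the first two bands are shown below. A lighting environment is expressed in this basis, and an object's photographed response to each basis pattern constrains its reflectance.

```python
# Illustrative sketch: the first four real spherical harmonic (SH)
# basis functions (bands 0 and 1).  The function name sh_basis is
# hypothetical and not taken from the paper.
def sh_basis(d):
    """Evaluate real SH bands 0-1 at a unit direction d = (x, y, z)."""
    x, y, z = d
    return [
        0.282095,      # Y_0^0  (constant term)
        0.488603 * y,  # Y_1^-1
        0.488603 * z,  # Y_1^0
        0.488603 * x,  # Y_1^1
    ]

print(sh_basis((0.0, 0.0, 1.0)))  # direction along the z-axis
```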

ICT Graphics Lab’s Technical Paper page

Real-Time Live Demo

Digital Ira: High-Resolution Facial Performance Playback
Real-time facial animation from high-resolution scans, driven by video performance capture and rendered in a reproducible, game-ready pipeline. This collaborative work incorporates expression blending for the face, extensions to photoreal eye and skin rendering, and real-time ambient shadows.
NVIDIA: Curtis Beeson, Steve Burke, Mark Daly
Activision, Inc.: Javier von der Pahlen, Etienne Danvoye, Bernardo Antoniazzi, Michael Eheler, Zbynek Kysela, Jorge Jimenez
USC Institute for Creative Technologies: Oleg Alexander, Jay Busch, Paul Graham, Borom Tunwattanapong, Andrew Jones, Koki Nagano, Ryosuke Ichikari, Paul Debevec, Graham Fyffe

ICT Graphics Lab’s Digital Ira page

Related Work
Talk: Driving High-Resolution Facial Blendshapes with Video Performance Capture, Graham Fyffe
Poster: Digital Ira: Creating a Real-Time Photoreal Digital Actor (scroll to #36)
Poster: Vuvuzela: A Facial Scan Correspondence Tool (scroll to #113)

Emerging Technologies Demo

An Autostereoscopic Projector Array Optimized for 3D Facial Display
USC Institute for Creative Technologies: Koki Nagano, Andrew Jones, Jay Busch, Paul Debevec, Mark Bolas, Xueming Yu
University of California at Santa Cruz: Jing Liu
Abstract: Video projectors are rapidly shrinking in size, power consumption, and cost. Such projectors provide unprecedented flexibility to stack, arrange, and aim pixels without the need for moving parts. This dense projector display is optimized in size and resolution to display an autostereoscopic life-sized 3D human face. It utilizes 72 Texas Instruments PICO projectors to illuminate a 30 cm x 30 cm anisotropic screen with a wide 110-degree field of view. The demonstration includes both live scanning of subjects and virtual animated characters.
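A back-of-envelope check (my assumption, not stated in the abstract): if the 72 projectors are spread evenly across the 110-degree field of view, adjacent views are separated by roughly a degree and a half, which is what makes the display autostereoscopic for a nearby viewer.

```python
# Hypothetical calculation: angular spacing between adjacent views,
# assuming the projectors are evenly distributed over the field of view.
NUM_PROJECTORS = 72
FIELD_OF_VIEW_DEG = 110.0
view_spacing = FIELD_OF_VIEW_DEG / (NUM_PROJECTORS - 1)
print(f"{view_spacing:.2f} degrees between adjacent views")
```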

ACM Symposium on Spatial User Interaction (SUI)

Just prior to SIGGRAPH, ICT will host the first ACM Symposium on Spatial User Interaction (SUI), chaired by ICT’s Evan Suma.

ACM SIGGRAPH/Eurographics Symposium on Computer Animation

Virtual Character Performance from Speech
Authors: Stacy Marsella, Margaux Lhommet, Yuyu Xu, Andrew Feng, Stefan Scherer, Ari Shapiro
Abstract: We demonstrate a method for generating a 3D virtual character performance from the audio signal by inferring the acoustic and semantic properties of the utterance. Through a prosodic analysis of the acoustic signal, we perform an analysis for stress and pitch, relate it to the spoken words and identify the agitation state. Our rule-based system performs a shallow analysis of the utterance text to determine its semantic, pragmatic and rhetorical content. Based on these analyses, the system generates facial expressions and behaviors including head movements, eye saccades, gestures, blinks and gazes. Our technique is able to synthesize the performance and generate novel gesture animations based on coarticulation with other closely scheduled animations. Because our method utilizes semantics in addition to prosody, we are able to generate virtual character performances that are more appropriate than methods that use only prosody. We perform a study that shows that our technique outperforms methods that use prosody alone.
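One ingredient of the prosodic analysis described above is estimating pitch from the acoustic signal. The sketch below is a generic autocorrelation-based pitch estimator, offered as an assumption-laden illustration; the function and parameter names are hypothetical and not taken from the paper.

```python
import numpy as np

# Illustrative sketch of pitch (F0) estimation via autocorrelation:
# the lag with the strongest self-similarity corresponds to one period
# of the speaker's fundamental frequency.
def estimate_pitch(signal, sr, fmin=75.0, fmax=400.0):
    """Return an F0 estimate in Hz for a voiced audio frame."""
    sig = signal - signal.mean()                 # remove DC offset
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)      # plausible lag range for speech
    lag = lo + int(np.argmax(corr[lo:hi]))       # strongest periodicity
    return sr / lag

# A 220 Hz test tone should be recovered to within a few Hz.
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
tone = np.sin(2 * np.pi * 220.0 * t)
print(round(estimate_pitch(tone, sr), 1))
```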

Digital Production Symposium

Towards Higher Quality Character Performance in Previz
Authors: Stacy Marsella, Ari Shapiro, Andrew Feng, Yuyu Xu, Margaux Lhommet, Stefan Scherer
Abstract: We seek to raise the quality standard for automatic, minimal-cost 3D content in previz through the automated generation of a believable 3D character performance from only an audio track and its transcription.