Publications
Marti, Deniz; Hanrahan, David; Sanchez-Triana, Ernesto; Wells, Mona; Corra, Lilian; Hu, Howard; Breysse, Patrick N.; Laborde, Amalia; Caravanos, Jack; Bertollini, Roberto; Porterfield, Kate; Fuller, Richard
Structured Expert Judgement Approach of the Health Impact of Various Chemicals and Classes of Chemicals Technical Report
Public and Global Health 2024.
@techreport{marti_structured_2024,
title = {Structured Expert Judgement Approach of the Health Impact of Various Chemicals and Classes of Chemicals},
author = {Deniz Marti and David Hanrahan and Ernesto Sanchez-Triana and Mona Wells and Lilian Corra and Howard Hu and Patrick N. Breysse and Amalia Laborde and Jack Caravanos and Roberto Bertollini and Kate Porterfield and Richard Fuller},
url = {http://medrxiv.org/lookup/doi/10.1101/2024.01.30.24301863},
doi = {10.1101/2024.01.30.24301863},
year = {2024},
date = {2024-02-01},
urldate = {2024-02-21},
institution = {Public and Global Health},
abstract = {Introduction
Chemical contamination and pollution are an ongoing threat to human health and the environment. The concern over the consequences of chemical exposures at the global level continues to grow. Because resources are constrained, there is a need to prioritize interventions focused on the greatest health impact. Data, especially related to chemical exposures, are rarely available for most substances of concern, and alternate methods to evaluate their impact are needed.
Structured Expert Judgment (SEJ) Process
A Structured Expert Judgment process was performed to provide plausible estimates of health impacts for 16 commonly found pollutants: asbestos, arsenic, benzene, chromium, cadmium, dioxins, fluoride, highly hazardous pesticides (HHPs), lead, mercury, polycyclic-aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), Per- and Polyfluorinated Substances (PFAs), phthalates, endocrine disrupting chemicals (EDCs), and brominated flame retardants (BRFs). This process, undertaken by sector experts, weighed individual estimations of the probable global-scale health impacts of each pollutant using objective estimates of the expert opinions’ statistical accuracy and informativeness.
Main Findings
The foremost substances, in terms of mean projected annual total deaths, were lead, asbestos, arsenic, and HHPs. Lead surpasses the others by a large margin, with an estimated median value of 1.7 million deaths annually. The three other substances averaged between 136,000 and 274,000 deaths per year. Of the 12 other chemicals evaluated, none reached an estimated annual death count exceeding 100,000. These findings underscore the importance of prioritizing available resources on reducing and remediating the impacts of these key pollutants.
Range of Health Impacts
Based on the evidence available, experts concluded some of the more notorious chemical pollutants, such as PCBs and dioxin, do not result in high levels of human health impact from a global scale perspective. However, the chemical toxicity of some compounds released in recent decades, such as Endocrine Disrupters and PFAs, cannot be ignored, even if current impacts are limited. Moreover, the impact of some chemicals may be disproportionately large in some geographic areas. Continued research and monitoring are essential, and a preventative approach is needed for chemicals.
Future Directions
These results, and potential similar analyses of other chemicals, are provided as inputs to ongoing discussions about priority setting for global chemicals and pollution management. Furthermore, we suggest that this SEJ process be repeated periodically as new information becomes available.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Prinzing, Michael; Garton, Catherine; Berman, Catherine J.; Zhou, Jieni; West, Taylor Nicole; Gratch, Jonathan; Fredrickson, Barbara
Can AI Agents Help Humans to Connect? Technical Report
PsyArXiv 2023.
@techreport{prinzing_can_2023,
title = {Can AI Agents Help Humans to Connect?},
author = {Michael Prinzing and Catherine Garton and Catherine J. Berman and Jieni Zhou and Taylor Nicole West and Jonathan Gratch and Barbara Fredrickson},
url = {https://osf.io/muq6s},
doi = {10.31234/osf.io/muq6s},
year = {2023},
date = {2023-10-01},
urldate = {2023-12-07},
institution = {PsyArXiv},
abstract = {This paper reports on a pre-registered experiment designed to test whether artificial agents can help people to create more moments of high-quality connection with other humans. Of four pre-registered hypotheses, we found (partial) support for only one.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Saxon, Leslie; Boberg, Jill; Faulk, Robert; Barrett, Trevor
Identifying relationships between compression garments and recovery in a military training environment Technical Report
In Review 2023.
@techreport{saxon_identifying_2023,
title = {Identifying relationships between compression garments and recovery in a military training environment},
author = {Leslie Saxon and Jill Boberg and Robert Faulk and Trevor Barrett},
url = {https://www.researchsquare.com/article/rs-3193173/v1},
doi = {10.21203/rs.3.rs-3193173/v1},
year = {2023},
date = {2023-07-01},
urldate = {2023-09-21},
institution = {In Review},
abstract = {Development and maintenance of physical capabilities is an essential part of combat readiness in the military. This readiness requires continuous training and is therefore compromised by injury. Because Service Members (SMs) must be physically and cognitively prepared to conduct multifaceted operations in support of strategic objectives, and because the Department of Defense’s (DoD) non-deployable rate and annual costs associated with treating SMs continue to rise at an alarming rate, finding a far-reaching and efficient solution to prevent such injuries is a high priority. Compression garments (CGs) have become increasingly popular over the past decade in human performance applications, and reportedly facilitate post-exercise recovery by reducing muscle soreness, increasing blood lactate removal, and increasing perception of recovery, but the evidence is mixed, at best. In the current study we explored whether CG use, and duration of use, improves recovery and mitigates muscle soreness effectively in an elite Marine training course. To test this, we subjected Service Members to fatiguing exercise and then measured subjective and objective recovery and soreness using participant reports and grip and leg strength over a 72-hour recovery period. Findings from this study suggest that wearing CGs for post-training recovery showed significant and moderate positive effects on subjective soreness, fatigue, and perceived level of recovery. We did not find statistically significant effects on physical performance while testing grip or leg strength. These findings suggest that CGs may be a beneficial strategy for military training environments to accelerate muscle recovery after high-intensity exercise, without adverse effects to the wearer or negative impact on military training.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Nye, Benjamin D.; Core, Mark G.; Jaiswal, Shikhar; Ghosal, Aviroop; Auerbach, Daniel
Acting Engaged: Leveraging Play Persona Archetypes for Semi-Supervised Classification of Engagement Technical Report
International Educational Data Mining Society 2021, (Publication Title: International Educational Data Mining Society ERIC Number: ED615498).
@techreport{nye_acting_2021,
title = {Acting Engaged: Leveraging Play Persona Archetypes for Semi-Supervised Classification of Engagement},
author = {Benjamin D. Nye and Mark G. Core and Shikhar Jaiswal and Aviroop Ghosal and Daniel Auerbach},
url = {https://eric.ed.gov/?id=ED615498},
year = {2021},
date = {2021-01-01},
urldate = {2023-03-31},
institution = {International Educational Data Mining Society},
abstract = {Engaged and disengaged behaviors have been studied across a variety of educational contexts. However, tools to analyze engagement typically require custom-coding and calibration for a system. This limits engagement detection to systems where experts are available to study patterns and build detectors. This work studies a new approach to classify engagement patterns without expert input, by using a play persona methodology where labeled archetype data is generated by novice testers acting out different engagement patterns in a system. Domain-agnostic task features (e.g., response time to an activity, scores/correctness, task difficulty) are extracted from standardized data logs for both archetype and authentic user sessions. A semi-supervised methodology was used to label engagement; bottom-up clusters were combined with archetype data to build a classifier. This approach was analyzed with a focus on cold-start performance on small samples, using two metrics: consistency with larger full-sample cluster assignments and stability of points staying in the same cluster once assigned. These were compared against a baseline of clustering without an incrementally trained classifier. Findings on a data set from a branching multiple-choice scenario-based tutoring system indicated that approximately 52 unlabeled samples and 51 play-test labeled samples were sufficient to classify holdout sessions at 85% consistency with a full set of 145 unsupervised samples. Additionally, alignment to play persona samples for the full set matched expert labels for clusters. Use-cases and limitations of this approach are discussed. [For the full proceedings, see ED615472.]},
note = {Publication Title: International Educational Data Mining Society
ERIC Number: ED615498},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Stocco, Andrea; Steine-Hanson, Zoe; Koh, Natalie; Laird, John E.; Lebiere, Christian J.; Rosenbloom, Paul
Analysis of the Human Connectome Data Supports the Notion of A “Common Model of Cognition” for Human and Human-Like Intelligence Technical Report
Neuroscience 2019.
@techreport{stocco_analysis_2019,
title = {Analysis of the Human Connectome Data Supports the Notion of A “Common Model of Cognition” for Human and Human-Like Intelligence},
author = {Andrea Stocco and Zoe Steine-Hanson and Natalie Koh and John E. Laird and Christian J. Lebiere and Paul Rosenbloom},
url = {http://biorxiv.org/lookup/doi/10.1101/703777},
doi = {10.1101/703777},
year = {2019},
date = {2019-07-01},
pages = {38},
institution = {Neuroscience},
abstract = {The Common Model of Cognition (CMC) is a recently proposed, consensus architecture intended to capture decades of progress in cognitive science on modeling human and human-like intelligence. Because of the broad agreement around it and preliminary mappings of its components to specific brain areas, we hypothesized that the CMC could be a candidate model of the large-scale functional architecture of the human brain. To test this hypothesis, we analyzed functional MRI data from 200 participants and seven different tasks that cover the broad range of cognitive domains. The CMC components were identified with functionally homologous brain regions through canonical fMRI analysis, and their communication pathways were translated into predicted patterns of effective connectivity between regions. The resulting dynamic linear model was implemented and fitted using Dynamic Causal Modeling, and compared against four alternative brain architectures that had been previously proposed in the field of neuroscience (two hierarchical architectures and two hub-and-spoke architectures) using a Bayesian approach. The results show that, in all cases, the CMC vastly outperforms all other architectures, both within each domain and across all tasks. The results suggest that a common, general architecture that could be used for artificial intelligence effectively underpins all aspects of human cognition, from the overall functional architecture of the human brain to higher level thought processes.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
The Context of Military Environments: Social and Organizational Factors Technical Report
National Academies Press Washington, DC, 2014.
@techreport{noauthor_context_2014,
title = {The Context of Military Environments: Social and Organizational Factors},
url = {http://sites.nationalacademies.org/DBASSE/BBCSS/CurrentProjects/DBASSE_080746},
year = {2014},
date = {2014-09-01},
address = {Washington, DC},
institution = {National Academies Press},
abstract = {The U.S. Army faces a variety of challenges to maintain a ready and capable force into the future. Its missions are diverse, following a continuum from peace to war that includes combat and counterinsurgency operations as well as negotiation, reconstruction, and stability operations that require a variety of personnel and skill sets to execute. Missions often demand rapid decision making and coordination with others in novel ways, so that personnel are not simply following a specific set of tactical orders but, rather, carrying out mission command through an understanding of broader strategic goals in order to develop and choose among courses of action. Like any workforce, the Army is diverse in terms of demographic characteristics, such as gender and race, with a commitment of its leadership to ensure equal opportunities across all demographic parties. With these challenges comes the urgent need to better understand how contextual factors influence soldier and small unit behavior and mission performance.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Gahm, Gregory; Reger, Greg; Ingram, Mary V.; Reger, Mark; Rizzo, Albert
A Multisite, Randomized Clinical Trial of Virtual Reality and Prolonged Exposure Therapy for Active Duty Soldiers with PTSD Technical Report
no. A611975, 2012.
@techreport{gahm_multisite_2012,
title = {A Multisite, Randomized Clinical Trial of Virtual Reality and Prolonged Exposure Therapy for Active Duty Soldiers with PTSD},
author = {Gregory Gahm and Greg Reger and Mary V. Ingram and Mark Reger and Albert Rizzo},
url = {http://ict.usc.edu/pubs/A%20Multisite,%20Randomized%20Clinical%20Trial%20of%20Virtual%20Reality%20and%20Prolonged%20Exposure%20Therapy%20for%20Active%20Duty%20Soldiers%20with%20PTSD.pdf},
year = {2012},
date = {2012-12-01},
number = {A611975},
abstract = {This randomized, single-blind study extends recruitment to an additional active duty site (Womack Army Medical Center at Ft Bragg) in support of a previously funded clinical trial to evaluate the efficacy of virtual reality exposure therapy (VRET) and prolonged exposure therapy (PE) against a waitlist (WL) group in the treatment of posttraumatic stress disorder (PTSD) in active duty (AD) Soldiers with combat-related trauma. During the first year, the study team developed the infrastructure to implement the trial, including personnel recruitment, hiring, and initial training; process development to identify, screen, and enroll participants; and research protocol development and approval by IRBs. During the second year, hiring of clinical staff and training of the study team was completed. Recruitment and enrollment commenced.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Graham, Paul; Tunwattanapong, Borom; Busch, Jay; Yu, Xueming; Jones, Andrew; Debevec, Paul; Ghosh, Abhijeet
Measurement-based Synthesis of Facial Microgeometry Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 01 2012, 2012.
@techreport{graham_measurement-based_2012,
title = {Measurement-based Synthesis of Facial Microgeometry},
author = {Paul Graham and Borom Tunwattanapong and Jay Busch and Xueming Yu and Andrew Jones and Paul Debevec and Abhijeet Ghosh},
url = {http://ict.usc.edu/pubs/ICT-TR-01-2012.pdf},
year = {2012},
date = {2012-11-01},
number = {ICT TR 01 2012},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We present a technique for generating microstructure-level facial geometry by augmenting a mesostructure-level facial scan with detail synthesized from a set of exemplar skin patches scanned at much higher resolution. We use constrained texture synthesis based on image analogies to increase the resolution of the facial scan in a way that is consistent with the scanned mesostructure. We digitize the exemplar patches with a polarization-based computational illumination technique which considers specular reflection and single scattering. The recorded microstructure patches can be used to synthesize full-facial microstructure detail for either the same subject or to a different subject. We show that the technique allows for greater realism in facial renderings including more accurate reproduction of skin’s specular roughness and anisotropic reflection effects.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Fyffe, Graham
High Fidelity Facial Hair Capture Technical Report
University of Southern California Institute for Creative Technologies Playa Vista, CA, no. ICT TR 02 2012, 2012.
@techreport{fyffe_high_2012,
title = {High Fidelity Facial Hair Capture},
author = {Graham Fyffe},
url = {https://apps.dtic.mil/sti/trecms/pdf/AD1170996.pdf},
year = {2012},
date = {2012-08-01},
booktitle = {SIGGRAPH},
number = {ICT TR 02 2012},
address = {Playa Vista, CA},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We propose an extension to multi-view face capture that reconstructs high quality facial hair automatically. Multi-view stereo is well known for producing high quality smooth surfaces and meshes, but fails on fine structure such as hair. We exploit this failure, and automatically detect the hairs on a face by careful analysis of the pixel reconstruction error of the multi-view stereo result. Central to our work is a novel stereo matching cost function, which we call equalized cross correlation, that properly accounts for both camera sensor noise and pixel sampling variance. In contrast to previous works that treat hair modeling as a synthesis problem based on image cues, we reconstruct facial hair to explain the same high-resolution input photographs used for face reconstruction, producing a result with higher fidelity to the input photographs.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Chiang, Jen-Yuan; Fyffe, Graham
Realistic Real-Time Rendering of Eyes and Teeth Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 01 2010, 2010.
@techreport{chiang_realistic_2010,
title = {Realistic Real-Time Rendering of Eyes and Teeth},
author = {Jen-Yuan Chiang and Graham Fyffe},
url = {http://ict.usc.edu/pubs/ICT%20TR%2001%202010.pdf},
year = {2010},
date = {2010-09-01},
number = {ICT TR 01 2010},
institution = {University of Southern California Institute for Creative Technologies},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Zheng, Jun; Ghosh, Abhijeet
Specular Normal Synthesis Using Stochastic Super-resolution for Detailed Facial Geometry Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 02 2010, 2010.
@techreport{zheng_specular_2010,
title = {Specular Normal Synthesis Using Stochastic Super-resolution for Detailed Facial Geometry},
author = {Jun Zheng and Abhijeet Ghosh},
url = {http://www.ict.usc.edu//pubs/ICT-TR-02-2010.pdf},
year = {2010},
date = {2010-01-01},
number = {ICT TR 02 2010},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {Detailed facial geometry is critical for the visual realism of face models in computer games, movies, and virtual reality applications. The existing face scanning methods, however, either sacrifice resolution for real-time processing or require expensive high-speed cameras. In this work we propose a new technique for real-time high-resolution facial scanning using spherical gradient illumination. The key element of the approach is the use of stochastic super-resolution to generate a specular normal map from a diffuse normal map, instead of capturing both during the scanning process. We analyze a training dataset of diffuse normal maps and specular normal maps of a particular object and learn the mapping from low-frequency components of diffuse normal maps to high-frequency components of specular normal maps of that object. This enables us to infer, for example, the most likely high-resolution specular normal map detail depicting the same person as a low-resolution diffuse normal map given as input. Experimental results show that the proposed algorithm generates high-quality specular normal maps from diffuse normal map inputs.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Yamada, Hideshi; Peers, Pieter; Debevec, Paul
Compact Representation of Reflectance Fields using Clustered Sparse Residual Factorization Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 02 2009, 2009.
@techreport{yamada_compact_2009,
title = {Compact Representation of Reflectance Fields using Clustered Sparse Residual Factorization},
author = {Hideshi Yamada and Pieter Peers and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT-TR-02-2009.pdf},
year = {2009},
date = {2009-01-01},
number = {ICT TR 02 2009},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We present a novel compression method for fixed viewpoint reflectance fields, captured for example by a Light Stage. Our compressed representation consists of a global approximation that exploits the similarities between the reflectance functions of different pixels, and a local approximation that encodes the per-pixel residual with the global approximation. Key to our method is a clustered sparse residual factorization. This sparse residual factorization ensures that the per-pixel residual matrix is as sparse as possible, enabling a compact local approximation. Finally, we demonstrate that the presented compact representation is well suited for high-quality real-time rendering.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Alexander, Oleg; Rogers, Mike; Lambeth, William; Chiang, Matt; Debevec, Paul
Creating a Photoreal Digital Actor: The Digital Emily Project Technical Report
University of Southern California Institute for Creative Technologies London, UK, no. ICT TR 04 2009, 2009.
@techreport{alexander_creating_2009-1,
title = {Creating a Photoreal Digital Actor: The Digital Emily Project},
author = {Oleg Alexander and Mike Rogers and William Lambeth and Matt Chiang and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT%20TR%2004%202009.pdf},
year = {2009},
date = {2009-01-01},
booktitle = {IEEE European Conference on Visual Media Production (CVMP)},
number = {ICT TR 04 2009},
address = {London, UK},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {The Digital Emily Project is a collaboration between facial animation company Image Metrics and the Graphics Laboratory at the University of Southern California's Institute for Creative Technologies to achieve one of the world's first photorealistic digital facial performances. The project leverages latest-generation techniques in high-resolution face scanning, character rigging, video-based facial animation, and compositing. An actress was first filmed on a studio set speaking emotive lines of dialog in high definition. The lighting on the set was captured as a high dynamic range light probe image. The actress' face was then three-dimensionally scanned in thirty-three facial expressions showing different emotions and mouth and eye movements using a high-resolution facial scanning process accurate to the level of skin pores and fine wrinkles. Lighting-independent diffuse and specular reflectance maps were also acquired as part of the scanning process. Correspondences between the 3D expression scans were formed using a semi-automatic process, allowing a blendshape facial animation rig to be constructed whose expressions closely mirrored the shapes observed in the rich set of facial scans; animated eyes and teeth were also added to the model. Skin texture detail showing dynamic wrinkling was converted into multiresolution displacement maps also driven by the blend shapes. A semi-automatic video-based facial animation system was then used to animate the 3D face rig to match the performance seen in the original video, and this performance was tracked onto the facial motion in the studio video. The final face was illuminated by the captured studio illumination and shading using the acquired reflectance maps with a skin translucency shading algorithm. Using this process, the project was able to render a synthetic facial performance which was generally accepted as being a real face.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Lamond, Bruce; Peers, Pieter; Ghosh, Abhijeet; Debevec, Paul
Image-based Separation of Diffuse and Specular Reflections using Environmental Structured Illumination, Supplemental Material Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 01 2009, 2009.
@techreport{lamond_image-based_2009,
title = {Image-based Separation of Diffuse and Specular Reflections using Environmental Structured Illumination, Supplemental Material},
author = {Bruce Lamond and Pieter Peers and Abhijeet Ghosh and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT-TR-01-2009.pdf},
year = {2009},
date = {2009-01-01},
number = {ICT TR 01 2009},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We present an image-based method for separating diffuse and specular reflections using environmental structured illumination. Two types of structured illumination are discussed: phase-shifted sine wave patterns, and phase-shifted binary stripe patterns. In both cases the low-pass filtering nature of diffuse reflections is utilized to separate the reflection components. We illustrate our method on a wide range of example scenes and applications.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
de Kok, Iwan
Internship Report on Predicting Listener Backchannels Technical Report
University of Southern California Institute for Creative Technologies no. ICT-TR-02-2008, 2008.
@techreport{de_kok_internship_2008,
title = {Internship Report on Predicting Listener Backchannels},
author = {Iwan de Kok},
url = {http://ict.usc.edu/pubs/ICT%20TR%2002%202008.pdf},
year = {2008},
date = {2008-06-01},
number = {ICT-TR-02-2008},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {In this report I will document the work I have done during my internship at the Institute for Creative Technologies from 22 January to 25 April under the supervision of Louis-Philippe Morency. During this time I have done research in the field of virtual humans, more specifically in the field of predicting and producing listener backchannels. But more on that later. I will start this report with a little background about the Institute for Creative Technologies and the project group which I was part of. After this, the goal of my internship will be explained in Section 2. A general overview of our approach to achieving the goals set in Section 2 will be given in Section 3. A more detailed description of the different steps taken will be given in Section 4. Following on from that, the results of the conducted research will be presented in Section 5. Finally, a discussion of the work done, recommendations for improvement, and future work will be given in Section 6.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Solomon, Steve; van Lent, Michael; Core, Mark; Carpenter, Paul; Rosenberg, Milton
A Language for Modeling Cultural Norms, Biases and Stereotypes for Human Behavior Models Technical Report
2008.
@techreport{solomon_language_2008,
title = {A Language for Modeling Cultural Norms, Biases and Stereotypes for Human Behavior Models},
author = {Steve Solomon and Michael van Lent and Mark Core and Paul Carpenter and Milton Rosenberg},
url = {http://ict.usc.edu/pubs/A%20Language%20for%20Modeling%20Cultural%20Norms,%20Biases%20and%20Stereotypes%20for%20Human%20Behavior%20Models.pdf},
year = {2008},
date = {2008-04-01},
abstract = {Increasingly, the military has requirements for teaching cultural awareness, which demands flexible representations of cultural knowledge. The Culturally-Affected Behavior project seeks to define a language for encoding ethnographic data in order to capture cultural knowledge and use that knowledge to affect human behavior models. Having anthropologists encode ethnographic data will validate the language and will result in a library of culture models for immersive training.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Carre, David; Levasseur, Marco; Gratch, Jonathan; Jacopin, Eric
Multimodal Toolbox: Analyzing Gestures Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 03 2008, 2008.
@techreport{carre_multimodal_2008,
title = {Multimodal Toolbox: Analyzing Gestures},
author = {David Carre and Marco Levasseur and Jonathan Gratch and Eric Jacopin},
url = {http://ict.usc.edu/pubs/ICT%20TR%2003%202008.pdf},
year = {2008},
date = {2008-01-01},
number = {ICT TR 03 2008},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {Rapport between people and virtual human agents is not limited to just speech. There are many non-verbal behaviors such as gestures or facial expressions that can express feelings or convey a message. One of the challenges in making an agent appear more realistic is to make his non-verbal behaviors appear more natural. To accomplish this, it is essential to find out how and when gestures are performed. In order to determine how gestures are performed, it is necessary to assess different appearances of the same gesture and the mapping between their respective function. To determine when gestures are performed, the key is to find relevant contextual features and their links with gestures, which will lead to the prediction of the moment they should be performed. Finally, both of these issues can now be tackled with the provided toolbox. Preliminary results show that we have found some gesture patterns. Besides, we were able, based on contextual features, to predict when the agent should nod his head. Early results appear to show the agent nods at an opportune time. Moreover, this toolbox generalizes the results to other kinds of gestures than head nods, which is the goal of this study.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Peers, Pieter; Mahajan, Dhruv K.; Lamond, Bruce; Ghosh, Abhijeet; Matusik, Wojciech; Ramamoorthi, Ravi; Debevec, Paul
Compressive Light Transport Sensing Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 05 2008, 2008.
@techreport{peers_compressive_2008,
title = {Compressive Light Transport Sensing},
author = {Pieter Peers and Dhruv K. Mahajan and Bruce Lamond and Abhijeet Ghosh and Wojciech Matusik and Ravi Ramamoorthi and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT%20TR%2005%202008.pdf},
year = {2008},
date = {2008-01-01},
number = {ICT TR 05 2008},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {In this paper we propose a new framework for capturing light transport data of a real scene, based on the recently developed theory of compressive sensing. Compressive sensing offers a solid mathematical framework to infer a sparse signal from a limited number of non-adaptive measurements. Besides introducing compressive sensing for fast acquisition of light transport to computer graphics, we develop several innovations that address specific challenges for image-based relighting, and which may have broader implications. We develop a novel hierarchical decoding algorithm that improves reconstruction quality by exploiting inter-pixel coherency relations. Additionally, we design new non-adaptive illumination patterns that minimize measurement noise and further improve reconstruction quality. We illustrate our framework by capturing detailed high-resolution reflectance fields for image-based relighting.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Jones, Andrew; Chiang, Jen-Yuan; Ghosh, Abhijeet; Lang, Magnus; Hullin, Matthias; Busch, Jay; Debevec, Paul
Real-time Geometry and Reflectance Capture for Digital Face Replacement Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 04 2008, 2008.
@techreport{jones_real-time_2008,
title = {Real-time Geometry and Reflectance Capture for Digital Face Replacement},
author = {Andrew Jones and Jen-Yuan Chiang and Abhijeet Ghosh and Magnus Lang and Matthias Hullin and Jay Busch and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT-TR-04-2008.pdf},
year = {2008},
date = {2008-01-01},
number = {ICT TR 04 2008},
institution = {University of Southern California Institute for Creative Technologies},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
van Lent, Michael
Culturally and Emotionally Affected Behavior Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 01 2008, 2008.
@techreport{van_lent_culturally_2008,
title = {Culturally and Emotionally Affected Behavior},
author = {Michael van Lent},
year = {2008},
date = {2008-01-01},
number = {ICT TR 01 2008},
institution = {University of Southern California Institute for Creative Technologies},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
2024
Marti, Deniz; Hanrahan, David; Sanchez-Triana, Ernesto; Wells, Mona; Corra, Lilian; Hu, Howard; Breysse, Patrick N.; Laborde, Amalia; Caravanos, Jack; Bertollini, Roberto; Porterfield, Kate; Fuller, Richard
Structured Expert Judgement Approach of the Health Impact of Various Chemicals and Classes of Chemicals Technical Report
Public and Global Health 2024.
@techreport{marti_structured_2024,
title = {Structured Expert Judgement Approach of the Health Impact of Various Chemicals and Classes of Chemicals},
author = {Deniz Marti and David Hanrahan and Ernesto Sanchez-Triana and Mona Wells and Lilian Corra and Howard Hu and Patrick N. Breysse and Amalia Laborde and Jack Caravanos and Roberto Bertollini and Kate Porterfield and Richard Fuller},
url = {http://medrxiv.org/lookup/doi/10.1101/2024.01.30.24301863},
doi = {10.1101/2024.01.30.24301863},
year = {2024},
date = {2024-02-01},
urldate = {2024-02-21},
institution = {Public and Global Health},
abstract = {Introduction
Chemical contamination and pollution are an ongoing threat to human health and the environment. The concern over the consequences of chemical exposures at the global level continues to grow. Because resources are constrained, there is a need to prioritize interventions focused on the greatest health impact. Data, especially related to chemical exposures, are rarely available for most substances of concern, and alternate methods to evaluate their impact are needed.
Structured Expert Judgment (SEJ) Process
A Structured Expert Judgment process was performed to provide plausible estimates of health impacts for 16 commonly found pollutants: asbestos, arsenic, benzene, chromium, cadmium, dioxins, fluoride, highly hazardous pesticides (HHPs), lead, mercury, polycyclic-aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), Per- and Polyfluorinated Substances (PFAs), phthalates, endocrine disrupting chemicals (EDCs), and brominated flame retardants (BRFs). This process, undertaken by sector experts, weighed individual estimations of the probable global health scale health impacts of each pollutant using objective estimates of the expert opinions’ statistical accuracy and informativeness.
Main Findings
The foremost substances, in terms of mean projected annual total deaths, were lead, asbestos, arsenic, and HHPs. Lead surpasses the others by a large margin, with an estimated median value of 1.7 million deaths annually. The three other substances averaged between 136,000 and 274,000 deaths per year. Of the 12 other chemicals evaluated, none reached an estimated annual death count exceeding 100,000. These findings underscore the importance of prioritizing available resources on reducing and remediating the impacts of these key pollutants.
Range of Health Impacts
Based on the evidence available, experts concluded some of the more notorious chemical pollutants, such as PCBs and dioxin, do not result in high levels of human health impact from a global scale perspective. However, the chemical toxicity of some compounds released in recent decades, such as Endocrine Disrupters and PFAs, cannot be ignored, even if current impacts are limited. Moreover, the impact of some chemicals may be disproportionately large in some geographic areas. Continued research and monitoring are essential; and a preventative approach is needed for chemicals.
Future Directions
These results, and potential similar analyses of other chemicals, are provided as inputs to ongoing discussions about priority setting for global chemicals and pollution management. Furthermore, we suggest that this SEJ process be repeated periodically as new information becomes available.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
2023
Prinzing, Michael; Garton, Catherine; Berman, Catherine J.; Zhou, Jieni; West, Taylor Nicole; Gratch, Jonathan; Fredrickson, Barbara
Can AI Agents Help Humans to Connect? Technical Report
PsyArXiv 2023.
@techreport{prinzing_can_2023,
title = {Can AI Agents Help Humans to Connect?},
author = {Michael Prinzing and Catherine Garton and Catherine J. Berman and Jieni Zhou and Taylor Nicole West and Jonathan Gratch and Barbara Fredrickson},
url = {https://osf.io/muq6s},
doi = {10.31234/osf.io/muq6s},
year = {2023},
date = {2023-10-01},
urldate = {2023-12-07},
institution = {PsyArXiv},
abstract = {This paper reports on a pre-registered experiment designed to test whether artificial agents can help people to create more moments of high-quality connection with other humans. Of four pre-registered hypotheses, we found (partial) support for only one.},
keywords = {AI, UARC, Virtual Humans},
pubstate = {published},
tppubtype = {techreport}
}
Saxon, Leslie; Boberg, Jill; Faulk, Robert; Barrett, Trevor
Identifying relationships between compression garments and recovery in a military training environment Technical Report
In Review 2023.
@techreport{saxon_identifying_2023,
title = {Identifying relationships between compression garments and recovery in a military training environment},
author = {Leslie Saxon and Jill Boberg and Robert Faulk and Trevor Barrett},
url = {https://www.researchsquare.com/article/rs-3193173/v1},
doi = {10.21203/rs.3.rs-3193173/v1},
year = {2023},
date = {2023-07-01},
urldate = {2023-09-21},
institution = {In Review},
abstract = {Development and maintenance of physical capabilities is an essential part of combat readiness in the military. This readiness requires continuous training and is therefore compromised by injury. Because Service Members (SMs) must be physically and cognitively prepared to conduct multifaceted operations in support of strategic objectives, and because the Department of Defense’s (DoD) non-deployable rate and annual costs associated with treating SMs continue to rise at an alarming rate, finding a far-reaching and efficient solution to prevent such injuries is a high priority. Compression garments (CGs) have become increasingly popular over the past decade in human performance applications, and reportedly facilitate post-exercise recovery by reducing muscle soreness, increasing blood lactate removal, and increasing perception of recovery, but the evidence is mixed, at best. In the current study we explored whether CG use, and duration of use, improves recovery and mitigates muscle soreness effectively in an elite Marine training course. In order to test this, we subjected Service Members to fatiguing exercise and then measured subjective and objective recovery and soreness using participant reports and grip and leg strength over a 72-hour recovery period. Findings from this study suggest that wearing CGs for post training recovery showed significant and moderate positive effects on subjective soreness, fatigue, and perceived level of recovery. We did not find statistically significant effects on physical performance while testing grip or leg strength. These findings suggest that CG may be a beneficial strategy for military training environments to accelerate muscle recovery after high-intensity exercise, without adverse effects to the wearer or negative impact on military training.},
keywords = {CBC, UARC},
pubstate = {published},
tppubtype = {techreport}
}
2021
Nye, Benjamin D.; Core, Mark G.; Jaiswal, Shikhar; Ghosal, Aviroop; Auerbach, Daniel
Acting Engaged: Leveraging Play Persona Archetypes for Semi-Supervised Classification of Engagement Technical Report
International Educational Data Mining Society 2021, (Publication Title: International Educational Data Mining Society ERIC Number: ED615498).
@techreport{nye_acting_2021,
title = {Acting Engaged: Leveraging Play Persona Archetypes for Semi-Supervised Classification of Engagement},
author = {Benjamin D. Nye and Mark G. Core and Shikhar Jaiswal and Aviroop Ghosal and Daniel Auerbach},
url = {https://eric.ed.gov/?id=ED615498},
year = {2021},
date = {2021-01-01},
urldate = {2023-03-31},
institution = {International Educational Data Mining Society},
abstract = {Engaged and disengaged behaviors have been studied across a variety of educational contexts. However, tools to analyze engagement typically require custom-coding and calibration for a system. This limits engagement detection to systems where experts are available to study patterns and build detectors. This work studies a new approach to classify engagement patterns without expert input, by using a play persona methodology where labeled archetype data is generated by novice testers acting out different engagement patterns in a system. Domain-agnostic task features (e.g., response time to an activity, scores/correctness, task difficulty) are extracted from standardized data logs for both archetype and authentic user sessions. A semi-supervised methodology was used to label engagement; bottom-up clusters were combined with archetype data to build a classifier. This approach was analyzed with a focus on cold-start performance on small samples, using two metrics: consistency with larger full-sample cluster assignments and stability of points staying in the same cluster once assigned. These were compared against a baseline of clustering without an incrementally trained classifier. Findings on a data set from a branching multiple-choice scenario-based tutoring system indicated that approximately 52 unlabeled samples and 51 play-test labeled samples were sufficient to classify holdout sessions at 85% consistency with a full set of 145 unsupervised samples. Additionally, alignment to play persona samples for the full set matched expert labels for clusters. Use-cases and limitations of this approach are discussed. [For the full proceedings, see ED615472.]},
note = {Publication Title: International Educational Data Mining Society
ERIC Number: ED615498},
keywords = {Learning Sciences, UARC},
pubstate = {published},
tppubtype = {techreport}
}
2019
Stocco, Andrea; Steine-Hanson, Zoe; Koh, Natalie; Laird, John E.; Lebiere, Christian J.; Rosenbloom, Paul
Analysis of the Human Connectome Data Supports the Notion of A “Common Model of Cognition” for Human and Human-Like Intelligence Technical Report
Neuroscience 2019.
@techreport{stocco_analysis_2019,
title = {Analysis of the Human Connectome Data Supports the Notion of A “Common Model of Cognition” for Human and Human-Like Intelligence},
author = {Andrea Stocco and Zoe Steine-Hanson and Natalie Koh and John E. Laird and Christian J. Lebiere and Paul Rosenbloom},
url = {http://biorxiv.org/lookup/doi/10.1101/703777},
doi = {10.1101/703777},
year = {2019},
date = {2019-07-01},
pages = {38},
institution = {Neuroscience},
abstract = {The Common Model of Cognition (CMC) is a recently proposed, consensus architecture intended to capture decades of progress in cognitive science on modeling human and human-like intelligence. Because of the broad agreement around it and preliminary mappings of its components to specific brain areas, we hypothesized that the CMC could be a candidate model of the large-scale functional architecture of the human brain. To test this hypothesis, we analyzed functional MRI data from 200 participants and seven different tasks that cover the broad range of cognitive domains. The CMC components were identified with functionally homologous brain regions through canonical fMRI analysis, and their communication pathways were translated into predicted patterns of effective connectivity between regions. The resulting dynamic linear model was implemented and fitted using Dynamic Causal Modeling, and compared against four alternative brain architectures that had been previously proposed in the field of neuroscience (two hierarchical architectures and two hub-and-spoke architectures) using a Bayesian approach. The results show that, in all cases, the CMC vastly outperforms all other architectures, both within each domain and across all tasks. The results suggest that a common, general architecture that could be used for artificial intelligence effectively underpins all aspects of human cognition, from the overall functional architecture of the human brain to higher level thought processes.},
keywords = {UARC, Virtual Humans},
pubstate = {published},
tppubtype = {techreport}
}
2014
The Context of Military Environments: Social and Organizational Factors Technical Report
National Academies Press Washington, DC, 2014.
@techreport{noauthor_context_2014,
title = {The Context of Military Environments: Social and Organizational Factors},
url = {http://sites.nationalacademies.org/DBASSE/BBCSS/CurrentProjects/DBASSE_080746},
year = {2014},
date = {2014-09-01},
address = {Washington, DC},
institution = {National Academies Press},
abstract = {The U.S. Army faces a variety of challenges to maintain a ready and capable force into the future. Its missions are diverse, following a continuum from peace to war that includes combat and counterinsurgency operations as well as negotiation, reconstruction, and stability operations that require a variety of personnel and skill sets to execute. Missions often demand rapid decision making and coordination with others in novel ways, so that personnel are not simply following a specific set of tactical orders but, rather, carrying out mission command through an understanding of broader strategic goals in order to develop and choose among courses of action. Like any workforce, the Army is diverse in terms of demographic characteristics, such as gender and race, with a commitment of its leadership to ensure equal opportunities across all demographic parties. With these challenges comes the urgent need to better understand how contextual factors influence soldier and small unit behavior and mission performance.},
keywords = {Virtual Humans},
pubstate = {published},
tppubtype = {techreport}
}
2012
Gahm, Gregory; Reger, Greg; Ingram, Mary V.; Reger, Mark; Rizzo, Albert
A Multisite, Randomized Clinical Trial of Virtual Reality and Prolonged Exposure Therapy for Active Duty Soldiers with PTSD Technical Report
no. A611975, 2012.
@techreport{gahm_multisite_2012,
title = {A Multisite, Randomized Clinical Trial of Virtual Reality and Prolonged Exposure Therapy for Active Duty Soldiers with PTSD},
author = {Gregory Gahm and Greg Reger and Mary V. Ingram and Mark Reger and Albert Rizzo},
url = {http://ict.usc.edu/pubs/A%20Multisite,%20Randomized%20Clinical%20Trial%20of%20Virtual%20Reality%20and%20Prolonged%20Exposure%20Therapy%20for%20Active%20Duty%20Soldiers%20with%20PTSD.pdf},
year = {2012},
date = {2012-12-01},
number = {A611975},
abstract = {This randomized, single blind study extends recruitment to an additional active duty site (Womack Army Medical Center at Ft Bragg) in support of a previously funded clinical trial to evaluate the efficacy of virtual reality exposure therapy (VRET) and prolonged exposure therapy (PE) with a waitlist (WL) group in the treatment of posttraumatic stress disorder (PTSD) in active duty (AD) Soldiers with combat-related trauma. During the first year, the study team developed the infrastructure to implement the trial including personnel recruitment, hiring, and initial training, process development to identify, screen, and enroll participants, and research protocol development and approval by IRBs. During the second year hiring of clinical staff and training of the study team was completed. Recruitment and enrollment commenced.},
keywords = {DoD, MedVR},
pubstate = {published},
tppubtype = {techreport}
}
Graham, Paul; Tunwattanapong, Borom; Busch, Jay; Yu, Xueming; Jones, Andrew; Debevec, Paul; Ghosh, Abhijeet
Measurement-based Synthesis of Facial Microgeometry Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 01 2012, 2012.
@techreport{graham_measurement-based_2012,
title = {Measurement-based Synthesis of Facial Microgeometry},
author = {Paul Graham and Borom Tunwattanapong and Jay Busch and Xueming Yu and Andrew Jones and Paul Debevec and Abhijeet Ghosh},
url = {http://ict.usc.edu/pubs/ICT-TR-01-2012.pdf},
year = {2012},
date = {2012-11-01},
number = {ICT TR 01 2012},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We present a technique for generating microstructure-level facial geometry by augmenting a mesostructure-level facial scan with detail synthesized from a set of exemplar skin patches scanned at much higher resolution. We use constrained texture synthesis based on image analogies to increase the resolution of the facial scan in a way that is consistent with the scanned mesostructure. We digitize the exemplar patches with a polarization-based computational illumination technique which considers specular reflection and single scattering. The recorded microstructure patches can be used to synthesize full-facial microstructure detail for either the same subject or to a different subject. We show that the technique allows for greater realism in facial renderings including more accurate reproduction of skin’s specular roughness and anisotropic reflection effects.},
keywords = {Graphics, UARC},
pubstate = {published},
tppubtype = {techreport}
}
Fyffe, Graham
High Fidelity Facial Hair Capture Technical Report
University of Southern California Institute for Creative Technologies Playa Vista, CA, no. ICT TR 02 2012, 2012.
@techreport{fyffe_high_2012,
title = {High Fidelity Facial Hair Capture},
author = {Graham Fyffe},
url = {https://apps.dtic.mil/sti/trecms/pdf/AD1170996.pdf},
year = {2012},
date = {2012-08-01},
booktitle = {SIGGRAPH},
number = {ICT TR 02 2012},
address = {Playa Vista, CA},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We propose an extension to multi-view face capture that reconstructs high quality facial hair automatically. Multi-view stereo is well known for producing high quality smooth surfaces and meshes, but fails on fine structure such as hair. We exploit this failure, and automatically detect the hairs on a face by careful analysis of the pixel reconstruction error of the multi-view stereo result. Central to our work is a novel stereo matching cost function, which we call equalized cross correlation, that properly accounts for both camera sensor noise and pixel sampling variance. In contrast to previous works that treat hair modeling as a synthesis problem based on image cues, we reconstruct facial hair to explain the same high-resolution input photographs used for face reconstruction, producing a result with higher fidelity to the input photographs.},
keywords = {Graphics},
pubstate = {published},
tppubtype = {techreport}
}
2010
Chiang, Jen-Yuan; Fyffe, Graham
Realistic Real-Time Rendering of Eyes and Teeth Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 01 2010, 2010.
@techreport{chiang_realistic_2010,
title = {Realistic Real-Time Rendering of Eyes and Teeth},
author = {Jen-Yuan Chiang and Graham Fyffe},
url = {http://ict.usc.edu/pubs/ICT%20TR%2001%202010.pdf},
year = {2010},
date = {2010-09-01},
number = {ICT TR 01 2010},
institution = {University of Southern California Institute for Creative Technologies},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Zheng, Jun; Ghosh, Abhijeet
Specular Normal Synthesis Using Stochastic Super-resolution for Detailed Facial Geometry Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 02 2010, 2010.
@techreport{zheng_specular_2010,
title = {Specular Normal Synthesis Using Stochastic Super-resolution for Detailed Facial Geometry},
author = {Jun Zheng and Abhijeet Ghosh},
url = {http://www.ict.usc.edu//pubs/ICT-TR-02-2010.pdf},
year = {2010},
date = {2010-01-01},
number = {ICT TR 02 2010},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {Detailed facial geometry is critical for the visual realism of face models in computer games, movies, and virtual reality applications. The existing face scanning methods, however, are either sacrificing resolution for real-time processing, or requiring expensive high-speed cameras. In this work we propose a new technique for real-time high-resolution facial scanning using spherical gradient illumination. The key elements of the approach are the use of stochastic super-resolution to generate specular normal map based on diffuse normal map, instead of capturing both of them during scanning process. We analyze a training dataset of diffuse normal maps and specular normals of a particular object and learn the mapping from low-frequency components of diffuse normal maps to high-frequency components of specular normal maps of that object. This enables us to infer, for example, the most likely high-resolution specular normal map detail depicting the same person as a low-resolution diffuse normal map given as input. Experimental results show that the proposed algorithm generates high-quality specular normal maps from diffuse normal map inputs.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
2009
Yamada, Hideshi; Peers, Pieter; Debevec, Paul
Compact Representation of Reflectance Fields using Clustered Sparse Residual Factorization Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 02 2009, 2009.
@techreport{yamada_compact_2009,
title = {Compact Representation of Reflectance Fields using Clustered Sparse Residual Factorization},
author = {Hideshi Yamada and Pieter Peers and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT-TR-02-2009.pdf},
year = {2009},
date = {2009-01-01},
number = {ICT TR 02 2009},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We present a novel compression method for fixed viewpoint reflectance fields, captured for example by a Light Stage. Our compressed representation consists of a global approximation that exploits the similarities between the reflectance functions of different pixels, and a local approximation that encodes the per-pixel residual with the global approximation. Key to our method is a clustered sparse residual factorization. This sparse residual factorization ensures that the per-pixel residual matrix is as sparse as possible, enabling a compact local approximation. Finally, we demonstrate that the presented compact representation is well suited for high-quality real-time rendering.},
keywords = {Graphics},
pubstate = {published},
tppubtype = {techreport}
}
Alexander, Oleg; Rogers, Mike; Lambeth, William; Chiang, Matt; Debevec, Paul
Creating a Photoreal Digital Actor: The Digital Emily Project Technical Report
University of Southern California Institute for Creative Technologies London, UK, no. ICT TR 04 2009, 2009.
@techreport{alexander_creating_2009-1,
title = {Creating a Photoreal Digital Actor: The Digital Emily Project},
author = {Oleg Alexander and Mike Rogers and William Lambeth and Matt Chiang and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT%20TR%2004%202009.pdf},
year = {2009},
date = {2009-01-01},
booktitle = {IEEE European Conference on Visual Media Production (CVMP)},
number = {ICT TR 04 2009},
address = {London, UK},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {The Digital Emily Project is a collaboration between facial animation company Image Metrics and the Graphics Laboratory at the University of Southern California's Institute for Creative Technologies to achieve one of the world's first photorealistic digital facial performances. The project leverages latest-generation techniques in high-resolution face scanning, character rigging, video-based facial animation, and compositing. An actress was first filmed on a studio set speaking emotive lines of dialog in high definition. The lighting on the set was captured as a high dynamic range light probe image. The actress' face was then three-dimensionally scanned in thirty-three facial expressions showing different emotions and mouth and eye movements using a high-resolution facial scanning process accurate to the level of skin pores and fine wrinkles. Lighting-independent diffuse and specular reflectance maps were also acquired as part of the scanning process. Correspondences between the 3D expression scans were formed using a semi-automatic process, allowing a blendshape facial animation rig to be constructed whose expressions closely mirrored the shapes observed in the rich set of facial scans; animated eyes and teeth were also added to the model. Skin texture detail showing dynamic wrinkling was converted into multiresolution displacement maps also driven by the blend shapes. A semi-automatic video-based facial animation system was then used to animate the 3D face rig to match the performance seen in the original video, and this performance was tracked onto the facial motion in the studio video. The final face was illuminated by the captured studio illumination and shading using the acquired reflectance maps with a skin translucency shading algorithm. Using this process, the project was able to render a synthetic facial performance which was generally accepted as being a real face.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Lamond, Bruce; Peers, Pieter; Ghosh, Abhijeet; Debevec, Paul
Image-based Separation of Diffuse and Specular Reflections using Environmental Structured Illumination, Supplemental Material Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 01 2009, 2009.
@techreport{lamond_image-based_2009,
title = {Image-based Separation of Diffuse and Specular Reflections using Environmental Structured Illumination, Supplemental Material},
author = {Bruce Lamond and Pieter Peers and Abhijeet Ghosh and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT-TR-01-2009.pdf},
year = {2009},
date = {2009-01-01},
number = {ICT TR 01 2009},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We present an image-based method for separating diffuse and specular reflections using environmental structured illumination. Two types of structured illumination are discussed: phase-shifted sine wave patterns, and phase-shifted binary stripe patterns. In both cases the low-pass filtering nature of diffuse reflections is utilized to separate the reflection components. We illustrate our method on a wide range of example scenes and applications.},
keywords = {Graphics},
pubstate = {published},
tppubtype = {techreport}
}
2008
de Kok, Iwan
Internship Report on Predicting Listener Backchannels Technical Report
University of Southern California Institute for Creative Technologies no. ICT-TR-02-2008, 2008.
@techreport{de_kok_internship_2008,
title = {Internship Report on Predicting Listener Backchannels},
author = {Iwan de Kok},
url = {http://ict.usc.edu/pubs/ICT%20TR%2002%202008.pdf},
year = {2008},
date = {2008-06-01},
number = {ICT-TR-02-2008},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {In this report I document the work I did during my internship at the Institute for Creative Technologies from 22 January to 25 April under the supervision of Louis-Philippe Morency. During this time I did research in the field of virtual humans, more specifically in the field of predicting and producing listener backchannels. I start this report with a little background about the Institute for Creative Technologies and the project group which I was part of. After this, the goal of my internship is explained in Section 2. A general overview of our approach to achieving these goals is given in Section 3. A more detailed description of the different steps taken is given in Section 4. Following that, the results of the conducted research are presented in Section 5. Finally, a discussion of the work done, recommendations for improvement, and future work is given in Section 6.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Solomon, Steve; van Lent, Michael; Core, Mark; Carpenter, Paul; Rosenberg, Milton
A Language for Modeling Cultural Norms, Biases and Stereotypes for Human Behavior Models Technical Report
2008.
@techreport{solomon_language_2008,
title = {A Language for Modeling Cultural Norms, Biases and Stereotypes for Human Behavior Models},
author = {Steve Solomon and Michael van Lent and Mark Core and Paul Carpenter and Milton Rosenberg},
url = {http://ict.usc.edu/pubs/A%20Language%20for%20Modeling%20Cultural%20Norms,%20Biases%20and%20Stereotypes%20for%20Human%20Behavior%20Models.pdf},
year = {2008},
date = {2008-04-01},
abstract = {Increasingly, the military has requirements for teaching cultural awareness, which demands flexible representations of cultural knowledge. The Culturally-Affected Behavior project seeks to define a language for encoding ethnographic data in order to capture cultural knowledge and use that knowledge to affect human behavior models. Having anthropologists encode ethnographic data will validate the language and will result in a library of culture models for immersive training.},
keywords = {Learning Sciences},
pubstate = {published},
tppubtype = {techreport}
}
Carre, David; Levasseur, Marco; Gratch, Jonathan; Jacopin, Eric
Multimodal Toolbox: Analyzing Gestures Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 03 2008, 2008.
@techreport{carre_multimodal_2008,
title = {Multimodal Toolbox: Analyzing Gestures},
author = {David Carre and Marco Levasseur and Jonathan Gratch and Eric Jacopin},
url = {http://ict.usc.edu/pubs/ICT%20TR%2003%202008.pdf},
year = {2008},
date = {2008-01-01},
number = {ICT TR 03 2008},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {Rapport between people and virtual human agents is not limited to just speech. There are many non-verbal behaviors, such as gestures or facial expressions, that can express feelings or convey a message. One of the challenges in making an agent appear more realistic is to make his non-verbal behaviors appear more natural. To accomplish this, it is essential to find out how and when gestures are performed. In order to determine how gestures are performed, it is necessary to assess different appearances of the same gesture and the mapping between their respective functions. To determine when gestures are performed, the key is to find relevant contextual features and their links with gestures, which will lead to the prediction of the moment they should be performed. Both of these issues can now be tackled with the provided toolbox. Preliminary results show that we have found some gesture patterns. Besides, we were able, based on contextual features, to predict when the agent should nod his head. Early results appear to show the agent nods at an opportune time. Moreover, this toolbox generalizes the results to other kinds of gestures than head nods, which is the goal of this study.},
keywords = {Virtual Humans},
pubstate = {published},
tppubtype = {techreport}
}
Peers, Pieter; Mahajan, Dhruv K.; Lamond, Bruce; Ghosh, Abhijeet; Matusik, Wojciech; Ramamoorthi, Ravi; Debevec, Paul
Compressive Light Transport Sensing Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 05 2008, 2008.
@techreport{peers_compressive_2008,
title = {Compressive Light Transport Sensing},
author = {Pieter Peers and Dhruv K. Mahajan and Bruce Lamond and Abhijeet Ghosh and Wojciech Matusik and Ravi Ramamoorthi and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT%20TR%2005%202008.pdf},
year = {2008},
date = {2008-01-01},
number = {ICT TR 05 2008},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {In this paper we propose a new framework for capturing light transport data of a real scene, based on the recently developed theory of compressive sensing. Compressive sensing offers a solid mathematical framework to infer a sparse signal from a limited number of non-adaptive measurements. Besides introducing compressive sensing for fast acquisition of light transport to computer graphics, we develop several innovations that address specific challenges for image-based relighting, and which may have broader implications. We develop a novel hierarchical decoding algorithm that improves reconstruction quality by exploiting inter-pixel coherency relations. Additionally, we design new non-adaptive illumination patterns that minimize measurement noise and further improve reconstruction quality. We illustrate our framework by capturing detailed high-resolution reflectance fields for image-based relighting.},
keywords = {Graphics},
pubstate = {published},
tppubtype = {techreport}
}
Jones, Andrew; Chiang, Jen-Yuan; Ghosh, Abhijeet; Lang, Magnus; Hullin, Matthias; Busch, Jay; Debevec, Paul
Real-time Geometry and Reflectance Capture for Digital Face Replacement Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 04 2008, 2008.
@techreport{jones_real-time_2008,
title = {Real-time Geometry and Reflectance Capture for Digital Face Replacement},
author = {Andrew Jones and Jen-Yuan Chiang and Abhijeet Ghosh and Magnus Lang and Matthias Hullin and Jay Busch and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT-TR-04-2008.pdf},
year = {2008},
date = {2008-01-01},
number = {ICT TR 04 2008},
institution = {University of Southern California Institute for Creative Technologies},
keywords = {Graphics},
pubstate = {published},
tppubtype = {techreport}
}
van Lent, Michael
Culturally and Emotionally Affected Behavior Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 01 2008, 2008.
@techreport{van_lent_culturally_2008,
title = {Culturally and Emotionally Affected Behavior},
author = {Michael van Lent},
year = {2008},
date = {2008-01-01},
number = {ICT TR 01 2008},
institution = {University of Southern California Institute for Creative Technologies},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
2007
Zbylut, Michelle L.; Metcalf, Kimberly A.; Kim, Julia; Hill, Randall W.; Rocher, Scott
Army Excellence in Leadership (AXL): A Multimedia Approach to Building Tacit Knowledge and Cultural Reasoning Technical Report
no. Technical Report 1194, 2007.
@techreport{zbylut_army_2007,
title = {Army Excellence in Leadership (AXL): A Multimedia Approach to Building Tacit Knowledge and Cultural Reasoning},
author = {Michelle L. Zbylut and Kimberly A. Metcalf and Julia Kim and Randall W. Hill and Scott Rocher},
url = {http://ict.usc.edu/pubs/Army%20Excellence%20in%20Leadership%20(AXL)-%20A%20Multimedia%20Approach%20to%20Building%20Tacit%20Knowledge%20and%20Cultural%20Reasoning.pdf},
year = {2007},
date = {2007-01-01},
number = {Technical Report 1194},
abstract = {This report presents findings from a preliminary examination of the Army Excellence in Leadership (AXL) system, a leader intervention that targets the development of tacit leadership knowledge and cultural awareness in junior Army officers. Fifty-five junior officers interacted with a pilot version of a cultural awareness module from the AXL system. Results indicated that the AXL approach resulted in improvements in leader judgment on a forced-choice measure. Furthermore, results indicated that cultural issues were more salient to leaders after completion of the cultural awareness module. Reactions to training were generally positive, with officers indicating that the cultural awareness module was useful and stimulated thought. Additionally, this investigation explored the relationship between affect and learning and found that emotional responses to the AXL system were related to learning-relevant variables, such as judgment scores and officer reports that they could apply the training to their activities as a leader.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Lamond, Bruce; Peers, Pieter; Debevec, Paul
Fast Image-based Separation of Diffuse and Specular Reflections Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 02 2007, 2007.
@techreport{lamond_fast_2007,
title = {Fast Image-based Separation of Diffuse and Specular Reflections},
author = {Bruce Lamond and Pieter Peers and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT-TR-02-2007.pdf},
year = {2007},
date = {2007-01-01},
number = {ICT TR 02 2007},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We present a novel image-based method for separating diffuse and specular reflections of real objects under distant environmental illumination. By illuminating a scene with only four high frequency illumination patterns, the specular and diffuse reflections can be separated by computing the maximum and minimum observed pixel values. Furthermore, we show that our method can be extended to separate diffuse and specular components under image-based environmental illumination. Applications range from image-based modeling of reflectance properties to improved normal and geometry acquisition.},
keywords = {Graphics},
pubstate = {published},
tppubtype = {techreport}
}
2006
Tariq, Sarah; Gardner, Andrew; Llamas, Ignacio; Jones, Andrew; Debevec, Paul; Turk, Greg
Efficient Estimation of Spatially Varying Subsurface Scattering Parameters for Relighting Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 01 2006, 2006.
@techreport{tariq_efficient_2006,
title = {Efficient Estimation of Spatially Varying Subsurface Scattering Parameters for Relighting},
author = {Sarah Tariq and Andrew Gardner and Ignacio Llamas and Andrew Jones and Paul Debevec and Greg Turk},
url = {http://ict.usc.edu/pubs/ICT-TR-01-2006.pdf},
year = {2006},
date = {2006-01-01},
number = {ICT TR 01 2006},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We present an image-based technique to rapidly acquire spatially varying subsurface reflectance properties of a human face. The estimated properties can be used directly to render faces with spatially varying scattering, or can be used to estimate a robust average across the face. We demonstrate our technique with renderings of peoples' faces under novel, spatially-varying illumination and provide comparisons with current techniques. Our captured data consists of images of the face from a single viewpoint under two small sets of projected images. The first set, a sequence of phase shifted periodic stripe patterns, provides a per-pixel profile of how light scatters from adjacent locations. The second set contains structured light and is used to obtain face geometry. We match the observed reflectance profiles to scattering properties predicted by a scattering model using a lookup table. From these properties we can generate images of the face under any incident illumination, including local lighting. The rendered images exhibit realistic subsurface transport, including light bleeding across shadow edges. Our method works more than an order of magnitude faster than current techniques for capturing subsurface scattering information, and makes it possible for the first time to capture these properties over an entire face.},
keywords = {Graphics},
pubstate = {published},
tppubtype = {techreport}
}
Peers, Pieter; Hawkins, Tim; Debevec, Paul
A Reflective Light Stage Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 04 2006, 2006.
@techreport{peers_reflective_2006,
title = {A Reflective Light Stage},
author = {Pieter Peers and Tim Hawkins and Paul Debevec},
url = {http://ict.usc.edu/pubs/ICT-TR-04.2006.pdf},
year = {2006},
date = {2006-01-01},
number = {ICT TR 04 2006},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We present a novel acquisition device to capture high resolution 4D reflectance fields of real scenes. The device consists of a concave hemispherical surface coated with a rough specular paint and a digital video projector with a fish-eye lens positioned near the center of the hemisphere. The scene is placed near the projector, also near the center, and photographed from a fixed vantage point. The projector projects a high-resolution image of incident illumination which is reflected by the rough hemispherical surface to become the illumination on the scene. We demonstrate the utility of this device by capturing a high resolution hemispherical reflectance field of a specular object which would be difficult to capture using previous acquisition techniques.},
keywords = {Graphics},
pubstate = {published},
tppubtype = {techreport}
}
van der Werf, R. J.
Creating Rapport with Virtual Humans Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 02 2006, 2006.
@techreport{van_der_werf_creating_2006,
title = {Creating Rapport with Virtual Humans},
author = {R. J. van der Werf},
url = {http://ict.usc.edu/pubs/ICT-TR.02.2006-Rick.pdf},
year = {2006},
date = {2006-01-01},
number = {ICT TR 02 2006},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {This report describes the internship assignment Creating Rapport with Virtual Humans. The assignment is split into two separate parts. The first part is to improve the visual feature detection of the current mimicking system [MAA04]. This is done using a computer vision approach. Together with two other interns [LAM05], the whole mimicking system was improved, leading to a new Rapport system. The second part involves subject testing with the newly developed system. Firstly, the goal is to make a working system that can be reused and expanded in the future. Secondly, the goal is to use the data from the subject test to determine if rapport can be created with Virtual Humans. The resulting Rapport system should be a very reusable and expandable system, making it possible for other people, unfamiliar with the system, to easily use it for future testing. Unfortunately, too little data was obtained from subject testing to give a solid conclusion as to whether or not creating rapport with Virtual Humans is possible. The subject testing did lead to an improved testing procedure which makes future testing quite easy.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
2005
Kock, Arien; Gratch, Jonathan
An Evaluation of Automatic Lip-syncing Methods for Game Environments Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 01 2005, 2005.
@techreport{kock_evaluation_2005,
title = {An Evaluation of Automatic Lip-syncing Methods for Game Environments},
author = {Arien Kock and Jonathan Gratch},
url = {http://ict.usc.edu/pubs/ICT-TR.01.2005.pdf},
year = {2005},
date = {2005-01-01},
number = {ICT TR 01 2005},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {Lip-synching is the production of articulator motion corresponding to a given audible utterance. The Mission Rehearsal Exercise training system requires lip-synching to increase the believability of its virtual agents. In this report I document the selection, exploration, evaluation and comparison of several candidate lip-synching systems, ending with a recommendation. The evaluation focuses on the believability of articulators' expression, the foreseeable difficulty of integration into MRE’s architecture, the support for facial expressions related to semantics and prosodic features as well as the scalability of each system.},
keywords = {Virtual Humans},
pubstate = {published},
tppubtype = {techreport}
}
Maatman, R. M.; Gratch, Jonathan; Marsella, Stacy C.
Responsive Behavior of a Listening Agent Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 02 2005, 2005.
@techreport{maatman_responsive_2005,
title = {Responsive Behavior of a Listening Agent},
author = {R. M. Maatman and Jonathan Gratch and Stacy C. Marsella},
url = {http://ict.usc.edu/pubs/ICT-TR.02.2005.pdf},
year = {2005},
date = {2005-01-01},
number = {ICT TR 02 2005},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {The purpose of this assignment is twofold. First, the possibility of generating real-time responsive behavior is evaluated in order to create a more human-like agent. Second, the effect of the behavior of the agent on the human interactor is evaluated. The main motivation for the focus on responsive gestures is that much research has already been done on gestures that accompany the speaker, and none on gestures that accompany the listener, although responsiveness is a crucial part of a conversation. The responsive behavior of a virtual agent consists of performing gestures during the time a human is speaking to the agent. To generate the correct gestures, a literature survey was first carried out, from which it is concluded that with current Natural Language Understanding technology it is not possible to extract semantic features of human speech in real time. Thus, other features have to be considered. The result of the literature survey is a basic mapping between features obtainable in real time and the corresponding responsive behavior: if the speech contains a relatively long period of low pitch, perform a head nod; if the speech contains relatively high intensity, perform a head nod; if the speech contains disfluency, perform a posture shift, gazing behavior or a frown; if the human performs a posture shift, mirror this posture shift; if the human performs a head shake, mirror this head shake; if the human performs major gazing behavior, mimic this behavior. A design has been made to implement this mapping in the behavior of a virtual agent, and this design has been implemented, resulting in two programs: one to mirror the physical features of the human and one to extract the speech features from the voice of the human. The two programs are combined and the effect of the resulting behavior on the human interactor has been tested. The results of these tests are that performing responsive behavior has a positive effect on the natural behavior of a virtual agent and thus looks promising for future research. However, the gestures proposed by this mapping are not always context-independent. Thus, much refinement is still to be done and more functionality can be added to improve the responsive behavior. The conclusion of this research is twofold. First, performing responsive behaviors in real time is possible with the presented mapping, and this results in a more naturally behaving agent. Second, some responsive behavior is still dependent on semantic information. This leaves open the further enhancement of the presented mapping in order to increase the responsive behavior.},
keywords = {Social Simulation, Virtual Humans},
pubstate = {published},
tppubtype = {techreport}
}
Mao, Wenji; Gratch, Jonathan
Evaluating Social Causality and Responsibility Models: An Initial Report Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 03 2005, 2005.
@techreport{mao_evaluating_2005,
title = {Evaluating Social Causality and Responsibility Models: An Initial Report},
author = {Wenji Mao and Jonathan Gratch},
url = {http://ict.usc.edu/pubs/ICT-TR-03-2005.pdf},
year = {2005},
date = {2005-01-01},
number = {ICT TR 03 2005},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {Intelligent virtual agents are typically embedded in a social environment and must reason about social cause and effect. Social causal reasoning is qualitatively different from physical causal reasoning that underlies most current intelligent systems. Besides physical causality, the assessments of social cause emphasize epistemic variables including intentions, foreknowledge and perceived coercion. Modeling the process and inferences of social causality can enrich believability and cognitive capabilities of social intelligent agents. In this report, we present a general computational model of social causality and responsibility, and empirical results of a preliminary evaluation of the model in comparison with several other approaches.},
keywords = {Virtual Humans},
pubstate = {published},
tppubtype = {techreport}
}
2004
Debevec, Paul; Gardner, Andrew; Tchou, Chris; Hawkins, Tim
Postproduction Re-Illumination of Live Action Using Time-Multiplexed Lighting Technical Report
University of Southern California Institute for Creative Technologies Marina del Rey, CA, no. ICT TR 05.2004, 2004.
@techreport{debevec_postproduction_2004,
title = {Postproduction Re-Illumination of Live Action Using Time-Multiplexed Lighting},
author = {Paul Debevec and Andrew Gardner and Chris Tchou and Tim Hawkins},
url = {http://ict.usc.edu/pubs/Postproduction%20Re-Illumination%20of%20Live%20Action%20Using%20Time-Multiplexed%20Lighting.pdf},
year = {2004},
date = {2004-06-01},
number = {ICT TR 05.2004},
address = {Marina del Rey, CA},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {In this work, we present a technique for capturing a time-varying human performance in such a way that it can be re-illuminated in postproduction. The key idea is to illuminate the subject with a variety of rapidly changing time-multiplexed basis lighting conditions, and to record these lighting conditions with a fast enough video camera so that several or many different basis lighting conditions are recorded during the span of the final video's desired frame rate. In this poster we present two versions of such a system and propose plans for creating a complete, production-ready device.},
keywords = {Graphics},
pubstate = {published},
tppubtype = {techreport}
}
Gratch, Jonathan; Marsella, Stacy C.
Technical Details of a Domain-independent Framework for Modeling Emotion Technical Report
University of Southern California Institute for Creative Technologies Marina del Rey, CA, no. ICT TR 04.2004, 2004.
@techreport{gratch_technical_2004,
title = {Technical Details of a Domain-independent Framework for Modeling Emotion},
author = {Jonathan Gratch and Stacy C. Marsella},
url = {http://ict.usc.edu/pubs/Technical%20Details%20of%20a%20Domain-independent%20Framework%20for%20Modeling%20Emotion.pdf},
year = {2004},
date = {2004-01-01},
number = {ICT TR 04.2004},
address = {Marina del Rey, CA},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {This technical report elaborates on the technical details of the EMA model of emotional appraisal and coping. It should be seen as an appendix to the journal article on this topic (Gratch & Marsella, to appear)},
keywords = {Social Simulation, Virtual Humans},
pubstate = {published},
tppubtype = {techreport}
}
Mao, Wenji; Gratch, Jonathan
Decision-Theoretic Approach to Plan Recognition Technical Report
University of Southern California Institute for Creative Technologies Marina del Rey, CA, no. ICT TR 01.2004, 2004.
@techreport{mao_decision-theoretic_2004,
title = {Decision-Theoretic Approach to Plan Recognition},
author = {Wenji Mao and Jonathan Gratch},
url = {http://ict.usc.edu/pubs/Decision-Theoretic%20Approach%20to%20Plan%20Recognition.pdf},
year = {2004},
date = {2004-01-01},
number = {ICT TR 01.2004},
address = {Marina del Rey, CA},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {In this report, we first give a survey of work in the plan recognition field, including the evolution of different approaches and their strengths and weaknesses. We then propose two decision-theoretic approaches to the plan recognition problem, which explicitly take outcome utilities into consideration. One is an extension within the probabilistic reasoning framework, adding utility nodes to belief nets. The other is based on maximizing the estimated expected utility of possible plans. Illustrative examples are given to explain the approaches. Finally, we compare the two approaches presented in the report and summarize the work.},
keywords = {Virtual Humans},
pubstate = {published},
tppubtype = {techreport}
}
Muller, T. J.
Everything in perspective Technical Report
University of Southern California Institute for Creative Technologies Marina del Rey, CA, no. ICT TR 03.2004, 2004.
@techreport{muller_everything_2004,
title = {Everything in perspective},
author = {T. J. Muller},
url = {http://ict.usc.edu/pubs/Everything%20in%20perspective.pdf},
year = {2004},
date = {2004-01-01},
number = {ICT TR 03.2004},
address = {Marina del Rey, CA},
institution = {University of Southern California Institute for Creative Technologies},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Debevec, Paul; Tchou, Chris; Gardner, Andrew; Hawkins, Tim; Poullis, Charis; Stumpfel, Jessi; Jones, Andrew; Yun, Nathaniel; Einarsson, Per; Lundgren, Therese; Fajardo, Marcos; Martinez, Philippe
Estimating Surface Reflectance Properties of a Complex Scene under Captured Natural Illumination Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 06 2004, 2004.
@techreport{debevec_estimating_2004,
title = {Estimating Surface Reflectance Properties of a Complex Scene under Captured Natural Illumination},
author = {Paul Debevec and Chris Tchou and Andrew Gardner and Tim Hawkins and Charis Poullis and Jessi Stumpfel and Andrew Jones and Nathaniel Yun and Per Einarsson and Therese Lundgren and Marcos Fajardo and Philippe Martinez},
url = {http://ict.usc.edu/pubs/ICT-TR-06.2004.pdf},
year = {2004},
date = {2004-01-01},
number = {ICT TR 06 2004},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {We present a process for estimating spatially-varying surface reflectance of a complex scene observed under natural illumination conditions. The process uses a laser-scanned model of the scene's geometry, a set of digital images viewing the scene's surfaces under a variety of natural illumination conditions, and a set of corresponding measurements of the scene's incident illumination in each photograph. The process then employs an iterative inverse global illumination technique to compute surface colors for the scene which, when rendered under the recorded illumination conditions, best reproduce the scene's appearance in the photographs. In our process we measure BRDFs of representative surfaces in the scene to better model the non-Lambertian surface reflectance. Our process uses a novel lighting measurement apparatus to record the full dynamic range of both sunlit and cloudy natural illumination conditions. We employ Monte-Carlo global illumination, multiresolution geometry, and a texture atlas system to perform inverse global illumination on the scene. The result is a lighting-independent model of the scene that can be re-illuminated under any form of lighting. We demonstrate the process on a real-world archaeological site, showing that the technique can produce novel illumination renderings consistent with real photographs as well as reflectance properties that are consistent with ground-truth reflectance measurements.},
keywords = {Graphics},
pubstate = {published},
tppubtype = {techreport}
}
Hartholt, Arno; Muller, T. J.
Interaction on Emotions Technical Report
University of Southern California Institute for Creative Technologies Marina del Rey, CA, no. ICT TR 02.2004, 2004.
@techreport{hartholt_interaction_2004,
title = {Interaction on Emotions},
author = {Arno Hartholt and T. J. Muller},
url = {http://ict.usc.edu/pubs/Interaction%20on%20emotions.pdf},
year = {2004},
date = {2004-01-01},
number = {ICT TR 02.2004},
address = {Marina del Rey, CA},
institution = {University of Southern California Institute for Creative Technologies},
abstract = {This report describes the addition of an emotion dialogue to the Mission Rehearsal Exercise (MRE) system. The goal of the MRE system is to provide an immersive learning environment for army officer recruits. The user can engage in conversation with several intelligent agents in order to accomplish the goals within a certain scenario. Although these agents already possessed emotions, they were unable to express them verbally. A question-answer dialogue has been implemented for this purpose. The implementation makes use of proposition states for modelling knowledge, keyword scanning for natural language understanding, and templates for natural language generation. The system is implemented using Soar and TCL. An agent can understand emotion-related questions in four different domains: type, intensity, state, and the combination of responsible agent and blameworthiness. Some limitations arise due to the techniques used and the relatively short time frame in which the assignment was to be executed. The main issues are that the existing natural language understanding and generation modules could not be fully used, that very little context about the conversation is available, and that the emotion states simplify the emotional state of an agent. These limitations and other thoughts give rise to the following recommendations for further work: * Make full use of references. * Use coping strategies for generating agents' utterances. * Use focus mechanisms for generating agents' utterances. * Extend known utterances. * Use the NLU and NLG modules. * Use the emotion dialogue and states to influence emotions. * Fix known bugs.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
2003
van Lent, Michael; Hill, Randall W.; McAlinden, Ryan; Brobst, Paul
2002 Defense Modeling and Simulation Office (DMSO) Laboratory for Human Behavior Model Interchange Standards Technical Report
no. AFRL-HE-WP-TP-2007-0008, 2003.
@techreport{van_lent_2002_2003,
title = {2002 Defense Modeling and Simulation Office (DMSO) Laboratory for Human Behavior Model Interchange Standards},
author = {Michael van Lent and Randall W. Hill and Ryan McAlinden and Paul Brobst},
url = {http://ict.usc.edu/pubs/2002%20Defense%20Modeling%20and%20Simulation%20Office%20(DMSO)%20Laboratory%20for%20Human%20Behavior%20Model%20Interchange%20Standards.pdf},
year = {2003},
date = {2003-07-01},
number = {AFRL-HE-WP-TP-2007-0008},
abstract = {This report describes the effort to address the following research objective: "To begin to define, prototype, and demonstrate an interchange standard among Human Behavior Modeling (HBM)-related models in the Department of Defense (DoD), Industry, Academia, and other Government simulations by establishing a Laboratory for the Study of Human Behavior Representation Interchange Standards." Drawing on the experience, expertise, and technologies of the commercial computer game industry, the academic research community, and DoD simulation developers, the Institute for Creative Technologies discusses its design and implementation of a prototype HBM interface standard and describes its demonstration of that standard in a game-based simulation environment that combines HBM models from the entertainment industry and academic researchers.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Mao, Wenji; Gratch, Jonathan
The Social Credit Assignment Problem (Extended Version) Technical Report
University of Southern California Institute for Creative Technologies no. ICT TR 02 2003, 2003.
@techreport{mao_social_2003,
title = {The Social Credit Assignment Problem (Extended Version)},
author = {Wenji Mao and Jonathan Gratch},
url = {http://ict.usc.edu/pubs/ICT%20TR%2002%202003.pdf},
year = {2003},
date = {2003-01-01},
number = {ICT TR 02 2003},
institution = {University of Southern California Institute for Creative Technologies},
keywords = {Virtual Humans},
pubstate = {published},
tppubtype = {techreport}
}
Moore, Benjamin
QuBit Documentation Technical Report
University of Southern California Institute for Creative Technologies Marina del Rey, CA, no. ICT TR 01.2003, 2003.
@techreport{moore_qubit_2003,
title = {QuBit Documentation},
author = {Benjamin Moore},
url = {http://ict.usc.edu/pubs/QuBit%20Documentation.pdf},
year = {2003},
date = {2003-01-01},
number = {ICT TR 01.2003},
address = {Marina del Rey, CA},
institution = {University of Southern California Institute for Creative Technologies},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
2002
Gratch, Jonathan
Details of the CFOR Planner Technical Report
University of Southern California Institute for Creative Technologies Marina del Rey, CA, no. ICT TR 01.2002, 2002.
@techreport{gratch_details_2002,
title = {Details of the CFOR Planner},
author = {Jonathan Gratch},
url = {http://ict.usc.edu/pubs/Details%20of%20the%20CFOR%20Planner.pdf},
year = {2002},
date = {2002-01-01},
number = {ICT TR 01.2002},
address = {Marina del Rey, CA},
institution = {University of Southern California Institute for Creative Technologies},
keywords = {Virtual Humans},
pubstate = {published},
tppubtype = {techreport}
}
2001
Sadek, Ramy; Miraglia, Dave; Morie, Jacquelyn
3D Sound Design and Technology for the Sensory Environments Evaluations Project: Phase 1 Technical Report
University of Southern California Institute for Creative Technologies Marina del Rey, CA, no. ICT TR 01.2001, 2001.
@techreport{sadek_3d_2001,
title = {3D Sound Design and Technology for the Sensory Environments Evaluations Project: Phase 1},
author = {Ramy Sadek and Dave Miraglia and Jacquelyn Morie},
url = {http://ict.usc.edu/pubs/ICT-TR-01-2001.pdf},
year = {2001},
date = {2001-01-01},
number = {ICT TR 01.2001},
address = {Marina del Rey, CA},
institution = {University of Southern California Institute for Creative Technologies},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}