299 – Research Assistant, Virtual Doppelganger

Project Name
Virtual Doppelganger

Project Description
The research project will examine the effect of avatar appearance on user performance. The project team will run a study with participants assigned to different appearance conditions. Statistical analyses will be used to test for significant differences between conditions to evaluate the impact of avatar appearance.
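
The between-condition comparison could be sketched as below; the performance numbers and the choice of Welch's t statistic are illustrative assumptions, not the project's actual data or analysis plan:

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    se = math.sqrt(va / len(a) + vb / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical task-completion times (seconds) under two avatar conditions.
condition_a = [41.2, 39.8, 44.5, 40.1, 42.3, 38.7]
condition_b = [46.9, 45.2, 48.1, 44.8, 47.5, 46.0]

t = welch_t(condition_a, condition_b)
print(round(t, 2))
```

In practice the t statistic would be compared against a t distribution (or the analysis run with a standard statistics package) to obtain a p-value.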

Job Description
The research assistant will help design and run the study, process and analyze the data, and possibly contribute to a publication and presentation of the work at a scientific conference.

Preferred Skills

  • Experience programming Unity
  • Some experience in human-computer interaction
  • Some experience in experimental design

298 – Research Assistant, Natural Language Annotation

Project Name
Natural Language Annotation

Project Description
The Natural Language Dialogue team collects linguistic data for use in developing, evaluating, training and extending coverage of our conversational dialogue systems. The annotation project consists of annotating conversation transcripts of human dialogue and/or human-machine dialogue for features relevant to understanding and engaging in dialogue.

Job Description
Annotation of conversation transcripts, using semantic and pragmatic representations or relations that have been developed specifically for our implemented systems. The job is suitable primarily for undergraduate students. Interns who reside locally in the Los Angeles area may be able to continue working at ICT after the summer. Topics of interest may include use of stories and extended narrative in dialogue, human-robot dialogue, cross-cultural dialogue, and conversation-related games.

Preferred Skills

  • Highly proficient in spoken & written English or Spanish (native or near-native competence preferred)
  • Some background in Linguistics or a related field
  • A general feel for language and for working with linguistic material

297 – Research Assistant, Natural Language Dialogue Processing for Virtual Humans

Project Name
Natural Language Dialogue Processing for Virtual Humans

Project Description
ICT is developing artificial intelligence and natural language processing technology to allow virtual humans to engage in spoken, face-to-face interactions with people for a variety of purposes, including training of conversational tasks with virtual role-players. Current research areas include embodied dialogue; socio-cultural & affective dialogue; meta-dialogue and topic switching; casual chat dialogue; dialogue architectures; computational theories of dialogue genres; evaluation of dialogue systems; and dialogue authoring.

Job Description
The student intern will work with the Natural Language research group (including professors, other professional researchers, and students) to advance one or more of the research areas described above. If the student has a particular goal or related work at their home institution, they should briefly describe it in the application letter.

Preferred Skills

  • Some familiarity with dialogue systems or natural language dialogue
  • Either programming ability or experience with statistical methods and data analysis
  • Ability to work independently as well as in a collaborative environment

296 – Research Assistant, Human-Robot Dialogue

Project Name
Human-Robot Dialogue

Project Description
ICT has several projects involving applying natural language dialogue technology developed for use with virtual humans to physical robot platforms. Tasks of interest include remote exploration, joint decision-making, social interaction, and language learning. Robot platforms include humanoid (e.g. NAO) and non-humanoid flying or ground-based robots.

Job Description
This internship involves participating in the development and evaluation of dialogue systems that allow physical robots to interact with people using natural language conversation. The student intern will be involved in one or more of the following activities: 1. porting language technology to a robot platform; 2. designing tasks for human-robot collaborative activities; 3. programming robots for such activities; or 4. using a robot in experimental activities with human subjects.

Preferred Skills

  • Experience with one or more of the following:
      • Using and programming robots
      • Dialogue systems, computational linguistics
      • Multimodal signal processing, machine learning

295 – Research Assistant, Interactive Experience with a Holocaust Survivor

Project Name
New Dimensions in Testimony

Project Description
New Dimensions in Testimony is a joint effort of ICT, the USC Shoah Foundation, and Conscience Display to create an interactive experience that replicates a live conversation with Holocaust survivors. The project will gather the survivors’ answers to hundreds of questions, recording them with advanced filming technologies that enable 3-D projection on current and future displays, and storing them in a computer database. The project will create systems that let individuals ask questions in conversation; the survivor will answer from the recorded testimony as if he or she were in the room, using language understanding technology that allows the computer to find the most appropriate reaction to a user’s utterance.
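
One simple way to picture the language understanding step (choosing the most appropriate recorded answer for a user's utterance) is retrieval by lexical overlap; the questions, clip identifiers, and similarity measure below are invented for illustration and are far simpler than the project's actual technology:

```python
# Hedged sketch: pick the recorded answer whose indexed question best
# overlaps the user's utterance. All names and data are hypothetical.
ANSWERS = {   # question topic -> recorded answer clip id (illustrative)
    "where were you born": "clip_birthplace",
    "how did you survive the war": "clip_survival",
    "what happened to your family": "clip_family",
}

def best_clip(utterance):
    """Return the indexed question with the highest Jaccard word overlap."""
    words = set(utterance.lower().replace("?", "").replace("!", "").split())
    def overlap(question):
        qwords = set(question.split())
        return len(words & qwords) / len(words | qwords)
    return max(ANSWERS, key=overlap)

print(ANSWERS[best_clip("Where were you born?")])
```

A real system would use trained language understanding models rather than raw word overlap, but the retrieval framing (score every candidate answer, return the best) is the same.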

Job Description
The intern will assist with developing, improving, and analyzing the systems. Tasks may include running user tests, analyzing content and interaction results, and improving the systems. The precise tasks will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship.

Preferred Skills

  • Very good spoken and written English (native or near-native competence preferred)
  • General computer operating skills (some programming experience desirable)
  • Experience in one or more of the following: interactive story authoring & design; linguistics, language processing, or a related field; museum-based informal education; Holocaust research and survivor testimonies

289 – Programmer, Captivating Virtual Instruction for Training

Project Name
Captivating Virtual Instruction for Training (CVIT)

Project Description
The objective of this task order is to plan, design, and develop an online learning application centered on the IA program of instruction topics. As part of the overall CVIT project, and outside the scope of this modification, part of the intent of developing this courseware is to validate and verify next-generation e-learning methods and technologies in order to maximize the return on learning and the engagement of course participants. The online course is expected to be approximately 10 hours in length and will cover comprehension, application, and evaluation of IA topics and course material. The application may be used standalone by individual course participants or as part of an instructor-led resident course in the classroom. Three courses have been developed and need support and extensions: the Advanced Situational Awareness, Supervisor Development, and Intelligence Analyst courses.

Job Description
Provide new features and correct bugs on CVIT training applications. Help develop new user interfaces and features. Test and ensure operation of system and responsiveness of deployment.

Preferred Skills

  • Full-stack
  • JSON, Java
  • Web services – AWS
  • UI/UX design and prototyping skills

288 – Programmer, Virtual Acquisition Career Guide

Project Name
Virtual Acquisition Career Guide (VCG)

Project Description
The project is composed of a base phase and an option phase. The base phase will develop a prototype VCG that is loosely integrated with existing USAASC systems. It will be made available to USAASC personnel and selected ALTWF members for testing and demonstration but will not be accessible to the broader acquisition community. Should the government opt for the follow-on effort, the final VCG will be tightly integrated with existing USAASC systems and deployed for widespread use by the ALTWF. For the base phase, the development of an ALTWF VCG consists of two primary technical efforts: 1) the design and development of a virtual guide that interacts with users in the contracting career field, specifically on the topic of certification management; and 2) initial integration with the existing ALTWF Career Acquisition Management Portal (CAMP) and Career Acquisition Personnel and Position Management Information System (CAPPMIS). The VCG will be built on the University of Southern California Institute for Creative Technologies’ (USC ICT) SimCoach technology platform, which combines a web-delivered virtual human with a comprehensive set of web-based tools for content creation. The proposed ALTWF VCG system will be fully persistent, maintaining a record of users’ career information as well as a record of previous interactions with the VCG. For the option phase, the focus will be on hardening the system to: 1) support large numbers of simultaneous users; 2) integrate tightly and robustly with CAPPMIS; 3) expand the dialogue base to handle large variations in user interactions; and 4) deploy the system for widespread use by the acquisition community. In addition to these tasks, an operations and maintenance (O&M) plan will be put into place that details the requirements for sustaining the VCG after the contract has ended.

Job Description
Work on language recognition, FAQs, bug fixes, and general improvements needed on SimCoach.

Preferred Skills

  • Full-stack
  • JSON, Java
  • Web services – AWS

287 – Programmer, Graphics Programmer Internship

Project Name
Terrain Mod 10

Project Description
USC-ICT will research and develop a proof-of-concept capability for the ingestion, processing, storage, rendering and simulation of alternative sources of geo-referenced terrain data in next-generation game platform(s). Example data sources include elevation data, vegetation indices, commercial satellite imagery, social media, point clouds, buildings & surface features, roads, subterranean features, and cultural features. From this source data, 3D models, materials, textures and features are algorithmically classified and imported into one or more game-based simulation environments. The goal is to procedurally convert, store and use this data in a platform that may serve as the foundation for future Army synthetic training.
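
As a minimal picture of one ingestion step (elevation data to renderable terrain), the sketch below converts a tiny heightmap grid into mesh vertices and triangles; the grid values, spacing, and function names are made up, and real pipelines handle far richer source data:

```python
# Illustrative sketch (not the project's pipeline): turning a small
# elevation grid into triangle-mesh data, the basic step in building
# game-engine terrain from geo-referenced heightmaps.
HEIGHTS = [           # 3x3 elevation grid in meters (made-up values)
    [10.0, 12.0, 11.0],
    [13.0, 15.0, 14.0],
    [11.0, 13.0, 12.0],
]
CELL = 30.0           # grid spacing in meters (assumed)

def grid_to_mesh(heights, cell):
    """Return (vertices, triangles) for a regular heightmap grid."""
    rows, cols = len(heights), len(heights[0])
    verts = [(c * cell, heights[r][c], r * cell)      # x, height, z
             for r in range(rows) for c in range(cols)]
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            tris.append((i, i + cols, i + 1))             # upper triangle
            tris.append((i + 1, i + cols, i + cols + 1))  # lower triangle
    return verts, tris

verts, tris = grid_to_mesh(HEIGHTS, CELL)
print(len(verts), len(tris))   # 9 vertices, 8 triangles
```

Game engines such as Unity accept exactly this kind of vertex/triangle data when building terrain meshes procedurally.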

Job Description
Work on Photogrammetry pipeline as assigned. Work on delineating and modeling objects and tagging objects in the environment. Help bring models into game engines and automate conversion and processing pipeline. Create Terrains in game engines from images and point cloud data.

Preferred Skills

  • Photogrammetry and Procedural Model creation
  • Game engines (Unity, UE, or others)
  • Knowledge of flying UAS and autopilot programs (desirable)
  • Point cloud data processing

286 – Research Assistant, Mixed Reality Lab (MxR) Techniques and Technologies

Project Name
Mixed Reality Lab (MxR) Techniques and Technologies

Project Description
The ICT MxR Lab researches and develops the techniques and technologies to advance the state-of-the-art for immersive virtual reality and mixed reality experiences. With the guidance of the principal investigators (Evan Suma Rosenberg and David Krum), students working in the lab will help to design, create, and evaluate prototypes and experiments designed to explore specific research questions in virtual reality and human computer interaction. Specific projects may include research in redirected walking, perception and cognition, avatar-mediated communication, and learning in virtual worlds.

Job Description
Duties will include brainstorming and rapid prototyping of novel techniques, developing virtual environments using the Unity game engine, running user studies, and analyzing experiment data. Some projects may include programming (such as C#, Python, Unity, Arduino), fabrication (3D design and 3D printing), 3D modeling, and audio design.

Preferred Skills

  • Development experience using game engines such as Unity
  • Prior experience with virtual reality technology or 3D/touch interfaces
  • Programming in C++, C#, or similar languages
  • Familiar with experimental design and user study procedures
  • Prior experience with rapid prototyping equipment (optional)

285 – Programmer, Building a Backbone for Multi-Agent Intelligent Tutoring Systems

Project Name
Building a Backbone for Multi-Agent Intelligent Tutoring Systems

Project Description
Over the last few years, a likely solution has emerged: service-oriented design and combining components from multiple Intelligent Tutoring Systems (ITS), which leverage artificial intelligence to speed up learning. The research problems that this work attempts to address are: 1) Multi-Agent ITS Services: Rapid and seamless integration of multiple ITS services that interact like agents in real-time to provide a coherent and effective learning experience. 2) Plug-and-Play Interoperability: Reducing barriers to adding services to ITS down to a level that a class of students could, within a day, add a new agent as a service. 3) Blending Expert Knowledge with Machine Learning: Models that allow experts to explicitly declare knowledge and then learn from data to improve agent performance, without needing to throw out the existing expert knowledge.
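
The plug-and-play goal in point 2 can be pictured with a minimal publish/subscribe sketch; the bus design, topic names, and message fields below are purely illustrative assumptions, not the project's actual service API:

```python
# Minimal sketch of plug-and-play ITS services communicating through a
# shared message bus: a new agent only needs to subscribe to a topic.
class MessageBus:
    def __init__(self):
        self.handlers = {}           # topic -> list of subscriber callbacks

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self.handlers.get(topic, []):
            handler(message)

bus = MessageBus()
log = []

# A hypothetical hint service, added without touching any other component.
def hint_service(msg):
    if msg["correct"] is False:
        log.append(f"hint for {msg['student']}: revisit step {msg['step']}")

bus.subscribe("answer.graded", hint_service)
bus.publish("answer.graded", {"student": "s1", "step": 3, "correct": False})
print(log[0])
```

The point of the sketch is the integration barrier: adding a service is one `subscribe` call against an agreed message schema, which is the property the project aims for at scale.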

Job Description
The goal of this internship will be to program new web services that leverage artificial intelligence, machine learning, and semantic messaging to make it faster to build AI-based services that support learning. The specific tasks will be determined based on the status of the project at the time of the internship, as well as your interests. Possible topics include: (1) building new tutoring system components and models; (2) machine learning models that identify efficient ways to leverage user data to improve the performance of tutoring system components in real time; and (3) running brief usability tests of these components with new users of a minimal-working-example tutoring system.

Preferred Skills

  • Python, JavaScript, Java
  • AI Programming or Statistics
  • Strong interest in artificial intelligence, machine learning, and human and virtual behavior

284 – Programmer, Option J – Assessment and Evaluation Programmer

Project Name
Option J – Assessment and Evaluation

Project Description
PAL3 is a system for delivering engaging and accessible education via mobile devices. It is designed to provide on-the-job training and support lifelong learning and ongoing assessment. The system features a library of curated training resources containing custom content and pre-existing tutoring systems, tutorial videos and web pages. PAL3 helps learners navigate learning resources through: 1) An embodied pedagogical agent that acts as a guide; 2) A persistent learning record to track what students have done, their level of mastery, and what they need to achieve; 3) A library of educational resources that can include customized intelligent tutoring systems as well as traditional educational materials such as webpages and videos; 4) A recommendation system that suggests library resources for a student based on their learning record; and 5) Game-like mechanisms that create engagement (such as leaderboards and new capabilities that can be unlocked through persistent usage).

Job Description
The goal of the internship will be to expand the repertoire of the system to further enhance learning and engagement. The specific tasks will be determined based on the status of the project at the time of the internship, as well as your interests. Possible topics include: (1) models driving the dialog systems for PAL3 to support goal-setting, teamwork, or fun/rapport-building; (2) modifying the intelligent tutoring system and how it supports the learner; and (3) statistical analysis and/or data mining to identify patterns of interactions between human subjects and the intelligent tutoring system.

Preferred Skills

  • C#, Java, Python, R
  • Dialog Systems, Basic AI Programming, or Statistics
  • Strong interest in intelligent agents, human and virtual behavior, and social cognition

283 – Programmer, Engage: Promoting Engagement in Virtual Learning Environments

Project Name
Promoting Engagement in Virtual Learning Environments

Project Description
The Engage project at ICT seeks to investigate motivation and engagement in game-based, virtual learning experiences. Specifically, the project focuses on how interactions with virtual humans can be made more effective and compelling for learners. If you have ever interacted with characters in video games or web-chat programs, you probably know there is much room for improvement! In this stage of the project, we will be analyzing data and finalizing service-oriented modules for optimizing interactions with agents. This analysis will include investigating the role of emotions and feedback from the system and its impact on engagement experienced by learners.

Job Description
The goal of the internship will be to expand the repertoire of the system to further enhance learning and engagement. The specific tasks will be determined based on the status of the project at the time of the internship, as well as your interests. Possible topics include: (1) modifying the intelligent tutoring system and how it supports the learner; (2) models driving the virtual human utterances and behaviors; and (3) emotion coding, statistical analysis, and/or data mining to identify patterns of interactions between human subjects and the intelligent tutoring system.

Preferred Skills

  • C#, Java, Python
  • Basic AI Programming or Statistics
  • Strong interest in human and virtual behavior and cognition

282 – Programmer, Emerging Concepts in Virtual Environments for Training Programmer/Developer

Project Name
Emerging Concepts in Virtual Environments for Training

Project Description
Redefine the role of virtual reality in training by developing new immersive interaction and presentation techniques, and by studying collaboration and learning in complex virtual environments, with a focus on narrative scenario development that explores the developing ‘language of VR’ for content creation.

Job Description
Programmer will join a team of student developers, artists and designers working on an immersive scenario.

Preferred Skills

  • Development experience using Unity game engine
  • Prior experience with virtual reality technology (e.g. head-mounted displays, motion tracking, virtual humans, etc.)
  • Prior experience with rapid prototyping immersive experiences

281 – Programmer, Lightweight and Deployable 3D Human Performance Capture for Automultiscopic Virtual Humans

Project Name
Lightweight and Deployable 3D Human Performance Capture for Automultiscopic Virtual Humans

Project Description
The lab is developing a lightweight 3D human performance capture method that uses very few sensors to obtain a highly detailed, complete, watertight, and textured model of a subject (a clothed human with props) that can be rendered properly from any angle in an immersive setting. Our recordings are performed in unconstrained environments, and the system should be easily deployable. While we assume well-calibrated high-resolution cameras (e.g., GoPros), synchronized video streams (e.g., Raspberry Pi-based controls), and a well-lit environment, any existing passive multi-view stereo approach based on sparse cameras would significantly underperform dense ones due to challenging scene textures, lighting conditions, and backgrounds. Moreover, much less coverage of the body is possible when using small numbers of cameras.

Job Description
We propose a machine learning approach to this challenge, posing 3D surface capture of human performances as an inference problem rather than a classic multi-view stereo task. The intern will work with researchers to demonstrate that models trained on massive amounts of 3D data can infer visually compelling and realistic geometry and texture in unseen regions. Our goal is to capture clothed subjects (uniformed soldiers, civilians, props and equipment, etc.), which entails an immense amount of appearance variation as well as highly intricate garment folds.

Preferred Skills

  • C++, OpenGL, GPU programming
  • Experience with computer vision techniques: multi-camera stereo, optical flow, facial feature detection, bilinear morphable models, texture synthesis, Markov random fields
  • Operating System: Windows

280 – Programmer, Head-Mounted Facial Capture and Rendering for Augmented Reality

Project Name
Head-Mounted Facial Capture and Rendering for Augmented Reality

Project Description
The lab is developing techniques for enabling natural and expressive face-to-face communication between subjects in an augmented reality (AR) environment by removing the barriers introduced by immersive head-mounted displays (HMDs). The degree to which users in AR environments can expressively interact with each other is hindered by HMDs, which occlude a large portion of the face. We propose a method to overlay a virtual face that replicates the subject’s appearance and expressions using facial performance capture. While state-of-the-art real-time face tracking technologies fail in the presence of occlusions, recent efforts by USC CS and the USC ICT Graphics Lab have resulted in algorithms and systems that allow wearers of virtual reality HMDs to transfer their expressions to an avatar using sensors mounted on the HMDs.

Job Description
Intern will work with researchers to develop a prototype AR HMD device based on Microsoft’s HoloLens and develop new algorithms for light-weight facial performance capture, as well as new techniques for appearance synthesis. Intern will support research efforts to fill in occluded facial regions using a digital face. This will also require capturing and rendering the dynamic lighting conditions on the face.

Preferred Skills

  • C++, OpenGL, GPU programming
  • Experience with computer vision techniques: multi-camera stereo, optical flow, facial feature detection, bilinear morphable models, texture synthesis, Markov random fields
  • Operating System: Windows

279 – Programmer, Authoring Novel Facial Performances for Digital Characters

Project Name
Authoring Novel Facial Performances for Digital Characters

Project Description
Authoring realistic digital characters for interactive applications is becoming a practical possibility, leveraging numerous technologies developed at ICT. For example, Digital Ira, developed in collaboration with industry, is a photo-real, real-time digital character driven using facial performance capture. An unanswered question is how to author novel performances for an existing character without requiring additional performances from the original actor. Also, can we further automate the authoring of difficult areas around the eyelids and lip contours, which presently require artistic attention? We identify three avenues of research to address these questions: 1) authoring novel facial animations for digital characters; 2) automatic synthesis of the character’s appearance in accordance with novel performances (including animated reflectance maps and geometric details); and 3) improved automation in authoring eye movement, eyelid animation, and lip contour animation, especially during speech.

Job Description
Candidate will focus on how to author novel performances for an existing character, without requiring additional performances from the original actor. Also, research will focus on how to further automate the authoring of difficult areas around the eyelids and lip contours, which presently require artistic attention. Additional focus on authoring novel facial animations for digital characters, automatic synthesis of the character’s appearance, in accordance with novel performances and towards improved automation in authoring eye movement, eyelid animation, and lip contour animation.

Preferred Skills

  • C++, OpenGL, GPU programming
  • Experience with computer vision techniques: multi-camera stereo, optical flow, facial feature detection, bilinear morphable models, texture synthesis, Markov random fields
  • Operating System: Windows

267 – Research Assistant, Real-time Behavior Interpretation

Project Name
Real-time Behavior Interpretation

Project Description
The Real-time Behavior Interpretation (RBI) project is developing technologies for the automated interpretation of time-series data using a form of automated reasoning, called logical abduction, in a way that integrates closely with probability theory. We are especially interested in the interpretation of movies in the style of the famous Heider-Simmel film, depicting the shenanigans of two triangles and a circle around a box with a door.
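
As a toy picture of abduction over Heider-Simmel-style observations, the sketch below searches for the lowest-cost set of hypotheses whose rules cover the observed movements; the rules, costs, and predicate names are invented for illustration, and the numeric costs stand in only crudely for the probabilistic integration the project describes:

```python
# Hedged sketch of cost-based logical abduction (not the project's reasoner):
# pick the cheapest hypothesis set that explains all observations.
from itertools import combinations

RULES = {          # hypothesis -> observations it explains (illustrative)
    "chase": {"A_moves_toward_B", "B_moves_away"},
    "flee":  {"B_moves_away"},
    "greet": {"A_moves_toward_B"},
}
COST = {"chase": 1.5, "flee": 1.0, "greet": 1.0}

def abduce(observations):
    """Return the lowest-cost hypothesis set whose rules cover observations."""
    best, best_cost = None, float("inf")
    hyps = list(RULES)
    for r in range(1, len(hyps) + 1):
        for combo in combinations(hyps, r):
            covered = set().union(*(RULES[h] for h in combo))
            cost = sum(COST[h] for h in combo)
            if observations <= covered and cost < best_cost:
                best, best_cost = set(combo), cost
    return best, best_cost

explanation, cost = abduce({"A_moves_toward_B", "B_moves_away"})
print(explanation, cost)
```

Note how a single "chase" hypothesis beats the two-hypothesis alternative even at a higher unit cost, which is the flavor of preference that weighted abduction formalizes properly.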

Job Description
The RBI project team is seeking a summer intern with either a deep love of both first-order logic and probability theory, or a strong familiarity with contemporary deep-learning approaches to event segmentation and classification (preferably both). The intern is expected to contribute to technical research developing systems that generate high-level narrative interpretations of low-level observable behavior.

Preferred Skills

  • Practical knowledge of Theano/Keras or TensorFlow
  • Automated deduction and theorem proving
  • Propositional and first-order logic

266 – Research Assistant, Data-driven Interactive Narrative Engine

Project Name
Data-driven Interactive Narrative Engine

Project Description
The Data-driven Interactive Narrative Engine (DINE) project is creating a new platform for interactive fiction that allows for free-text input for textual scenarios and free-speech input for audio/video scenarios.

Job Description
The DINE project team is seeking a summer intern with skills and interests in human-computer interaction and user-interface evaluations to help us design and evaluate different interaction approaches for audio-based interactive storytelling.

Preferred Skills

  • Familiarity with experimental design and experimental hypothesis testing
  • Python programming, Unix/Linux shell scripting, database SQL queries
  • HTML/JavaScript programming, and data analysis in MATLAB, R, or pandas

265 – Research Assistant, Multimodal Representation Learning of Human Behaviors/Machine Learning

Project Name
Multimodal Representation Learning of Human Behaviors/Machine Learning

Project Description
Machine learning in general relies heavily on good representations, or features, of data that yield better discriminative capability in classification and regression experiments. To derive efficient representations of data, researchers have adopted two main strategies: (1) manually crafted feature extractors designed for a particular task and (2) algorithms that derive representations automatically from the data itself. The latter approach is called representation learning (RL) and has received growing attention because of the increasing availability of both data and computational resources. In fact, RL has been responsible for large performance boosts in a number of machine learning applications, including speech recognition and facial expression analysis. At ICT we are particularly interested in advancing the state of the art in deep neural networks and machine learning approaches that allow us to learn multimodal representations of human behavior. We will use these representations to assess an individual’s well-being and affective state.
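
As a toy illustration of the second strategy (learning representations from data rather than hand-crafting features), the sketch below trains a one-unit linear autoencoder with plain gradient descent; the data, dimensions, and learning rate are all made up for the example and have nothing to do with the project's actual models:

```python
# Hedged sketch: a 2-D -> 1-D -> 2-D linear autoencoder learns a compact
# representation of data that lies near a line, using only the data itself.
import random

random.seed(0)
# Synthetic 2-D observations near the line x2 = 2 * x1 (made-up data).
data = [(x, 2 * x + random.uniform(-0.2, 0.2))
        for x in (-2.0, -1.0, 0.5, 1.0, 2.0, 3.0)]

w = [0.1, 0.1]  # encoder weights: 2-D input -> 1-D code
v = [0.1, 0.1]  # decoder weights: 1-D code -> 2-D reconstruction

def loss():
    """Mean squared reconstruction error over the dataset."""
    total = 0.0
    for x1, x2 in data:
        h = w[0] * x1 + w[1] * x2
        total += (v[0] * h - x1) ** 2 + (v[1] * h - x2) ** 2
    return total / len(data)

initial = loss()
lr = 0.02
for _ in range(3000):  # full-batch gradient descent
    gw, gv = [0.0, 0.0], [0.0, 0.0]
    n = len(data)
    for x1, x2 in data:
        h = w[0] * x1 + w[1] * x2
        e1, e2 = v[0] * h - x1, v[1] * h - x2
        gv[0] += 2 * e1 * h / n           # dL/dv
        gv[1] += 2 * e2 * h / n
        common = 2 * (e1 * v[0] + e2 * v[1]) / n
        gw[0] += common * x1              # dL/dw
        gw[1] += common * x2
    for i in range(2):
        w[i] -= lr * gw[i]
        v[i] -= lr * gv[i]

print(initial, loss())  # reconstruction error drops sharply after training
```

Deep multimodal models of the kind the project builds replace this single linear unit with many nonlinear layers over several input modalities, but the principle (optimize a representation end-to-end against a data-driven objective) is the same.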

Job Description
The ideal candidate has experience with machine learning approaches and is comfortable programming in Python. The candidate will participate in machine learning experiments that aim to better predict a person’s psychological well-being, e.g. depression recognition. We have access to the largest dataset of depression screening interviews and will leverage big data resources to train successful models. Experience with deep learning toolboxes such as TensorFlow, Theano, or Keras is a big plus.

Preferred Skills

  • Python
  • Machine Learning
  • Linux
  • Deep Learning Toolboxes

264 – Research Assistant, Cognitive Architecture Research Assistant

Project Name
The Sigma Cognitive Architecture

Project Description
This project is developing a new cognitive architecture; i.e., a computational hypothesis about the fixed structures underlying a mind, whether natural or artificial. Sigma is built in Lisp and is based on the elegant but powerful formalism of graphical models, which enable combining both statistical/neural and symbolic aspects. We are working on a broad variety of topics, including (deep) learning and memory, problem solving and decision making, perception and imagery, speech and language, and social and affective processing. We are also developing adaptive virtual humans – graphically embodied humanoids that can learn from their experience.

Job Description
Looking for a student interested in developing, applying, analyzing and/or evaluating new intelligent capabilities in an architectural framework.

Preferred Skills

  • Programming (Lisp preferred, but can be learned after arrival)
  • Graphical models (experience preferred, but ability to learn quickly is essential)
  • Cognitive architectures (experience preferred, but interest is essential)

263 – Programmer, Integrated Virtual Humans Programmer

Project Name
Integrated Virtual Humans

Project Description
The Integrated Virtual Humans project (IVH) seeks to create a wide range of virtual human systems by combining the various research efforts within USC and ICT into a general Virtual Human Architecture. These virtual humans range from relatively simple, statistics-based question-and-answer characters to advanced cognitive agents that are able to reason about themselves and the world they inhabit. Our virtual humans can engage with real humans and each other both verbally and nonverbally, i.e., they are able to hear you, see you, use body language, talk to you, and think about whether or not they like you. The Virtual Humans research at ICT is widely considered one of the most advanced in its field and brings together a variety of research areas, including natural language processing, nonverbal behavior, vision perception and understanding, task modeling, emotion modeling, information retrieval, knowledge representation, and speech recognition.

Job Description
IVH seeks an enthusiastic, self-motivated, programmer to help further advance and iterate on the Virtual Human Toolkit. Additionally, the intern selected will research and develop potential tools to be used in the creation of virtual humans. Working within IVH requires a solid understanding of general software engineering principles and distributed architectures. The work touches on a variety of Computer Science areas, including Artificial Intelligence and Human-Computer Interaction. Given the scope of the Virtual Humans Architecture, the ability to quickly learn how to use existing components and develop new ones is essential.

Preferred Skills

  • Fluent in C++, C#, or Java
  • Fluent in one or more scripting languages, such as Python, TCL, LUA, or PHP
  • Excellent general computer skills
  • Background in Artificial Intelligence a plus

262 – Technical Artist/Graphics, Technical Artist

Project Name
Art Group

Project Description
The Art Group (AG) facilitates ICT projects in reaching their full potential by collaboratively defining and meeting art and pipeline needs. Starting from clients’ and researchers’ core concepts and needs, we design and create all aspects of immersive experiences: UI, scripts, storyboards, audio, all visual assets, and the tools and pipelines needed to make them.

Job Description
AG seeks an enthusiastic, self-motivated, detail-oriented technical artist to focus on the artistic and design verification process and/or asset creation. The intern selected will work closely with our tech-art and QA teams to identify and create the tools and assets needed for projects and for more efficient QA procedures.

Preferred Skills

  • Excellent general computer skills
  • Excellent Adobe Suite skills
  • Ability to quickly learn new systems
  • Familiarity with Unity a plus

261 – Research Assistant, Digital Character Generation and Control

Project Name
Human Modeling, Simulation and Control

Project Description
Digital characters are an important part of entertainment, simulations, and digital social experiences. Characters can be designed to emulate or imitate human-like (and non-human-like) behavior. However, humans are very complicated entities, and in order to create a convincing virtual human it is necessary to model various elements, such as human-like appearance, human-like behaviors, and human-like interactions. 3D characters can fail to be convincing representations because of improper appearance, improper behavior, or improper reactions. The goal of this internship is to advance the state of the art in character simulation by improving or adding aspects of a digital character that make it a more convincing representation of a real person.

Job Description
Research, develop, and integrate methods that improve the fidelity, interactivity, or realism of virtual characters. Design or implement algorithms from research papers and integrate them into the animation/simulation system (SmartBody).

Preferred Skills

  • C++
  • Computer graphics and animation knowledge
  • Research in character/animation/simulation/human modeling