260 – Research Assistant, Affective Computing (psychophysiology)

Project Name
Affective Computing (psychophysiology)

Project Description
USC’s Computational Emotion group is a world leader in the field of Affective Computing. Our research examines the role of emotion in human cognition and human-computer interaction and applies computational techniques to the modeling, recognition, and synthesis of emotional phenomena. We are seeking a research assistant to help advance an effort to use peripheral psychophysiological reactions to inform models of cooperative/competitive behavior in game-theoretic settings.

Job Description
The research assistant will help analyze an existing dataset of multimodal signals (facial expressions, decisions, and physiological signals) and help create predictive models of whether people cooperated or defected in an economic game. Primary responsibilities will involve analyzing the physiological signals, which include impedance cardiography and electrodermal response data. The ideal candidate would be able to take a lead in writing and publishing their findings.

Preferred Skills

  • A PhD student working in the field of affective computing
  • Interest and knowledge in psychophysiology
  • Experience in signal processing, especially of physiological signals
  • Experience with machine learning
  • Knowledge of game theory (optional)

259 – Research Assistant, Affective Computing (Twitter sentiment)

Project Name
Affective Computing (Twitter sentiment)

Project Description
USC’s Computational Emotion group is a world leader in the field of Affective Computing. Our research examines the role of emotion in human cognition and human-computer interaction and applies computational techniques to the modeling, recognition, and synthesis of emotional phenomena. We are seeking a research assistant to help advance an effort to use emotional reactions to real-world events (as described on Twitter or in blogs) to evaluate predictive models of emotion.

Job Description
The research assistant will take a leading role in extending our prior work on using Twitter sentiment from the World Cup to help inform predictive models of human emotion. The assistant would work with the emotion group leader (Jonathan Gratch) to extend research described in the following paper: “GOAALLL!: Using sentiment in the World Cup to explore theories of emotion” at ACII 2015 (can be found on Google Scholar). The ideal candidate would be able to take a lead in writing and publishing their findings.

Preferred Skills

  • A PhD student working in the field of affective computing
  • Interest in text analysis
  • Interest in emotion theories (especially appraisal theory)
  • Knowledge of Twitter sentiment analysis or affective text analysis
  • Experience with machine learning
  • Comfortable with SQL
  • Experience scraping data from websites

258 – Research Assistant, MxR Lab

Project Name
Mixed Reality Research and Development

Project Description
The ICT Mixed Reality Lab researches and develops the techniques and technologies to improve the state-of-the-art for immersive virtual reality and mixed reality experiences. With the guidance of the principal investigators (David Krum, Evan Suma, and Mark Bolas), students working in the lab will help to design, build, and test prototypes and experiments designed to explore specific research questions in virtual reality and human computer interaction. Specific projects include research in locomotion (redirected walking), perception, avatar-mediated communication, and learning in virtual worlds.

Job Description
Duties will include brainstorming and rapid prototyping of novel techniques, developing virtual environments using the Unity game engine, running user studies, and analyzing experiment data. Some projects may include programming (such as C#, Python, Unity, Arduino), fabrication (3D design and 3D printing), and 3D modeling.

Preferred Skills

  • Development experience using game engines such as Unity
  • Prior experience with virtual reality technology or 3D/touch interfaces (e.g. head-mounted displays, motion tracking, virtual humans, etc.)
  • Prior experience with rapid prototyping equipment (e.g. 3D printers, laser cutters)
  • Programming in C++, C#, or similar languages
  • Familiar with experimental design and user study procedures

257 – Research Assistant, Interactive Experience with a Holocaust Survivor

Project Name
New Dimensions in Testimony

Project Description
New Dimensions in Testimony is a joint effort of ICT, the USC Shoah Foundation, and Conscience Display, intended to create an interactive experience that replicates a live conversation with Holocaust survivors. The project will gather the survivors’ answers to hundreds of questions, recording them using advanced filming technologies which enable 3-D projection using current and future displays, and storing them in a computer database. The project will create systems that allow individuals to ask questions in conversation and receive answers drawn from the testimony, as if the survivor were in the room, using language understanding technology that allows the computer to find the most appropriate reaction to a user’s utterance.

Job Description
The intern will assist with developing, improving and analyzing the systems. Tasks may include developing questions; running user tests; transcribing user questions; paraphrasing user questions; transcribing survivor statements; linking questions with survivor statements; analysis of authored content and interaction results. The precise tasks will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship.

Preferred Skills

  • Very good spoken and written English (native or near-native competence preferred).
  • General computer operating skills (some programming experience desirable).
  • Experience in one or more of the following: interactive story authoring and design; linguistics, language processing or a related field; museum-based informal education; Holocaust research and survivor testimonies.

256 – Research Assistant, Natural Language Dialogue Processing for Virtual Humans

Project Name
Natural Language Dialogue Processing for Virtual Humans

Project Description
ICT is developing artificial intelligence and natural language processing technology to allow virtual humans to engage in spoken, face-to-face interactions with people for a variety of purposes, including training of conversational tasks with virtual role-players. Current research areas include embodied dialogue, socio-cultural and affective dialogue, meta-dialogue and topic switching, casual chat dialogue, dialogue architectures, computational theories of dialogue genres, evaluation of dialogue systems, and dialogue authoring.

Job Description
The student intern will work with the Natural language research group (including Professors, other professional researchers, and students) to advance one or more of the research areas described above. If the student has a particular goal or related work at their home institution they should briefly describe this in the application letter.

Preferred Skills

  • Some familiarity with dialogue systems or natural language dialogue
  • Either programming ability or experience with statistical methods and data analysis
  • Ability to work independently as well as in a collaborative environment

255 – Research Assistant, Story-swapping for Virtual Humans

Project Name
Story-swapping for Virtual Humans

Project Description
ICT is developing artificial intelligence and natural language processing technology to allow virtual humans to engage in spoken, face-to-face interactions with people for a variety of purposes, including training of conversational tasks with virtual role-players. ICT has also developed authoring environments to allow an author to create the knowledge and language that a character will talk about. A current focus is on developing the capability for virtual humans to tell relevant and interesting stories in conversation.

Job Description
The student intern will work with the Natural language research group to learn the virtual human dialogue authoring tools. The student will then work on creating or adapting stories for the virtual human to tell and policies for when and how to tell the stories in a conversation with one or more human participants, with the objective of building rapport and long-term interest in continuing to interact with the virtual human.

Preferred Skills

  • Very good spoken and written English (native or near-native competence preferred).
  • Experience with creative writing of stories and/or game design (ideally a short writing sample can be sent with the application or made available upon request).
  • Ability to use computer interfaces.
  • Ability to work in a collaborative environment as well as work independently.
  • Interest in Artificial Intelligence and Natural Language Processing a plus.

254 – Research Assistant, Natural Language Annotation

Project Name
Natural Language Annotation

Project Description
The Natural Language Dialogue team collects linguistic data for use in developing, evaluating, training and extending coverage of our conversational dialogue systems. The annotation project consists of transcribing speech and annotating conversation transcripts of human dialogue and/or human-machine dialogue.

Job Description
Transcription of English speech and annotation of conversation transcripts, using semantic representations that have been developed specifically for our implemented systems. The job is suitable primarily for undergraduate students. Interns who reside locally in the Los Angeles area may be able to continue working at ICT after the summer. Topics of interest may include use of stories and extended narrative in dialogue, human-robot dialogue, multimodal grounding in dialogue, cross-cultural dialogue, and conversation-related games.

Preferred Skills

  • Very good spoken and written English or Spanish (native or near-native competence preferred).
  • Some background in Linguistics or a related field.
  • General feel for language and working with linguistic material.

253 – Research Assistant, Human-Robot Dialogue

Project Name
Human-Robot Dialogue

Project Description
ICT has several projects involving applying natural language dialogue technology developed for use with virtual humans to physical robot platforms. Tasks of interest include remote exploration, joint decision-making, social interaction, and language learning. Robot platforms include humanoid (e.g. NAO) and non-humanoid flying or ground-based robots.

Job Description
This internship involves participating in the development and evaluation of dialogue systems that allow physical robots to interact with people using natural language conversation. The student intern will be involved in one or more of the following activities: porting language technology to a robot platform, designing tasks for human-robot collaborative activities, programming the robot for such activities, and using the robot in experimental activities with human subjects.

Preferred Skills

Experience with one or more of the following:

  • Using robots
  • Dialogue systems
  • Computational linguistics
  • Multimodal signal processing
  • Machine learning

252 – Programmer, Authoring Novel Facial Performance

Project Name
Authoring Novel Facial Performance

Project Description
Authoring realistic digital characters for interactive applications is becoming a practical possibility, leveraging numerous technologies developed at ICT. For example, Digital Ira, developed in collaboration with industry, is a photo-real, real-time digital character driven using facial performance capture. An unanswered question is how to author novel performances for an existing character without requiring additional performances from the original actor. Also, can we further automate the authoring of difficult areas around the eyelids and lip contours, which presently require artistic attention? We identify three avenues of research to address these questions: 1) research towards authoring novel facial animations for digital characters; 2) research towards automatic synthesis of the character’s appearance in accordance with novel performances (including animated reflectance maps and geometric details); 3) research towards improved automation in authoring eye movement, eyelid animation, and lip contour animation, especially during speech.

Job Description
Candidate will focus on how to author novel performances for an existing character, without requiring additional performances from the original actor. Also, research will focus on how to further automate the authoring of difficult areas around the eyelids and lip contours, which presently require artistic attention. Additional focus on authoring novel facial animations for digital characters, automatic synthesis of the character’s appearance, in accordance with novel performances and towards improved automation in authoring eye movement, eyelid animation, and lip contour animation.

Preferred Skills

  • C++, OpenGL, GPU programming
  • Experience with computer vision techniques: multi-camera stereo, optical flow, facial feature detection, bilinear morphable models, texture synthesis, Markov random fields
  • Operating System: Windows

251 – Research Assistant, Captivating Virtual Instruction for Training

Project Name
Captivating Virtual Instruction for Training

Project Description
Captivating Virtual Instruction for Training (CVIT) is a 3-year (GFY14, GFY15, GFY16) research project that aims to shape the future of distributed learning (dL) in the Army and elsewhere through the delivery of course material that is not only informative and educational but also engaging and stimulating for participants.

The outputs of CVIT Year 1 (GFY14) were a generalized technique-to-technology mapping strategy for course-ware developers, as well as a prototype set of dL modules that employ this strategy framed around the Army’s Advanced Situational Awareness — Basic (ASA-B) training program administered by the Maneuver Center of Excellence (MCOE) at Ft. Benning, Georgia.

The focus in Year 2 (GFY15) was to extend the mapping strategy and assess/evaluate the Year 1 prototype with stakeholders and organizations including the Army Research Institute (ARI) and MCOE, and to potentially transition the software to an existing Army program of record (POR). Year 3 (GFY16) will center around developing, deploying, and transitioning a culminating dL experience for ASA-B or a similar course as identified by MCOE.

In the summer of 2016, we are continuing research comparing the efficiency of delivery techniques and are developing additional training courses related to supervisory development and military intelligence.

Job Description
The intern will assist in the evaluation of prototypes and development of new learning content.

For evaluation of the prototypes, the intern will be asked to assist in quality control and debugging of prototypes as well as potentially administering the prototypes to test subjects for feedback and evaluation or helping to analyze experimental data for results.

The intern will also have opportunities to help create and tune the training content for the new courses we will be developing related to Supervision and Military Intelligence.

Preferred Skills

  • Knowledge of the Army, situational awareness, or military intelligence would be very helpful but is not essential
  • Detail-oriented; able to run online training courses and fill in debugging reports for improvements
  • Skills with Photoshop or audio/video editing software could be helpful. Any background in programming and web applications would be helpful but not required.

250 – Programmer, Terrain 2025 – World in a Box Mod 4

Project Name
Terrain 2025 – World in a Box Mod 4

Project Description
The focus of the proposed Terrain 2025 effort is to expand existing and identify new approaches for representing terrain in future DoD modeling and simulation environments, and to improve the Army’s understanding and utilization of land and space capabilities (Army Space Training Strategy, 2013). Along with investigating the more traditional visual and physical aspects of the environment (land features, height maps, textures), we must also review the non-physical terrain, which has greatly increased in prevalence: cyberspace, social features, demographics, and material (photos/video) gathered by users in the field and by online communities. There is an expectation that these will feature prominently in future M&S capabilities. The deliverables for this project include a survey report and a series of proof-of-concept demonstrations on what the future of M&S terrain might look like over the next 10-15 years, not only for simulation but also for comprehensive training and education. It is hoped this effort will help drive future requirements for the field of cross-platform, cross-domain environments.

Job Description
The goal of the internship will be to improve the processing pipeline for turning photogrammetric images into game environments and to test the game engine’s ability to provide terrains using the One World Terrain format. The specific tasks will be determined based on the status of the project at the time of the internship as well as your interests. Possible topics include: (1) pipeline refinements for photogrammetry into a game engine; (2) programming in game engines to test environments and complete their build-out; (3) feedback and contributions to the One World Terrain specification; (4) testing prototypes; and (5) helping to write papers on the results.

Preferred Skills

  • Unity coding and creation of environments; knowledge of terrains in other game engines.
  • Basic understanding of Photogrammetry.
  • Strong interest in geospatial mapping, use of simulations for training and education

249 – Programmer, Engage: Promoting Engagement in Virtual Learning Environments

Project Name
Engage: Promoting Engagement in Virtual Learning Environments

Project Description
The Engage project at ICT seeks to investigate motivation and engagement in game-based, virtual learning experiences. Specifically, the project focuses on how interactions with virtual humans can be made more effective and compelling for learners. If you have ever interacted with characters in video games or web-chat programs, you probably know there is much room for improvement! In the initial stages of the project, we will be measuring physiological and behavioral reactions of learners who are interacting with a virtual human with the goal being to identify ideal variability in its behavior. Further, we will also investigate the role of feedback from the system and its impact on engagement experienced by learners.

Job Description
The goal of the internship will be to expand the repertoire of the system to further enhance learning and engagement. The specific tasks will be determined based on the status of the project at the time of the internship as well as your interests. Possible topics include: (1) modifying the intelligent tutoring system and how it supports the learner; (2) the models driving the virtual human’s utterances and behaviors; and (3) emotion coding, statistical analysis, and/or data mining to identify patterns of interaction between human subjects and the intelligent tutoring system.

Preferred Skills

  • C#, Java, C++
  • Basic AI Programming or Statistics
  • Strong interest in human and virtual behavior and cognition

248 – Programmer, Graphics Lab – Computational Cameras Programmer

Project Name
Computational Cameras

Project Description
To perform image-based 3D reconstruction, relighting, and rendering, the lab has been relying on commercially available digital cameras. Unfortunately, the lack of features required for computer graphics work in available commercial cameras is becoming more of a hindrance to increasing the quality of the ICT Graphics Lab’s research and production goals. The lab’s computational camera project seeks to research novel video processing and compression algorithms dedicated to the data collection requirements of computer graphics research and production, especially for time-multiplexed illumination imaging purposes.

Job Description
The intern will participate in prototype development involving electronics, firmware, and software design in the field of digital photography and computer graphics. Research is focused on intelligent camera “backpack” solutions featuring on-board image processing, high-bandwidth data transfer, and large-capacity storage; a heterogeneous camera rig that meets the versatile needs of revolutionary data capture demands; and a custom imaging platform utilizing off-the-shelf components.

Preferred Skills

  • Programming with C/C++, Python and Matlab
  • Knowledge of computer communication protocols such as USB 2.0/3.0, HDMI, Camera Link, etc.
  • Experience in hardware programming (Verilog/VHDL, microcontrollers) is a plus
  • Soldering skills a plus

247 – Programmer, Graphics Lab – Conversations with Heroes & History

Project Name
Conversations with Heroes & History

Project Description
Conversations with Heroes & History, a.k.a. New Dimensions in Testimony, is an initiative to record and display video testimonies in a way that will continue the dialogue between Holocaust survivors and learners far into the future. The project uses ICT’s Light Stage technology to record interviews with over 50 video cameras for 3D reconstruction and playback, as well as natural language technology, which will allow people to engage with the testimonies conversationally by asking questions that trigger relevant, spoken responses. ICT is also pioneering display technology that will enable the testimonies to be projected in 3D on automultiscopic and head-mounted displays.

Job Description
The candidate will work on projects related to 3D geometry reconstruction of video testimonies. This project is an interesting mix of basic research and meaningful real-world application. The intern will gain experience with new 3D capture and display hardware. Candidates should have experience with computer graphics and computer vision.

Preferred Skills

  • C/C++
  • GPU programming
  • Computer vision techniques, in particular: multi-camera stereo, optical flow, and view interpolation
  • Video compression (MPEG)
  • Experience with OpenGL

246 – Research Assistant, Reinforcement Learning of Dialogue Policies

Project Name
Reinforcement Learning of Dialogue Policies

Project Description
A dialogue policy (or dialogue strategy) decides which actions a dialogue system should perform given the user’s input and the dialogue history. Dialogue policies can be either hand-crafted or learned from data and simulations, e.g., using reinforcement learning, supervised learning, etc. The goal of this project is to investigate new approaches to reinforcement learning of dialogue policies and user simulation models for multi-party dialogue.
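
As a toy illustration of the policy-learning idea described above, the sketch below trains a dialogue policy with tabular Q-learning against a hand-coded user simulator. The states, actions, reward scheme, and function names are invented for the example; real systems use far richer state representations and learned simulators.

```python
import random

# States, actions, and rewards are illustrative only.
STATES = ["need_info", "info_given", "confirmed"]
ACTIONS = ["ask", "confirm", "finish"]

def simulate_turn(state, action):
    """Hypothetical user simulator: returns (next_state, reward, done)."""
    if state == "need_info" and action == "ask":
        return "info_given", 0.0, False
    if state == "info_given" and action == "confirm":
        return "confirmed", 0.0, False
    if state == "confirmed" and action == "finish":
        return "confirmed", 1.0, True   # dialogue completed successfully
    return state, -0.1, False           # small penalty for unhelpful actions

def train_policy(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state, done, turns = "need_info", False, 0
        while not done and turns < 10:
            if rng.random() < eps:                       # explore
                action = rng.choice(ACTIONS)
            else:                                        # exploit
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = simulate_turn(state, action)
            target = reward if done else reward + gamma * max(
                q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state, turns = nxt, turns + 1
    # Extract the greedy policy: best action in each state
    return {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}

print(train_policy())  # learns ask -> confirm -> finish
```

The same loop structure carries over when the simulator is itself a learned user model, which is the multi-party setting this project investigates.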

Job Description
The intern will work with the ICT natural language dialogue group to learn dialogue policies and user simulation models. The intern will also evaluate dialogue policies both in simulation and with human users.

Preferred Skills

  • Familiarity with reinforcement learning
  • Familiarity with dialogue systems and natural language dialogue
  • Good programming skills (preferably C++, Java, or Python)

245 – Programmer, Authoring and Accessing Virtual Humans on Mobile Devices

Project Name
Authoring and Accessing Virtual Humans on Mobile Devices

Project Description
This project is focused on extending and improving our framework to deliver virtual humans over the web and to mobile devices. In particular you will have a chance to work on the design and evaluation of authoring interface prototypes and/or techniques to deliver to and produce video for mobile devices.

Job Description
In this project you will use web technologies to develop prototypes of authoring tools to create novel dialog behaviors, evaluate them with user studies, and possibly work on a mobile app to integrate real-time virtual human rendering in our framework.

Preferred Skills

  • Good object-oriented design skills
  • Excellent programming skills (any language)
  • Experience with JavaScript/JSON
  • Experience with Unity3D

244 – Research Assistant, Advanced Natural Language Understanding (NLU) for Virtual Humans

Project Name
Advanced Natural Language Understanding (NLU) for Virtual Humans

Project Description
This project focuses on improving natural language understanding for dialogue systems. In particular, we will focus on natural language understanding techniques to access structured sources of information and to better handle out-of-domain queries.

Job Description
In this project you will work with abduction-based inference, knowledge representation, and parsers to understand the current limitations of applying abduction-based inference to NLU for dialogue systems. You will also familiarize yourself with structured sources of information (such as Wikidata) and possibly work on improving the current interface layer to Wikidata.

Preferred Skills

  • Interest and experience with both machine learning and symbolic techniques for inference.
  • Experience with natural language processing.
  • Significant experience with programming (any language).

243 – Research Assistant, Data-Driven Multiagent Modeling of Human Behavior/OpenMind project

Project Name
Data-Driven Multiagent Modeling of Human Behavior/OpenMind project

Project Description
The Social Simulation Lab works on modeling and simulation of social systems from small group to societal level interactions, as well as data-driven approaches to validating these models. Our approach to simulation relies on multiagent techniques where autonomous, goal-driven agents are used to model the entities in the simulation, whether individuals, groups, organizations, etc.

Job Description
The research assistant will investigate automated methods for building agent-based models of human behavior. The core of the task will be developing and implementing algorithms that can analyze human behavior data and find a decision-theoretic model (or models) that best matches that data. The task will also involve using those models in simulation to further validate their potential predictive power.
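
A minimal sketch of the model-fitting task described above, assuming a softmax (logit) choice model with a single reward weight fit by grid search; the actions, utilities, and behavior trace are invented for the example, not taken from the project's data.

```python
import math

# Hypothetical setup: two actions with utilities that scale with weight w,
# and a toy trace of observed choices to fit against.
ACTIONS = {"share": 1.0, "keep": 0.0}
OBSERVED = ["share", "share", "keep", "share"]

def log_likelihood(w, beta=1.0):
    """Log-likelihood of the observed choices under a softmax policy."""
    z = sum(math.exp(beta * w * u) for u in ACTIONS.values())
    return sum(math.log(math.exp(beta * w * ACTIONS[a]) / z) for a in OBSERVED)

# Grid search for the reward weight that best explains the behavior;
# with 3 of 4 "share" choices, the optimum sits near ln(3) ~ 1.1.
best_w = max((w / 10 for w in range(0, 51)), key=log_likelihood)
print(best_w)  # → 1.1
```

In the project's setting the same maximum-likelihood idea would apply to decision-theoretic models such as POMDPs, with far larger parameter spaces and proper optimizers in place of the grid search.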

Preferred Skills

  • Knowledge of multi-agent systems, especially decision-theoretic models like POMDPs.
  • Experience with Python programming.

242 – Research Assistant, Human-Robot Interaction (HRI)

Project Name
Human-Robot Interaction (HRI)

Project Description
This project aims to develop and study how humanoid robots can facilitate social interactions in learning.

Job Description
Develop speech and gesture capability for a humanoid robot, conduct experiments on how the robot can engage students in learning activities.

Preferred Skills

  • Passion for robotics
  • Experience with programming/scripting language (e.g., Java, C/C++, Python)
  • Experience working with robots

241 – Research Assistant, Digital Embodiment for Learning Transfer Acceleration (DELTA)

Project Name
Digital Embodiment for Learning Transfer Acceleration (DELTA)

Project Description
This project aims to develop serious games to address problematic behaviors, promote behavioral change and improve transfer of learning in virtual environment to the real world.

Job Description
Develop a role-playing simulation game using Unity 3D, conduct evaluation of the game.

Preferred Skills

  • Passion for games
  • Experience with Unity, programming/scripting language (e.g. C/C++, Python), and computer animation
  • Experience in game development

240 – Research Assistant, Summer Intern : Character Animation and Simulation Group

Project Name
Controllable Models of Motion for Virtual Humans

Project Description
Digital characters are an important part of entertainment, simulations, and digital social experiences. Characters can be designed to emulate or imitate human-like (and non-human-like) behavior. However, humans are very complicated entities, and in order to create a convincing virtual human, it is necessary to model various elements, such as human-like appearance, human-like behaviors, and human-like interactions. 3D characters can fail to be convincing representations because of improper appearance, improper behavior, or improper reactions. The goal of this internship is to advance the state-of-the-art in character simulation by improving or adding aspects to a digital character that would make it a more convincing representation of a real person.

Job Description
Research, develop, and integrate methods to improve the fidelity, interaction, or realism of virtual characters. Design or implement algorithms from research papers and integrate them into the animation/simulation system (SmartBody).

Preferred Skills

  • C++
  • Computer graphics and animation knowledge
  • Research in character/animation/simulation/human modeling

239 – Programmer, Integrated Virtual Humans

Project Name
Integrated Virtual Humans

Project Description
The Integrated Virtual Humans project (IVH) seeks to create a wide range of virtual human systems by combining the various research efforts within USC and ICT into a general Virtual Human Architecture. These virtual humans range from relatively simple, statistics based question / answer characters to advanced, cognitive agents that are able to reason about themselves and the world they inhabit. Our virtual humans can engage with real humans and each other both verbally and nonverbally, i.e., they are able to hear you, see you, use body language, talk to you, and think about whether or not they like you. The Virtual Humans research at ICT is widely considered one of the most advanced in its field and brings together a variety of research areas, including natural language processing, nonverbal behavior, vision perception and understanding, task modeling, emotion modeling, information retrieval, knowledge representation, and speech recognition.

Job Description
IVH seeks an enthusiastic, self-motivated programmer to help further advance and iterate on the Virtual Human Toolkit. Additionally, the intern selected will research and develop potential tools to be used in the creation of virtual humans. Working within IVH requires a solid understanding of general software engineering principles and distributed architectures. The work touches on a variety of Computer Science areas, including Artificial Intelligence and Human-Computer Interaction. Given the scope of the Virtual Humans Architecture, the ability to quickly learn how to use existing components and develop new ones is essential.

Preferred Skills

  • Fluent in C++, C# or Java
  • Fluent in one or more scripting languages, such as Python, Tcl, Lua, or PHP
  • Excellent general computer skills
  • Background in Artificial Intelligence a plus

238 – Research Assistant, Real-time Behavior Interpretation

Project Name
Real-time Behavior Interpretation

Project Description
The Real-time Behavior Interpretation project at ICT explores the use of logical abduction in the automated interpretation of ambiguous sets of actions and events. This summer, our efforts are focused on the problem of incremental interpretation of long narratives, where we are modeling how people develop and maintain a running hypothesis of narrative events as they watch and interpret them in real-time.

Job Description
We are seeking a bright and creative intern to assist in our research, with tasks such as optimizing and benchmarking automated reasoning software, authoring the logical forms of commonsense axioms, and creating evaluation materials.

Preferred Skills

  • A deep love for first-order logic and its application in automated reasoning
  • Software engineering skills in Python and/or Rust
  • Familiarity with logical abduction, joint and conditional probabilities, and algorithms for unification, forward chaining, and backward chaining
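To make the unification skill concrete, here is a minimal, illustrative sketch of syntactic unification of first-order terms in Python. It is not the project's actual reasoning engine: the term encoding (tuples for compound terms, `?`-prefixed strings for variables) is an assumption chosen for brevity, and the occurs check is omitted.

```python
def is_var(t):
    # Variables are strings starting with '?' (an encoding assumed for this sketch).
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst=None):
    """Unify two terms, returning a substitution dict or False on failure.

    Compound terms are tuples (functor, arg1, ...); constants are plain strings.
    Note: no occurs check, so cyclic bindings are not detected.
    """
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is False:
                return False
        return subst
    return False
```

For example, `unify(('likes', '?x', 'mary'), ('likes', 'john', '?y'))` yields the substitution `{'?x': 'john', '?y': 'mary'}`; the same routine sits at the core of forward chaining, backward chaining, and abductive matching.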

237 – Research Assistant, Data-driven Interactive Narrative Engine

Project Name
Data-driven Interactive Narrative Engine

Project Description
The Data-driven Interactive Narrative Engine project at ICT explores the use of narrative language models, trained on millions of personal stories, in immersive simulation environments. This summer, we are applying recurrent neural network models to the task of predicting the next simulation event in interactive settings.
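As a rough illustration of the next-event prediction interface described above, the sketch below implements a tiny Elman-style recurrent cell over a discrete event vocabulary in plain Python. The event names, network size, and weights are all placeholders; the project's actual models, training procedure, and data are not shown here.

```python
import math
import random

def softmax(z):
    # Numerically stable softmax over a list of logits.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class TinyRNN:
    """A minimal Elman-style recurrent cell for next-event prediction.

    Illustrative only: weights are small random values, not trained.
    """
    def __init__(self, vocab, hidden=8, seed=0):
        rng = random.Random(seed)
        self.vocab = vocab
        V, H = len(vocab), hidden
        init = lambda rows, cols: [[rng.uniform(-0.1, 0.1) for _ in range(cols)]
                                   for _ in range(rows)]
        self.Wxh = init(H, V)   # input -> hidden
        self.Whh = init(H, H)   # hidden -> hidden (recurrence)
        self.Why = init(V, H)   # hidden -> output logits
        self.h = [0.0] * H      # recurrent state

    def step(self, event):
        """Consume one observed event; return a distribution over the next event."""
        x = [1.0 if e == event else 0.0 for e in self.vocab]  # one-hot input
        self.h = [math.tanh(sum(w * xi for w, xi in zip(row_x, x)) +
                            sum(w * hi for w, hi in zip(row_h, self.h)))
                  for row_x, row_h in zip(self.Wxh, self.Whh)]
        logits = [sum(w * hi for w, hi in zip(row, self.h)) for row in self.Why]
        return dict(zip(self.vocab, softmax(logits)))
```

Calling `step` with each simulation event in turn carries the hidden state forward, so the returned distribution conditions on the full event history so far.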

Job Description
We are seeking a bright and creative intern to assist in our research, to include tasks such as designing system evaluations, architecting early prototypes, and analyzing system performance.

Preferred Skills

  • Experience with the application of deep learning to natural language processing tasks
  • Software engineering skills in Python
  • Familiarity with commercial interactive narrative titles and narrative-based computer games would be helpful

236 – Programmer, The Sigma Cognitive Architecture

Project Name
The Sigma Cognitive Architecture

Project Description
This project is developing a new cognitive architecture, i.e., a computational hypothesis about the fixed structures underlying a mind, whether natural or artificial. Sigma is built in Lisp and is based on the elegant but powerful formalism of graphical models (factor graphs in particular). We are working on a broad variety of topics, including learning and memory, problem solving and decision making, perception and imagery, speech and language, and Theory of Mind. We are also developing an adaptive virtual human (a graphically embodied humanoid that can learn from its experience).
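For readers unfamiliar with factor graphs, the sketch below computes an exact marginal over a tiny discrete factor graph by brute-force enumeration. This is only a conceptual illustration (Sigma itself works via message passing, i.e. the sum-product algorithm, and is written in Lisp rather than Python); the variables and potentials here are made up.

```python
from itertools import product

def marginal(variables, factors, query):
    """Exact marginal of `query` in a small discrete factor graph.

    variables: dict mapping name -> list of values
    factors:   list of (scope_tuple, table) pairs, where table maps a
               tuple of values (one per scope variable) to a potential
    query:     name of the variable whose marginal to compute

    Enumerates every joint assignment, so it is only feasible for tiny
    graphs; message passing achieves the same result far more cheaply.
    """
    names = list(variables)
    scores = {v: 0.0 for v in variables[query]}
    for assignment in product(*(variables[n] for n in names)):
        env = dict(zip(names, assignment))
        weight = 1.0
        for scope, table in factors:
            weight *= table[tuple(env[s] for s in scope)]
        scores[env[query]] += weight
    z = sum(scores.values())  # normalizing constant
    return {v: s / z for v, s in scores.items()}
```

With a prior factor on A and a conditional factor linking A to B, `marginal({'A': [0, 1], 'B': [0, 1]}, factors, 'B')` sums out A and returns the normalized distribution over B.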

Job Description
We are looking for a student interested in contributing to the development of, and/or experimentation with, the adaptive virtual human.

Preferred Skills

  • Programming (Lisp preferred, but can be learned after arrival)
  • Cognitive architectures (experience preferred, but interest is essential)

235 – Research Assistant, Multimodal Behavior Analytics and Machine Learning

Project Name
Multimodal Behavior Analytics and Machine Learning

Project Description
Human face-to-face communication is a little like a dance: participants continuously adjust their behaviors based on verbal and nonverbal displays and signals. Human interpersonal behaviors have long been studied in linguistics, communication, sociology, and psychology. Recent advances in machine learning, pattern recognition, and signal processing have enabled a new generation of computational tools to analyze, recognize, and predict human communication behaviors during social interactions. This new research direction has broad applicability, including the improvement of human behavior recognition, the synthesis of natural animations for robots and virtual humans, the development of intelligent tutoring systems, and the diagnosis of social disorders (e.g., autism spectrum disorder).

Job Description
The internship will be performed under the supervision of Prof. Stefan Scherer. The goal of this internship is to create new multimodal machine learning approaches that model multimodal affect, well-being, or interpersonal skills using in-house recorded datasets as well as publicly available debates. To achieve this goal, the intern will gain experience with state-of-the-art machine learning algorithms and proficiency in the multimodal integration of information. The intern will also gain experience using ICT’s virtual human and nonverbal behavior sensing architectures.
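One of the simplest forms of multimodal integration is decision-level (late) fusion, where per-modality classifiers are combined by weighting their output distributions. The sketch below is a hypothetical illustration of that idea only; the project's actual fusion strategy (which could equally be feature-level or a joint model) is not specified here, and the modality names and weights are made up.

```python
def late_fusion(modality_probs, weights=None):
    """Weighted late fusion of per-modality class probabilities.

    modality_probs: one dict per modality (e.g. audio, video, language),
                    each mapping class label -> probability
    weights:        optional per-modality weights; defaults to uniform

    Illustrative sketch only; real systems often learn the weights.
    """
    if weights is None:
        weights = [1.0 / len(modality_probs)] * len(modality_probs)
    fused = {}
    for probs, w in zip(modality_probs, weights):
        for label, p in probs.items():
            fused[label] = fused.get(label, 0.0) + w * p
    z = sum(fused.values())  # renormalize in case weights don't sum to 1
    return {label: v / z for label, v in fused.items()}
```

For instance, fusing an audio classifier's `{'pos': 0.7, 'neg': 0.3}` with a video classifier's `{'pos': 0.4, 'neg': 0.6}` under uniform weights averages the two distributions.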

Preferred Skills

  • Experience in Machine Learning
  • Experience using MATLAB
  • Experience in Signal Processing