Summer Research program positions

390 - Team-Adaptive Coach for Training Artificial Intelligence Competencies (TACTAIC)

Project Name
Team-Adaptive Coach for Training Artificial Intelligence Competencies (TACTAIC)

Project Description
TACTAIC seeks to build AI tools to train AI skills. The project aims to create a personal learning assistant and associated learning resources that support learners in developing competencies in artificial intelligence (i.e., using artificial intelligence to teach artificial intelligence). By the summer of 2023, we anticipate testing an initial prototype as well as continuing to develop learning resources. This project will involve both web-application and mobile-learning components.

Job Description
Over the course of the summer, the intern will assist in developing software infrastructure, participate in AI-integrated content development, and help with informal testing of the prototype. Content development may include content creation (e.g., recording a video resource) as well as game-like learning scenarios (e.g., interactive simulations of AI algorithms). Platforms for this work include Python AI/ML libraries, React Native, React JS, Node.js, and GraphQL. Interns on this project will be expected to document their designs, contributions, and analyses to contribute to related publications.

Preferred Skills
• At least one AI course
• JavaScript
• Python, Jupyter notebooks

Apply now

391 - Content Analytics and Tools for the Personal Assistant for Life Long Learning (PAL3)

Project Name
Content Analytics and Tools for the Personal Assistant for Life Long Learning (PAL3)

Project Description
In this research, we seek to prototype and study the effects of leveraging artificial intelligence to deliver personalized, interactive learning recommendations. The primary focus of this work is to address the “content bottleneck” for adaptive systems, such that new content may be integrated into the system with minimal training.

The testbed for this work will be the Personal Assistant for Life-Long Learning (PAL3), which uses artificial intelligence to accelerate learning through just-in-time training and interventions that are tailored to an individual. The types of content that will be analyzed using machine learning and included in the system cover a wide range of domains. These include:

• Suicide Prevention & Risk Reduction: The military offers a wide array of resources to increase resilience and reduce destructive behaviors (e.g., suicide, alcohol abuse, sexual assault). Unfortunately, these resources often require searching across many web portals and the resources are often static (e.g., no interaction, no personalization). Additionally, social stigmas may present barriers to seeking out information or help directly.

• Artificial Intelligence: Training on topics such as AI algorithms and emerging technologies that leverage AI and data science. Content on how to build, evaluate, and compare different AI systems for realistic use cases.

• Leadership Strategies: Training on different types of leadership styles, with opportunities to compare and apply leadership strategies to simulated scenarios.

• Technical Skills: Training on technical skills such as electronics fundamentals (e.g., transistors) to master concepts on how circuits behave under different types of current or voltage loads.

Job Description
The goal of the internship will be to expand the capabilities of the system to enhance learning and engagement. The specific tasks will be determined based on the status of the project at the time of the internship as well as your interests. Possible topics include work with: (1) Machine Learning for Content Analysis (e.g., multimedia: images, video, text, dynamic content), (2) Natural Language Processing or Dialog Systems for Coaching, (3) Learning Recommendation Systems, and (4) Content-specific technologies (e.g., Leadership or Technical Training scenarios). Opportunities will be available to contribute to peer-reviewed publications.
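
As a flavor of topic (3) above, a minimal content-based recommender over free-text learning resources can be sketched in a few lines. The resource snippets and scoring below are hypothetical, not the PAL3 implementation:

```python
# Minimal content-based recommendation sketch (hypothetical data, not PAL3 code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented resource descriptions spanning the domains listed above.
resources = [
    "resilience training and stress management resources",
    "introduction to AI algorithms and data science",
    "comparing leadership styles in simulated scenarios",
    "transistor basics and circuit behavior under voltage loads",
]

# TF-IDF lets new content be indexed and recommended without per-item model
# training, one simple way to ease the "content bottleneck" described above.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(resources)

def recommend(query: str, top_k: int = 2) -> list[int]:
    """Return indices of the resources most similar to the learner's query."""
    scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    return scores.argsort()[::-1][:top_k].tolist()

print(recommend("how do transistors work"))
```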

Preferred Skills
• Python, JavaScript/Node.js, R
• Machine Learning Expertise, Basic AI Programming, or Statistics
• Strong interest in machine learning approaches, intelligent agents, human and virtual behavior, and social cognition

Apply now

392 - 3D Human Motion Synthesis (AI/Computer Vision)

Project Name
3D Human Motion Synthesis (AI/Computer Vision)

Project Description
The 3D Motion Synthesis project aims to build a scalable motion synthesis system that generates 3D full-body human animations. It could be used to create 3D co-speech gesture animations from input speech or to synthesize human actions from text descriptions. The key tasks include extracting motion datasets from videos, designing efficient neural network architectures, and reproducing the motion styles of different people to synthesize diverse motions and actions. The focus is to adapt ideas from recent text-to-image work (e.g., DALL-E, Stable Diffusion) and develop new methods for motion synthesis. The goal is to develop novel machine learning methods for driving real-time 3D virtual avatars for use in training simulation and augmented reality.

The project seeks to:

• Develop 3D human pose estimation methods to extract 3D character motions from monocular videos.
• Construct a 3D motion database from 2D video sources.
• Develop motion generators through machine learning.
• Research motion style transfer and disentanglement to allow synthesis of motions in varying styles.
• Implement real-time solutions for live motion synthesis on 3D virtual avatars.
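
To make the diffusion idea above concrete, here is a minimal, hypothetical sketch of one denoising-diffusion training step applied to pose vectors; the actual project would condition a much larger network on speech or text:

```python
# One diffusion training step on toy pose data (shapes and model are illustrative).
import torch
import torch.nn as nn

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # standard linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, 0)  # cumulative signal-retention terms

# Toy denoiser; in practice a transformer conditioned on text or speech features.
denoiser = nn.Sequential(nn.Linear(63 + 1, 256), nn.ReLU(), nn.Linear(256, 63))

x0 = torch.randn(8, 63)                    # batch of poses (e.g., 21 joints x 3 DoF)
t = torch.randint(0, T, (8,))
eps = torch.randn_like(x0)

# Forward process: corrupt clean motion with schedule-weighted Gaussian noise.
a = alpha_bar[t].unsqueeze(1)
xt = a.sqrt() * x0 + (1 - a).sqrt() * eps

# The network learns to predict the injected noise; sampling later inverts the chain.
pred = denoiser(torch.cat([xt, t.unsqueeze(1).float() / T], dim=1))
loss = nn.functional.mse_loss(pred, eps)
loss.backward()
```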

Job Description
The ML Researcher (AI/Computer Vision) will work with ICT researchers to develop a 3D motion generator, driven by speech or text, that reproduces the personalized motion style of a given person.

Preferred Skills
• Experience with machine learning and computer vision (Python, PyTorch)
• Knowledge of or experience in monocular 3D human pose estimation, generative modeling, and/or natural language processing.
• Previous experience working with video data or 3D character animation.
• Experience with Unity/Unreal game engine and related programming skills (C++/C#)

Apply now

393 - One World Terrain Project (AR/Machine Perception)

Project Name
One World Terrain Project (AR/Machine Perception)

Project Description
One World Terrain (OWT) is an applied research effort focused on researching and prototyping capabilities that support a fully geo-referenced 3D planetary model for use in the Army’s next-generation training and simulation environments. USC-ICT’s research exploits new techniques and advances in the collection, processing, storage, and distribution of geospatial data to various runtime applications.

The project seeks to:

• Construct a single 3D geospatial database for use in next-generation simulations and virtual environments
• Procedurally recreate 3D terrain using drones and other capturing equipment
• Develop methods for real-time 3D object tracking and camera localization from 2D monocular videos
• Extract semantic features from raw 3D terrain and point clouds to build simulation-ready environments
• Reduce the cost and time for creating geo-specific datasets for modeling and simulation (M&S)

Job Description
The ML Researcher (AR/Machine Perception) will work with the OWT team in support of developing efficient solutions for 3D object tracking and camera localization from 2D monocular videos.
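
For context on this task, below is a minimal two-frame camera localization sketch built from classic OpenCV components; the synthetic frames and intrinsics are placeholders, and the research itself may favor learned alternatives:

```python
# Two-frame monocular camera localization sketch (classic pipeline, toy data).
import cv2
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # hypothetical intrinsics
img1 = (np.random.rand(480, 640) * 255).astype(np.uint8)     # stand-in frame
img2 = np.roll(img1, 3, axis=1)                              # simulated small motion

# Track sparse corners from frame 1 into frame 2 with pyramidal Lucas-Kanade flow.
p1 = cv2.goodFeaturesToTrack(img1, maxCorners=500, qualityLevel=0.01, minDistance=7)
p2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, p1, None)
p1, p2 = p1[status == 1], p2[status == 1]

# Estimate relative camera motion (rotation R, unit-scale translation t) from tracks.
E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
print("rotation:\n", R, "\ntranslation direction:", t.ravel())
```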

Preferred Skills
• Experience with machine learning and computer vision (Python, TensorFlow, PyTorch)
• Interest/experience in 3D computer vision applications such as 3D object detection, monocular depth estimation, structure-from-motion, and/or SLAM.
• Familiarity with the photogrammetry reconstruction process.
• Experience with Unity/Unreal game engine and related programming skills (C++/C#)
• Interest/experience with geographic information system (GIS) applications and datasets.

Apply now

394 - Machine learning for human behavior understanding

Project Name
Machine Learning for Human Behavior Understanding

Project Description
The Intelligent Human Perception Lab at USC’s Institute for Creative Technologies conducts research on automatic human behavior analysis. We are looking for interns who are interested in performing research on multimodal learning for understanding behaviors and actions. The project requires performing data pre-processing, evaluating existing baseline methods, and contributing to the development of novel machine learning approaches.

Job Description
You will perform research on machine learning for human behavior understanding. This requires coding, data wrangling, and documenting the research progress. The internship should result in a peer-reviewed publication.

Preferred Skills
• Python, PyTorch
• Linux
• Machine learning

Apply now

395 - Integrated Virtual Human R&D Project

Project Name
Integrated Virtual Human R&D Project

Project Description
The Integrated Virtual Human R&D project seeks to design, develop, and validate a wide range of virtual human systems by combining the various research efforts within USC and ICT. These virtual humans range from relatively simple, statistics-based question/answer characters to advanced, cognitive agents that are able to reason about themselves and the world they inhabit. Our virtual humans can engage with real humans and each other both verbally and nonverbally, i.e., they are able to hear you, see you, use body language, talk to you, and think about whether or not they like you. The Virtual Humans research at ICT is widely considered one of the most advanced in its field and brings together a variety of research areas, including natural language processing, nonverbal behavior, visual perception and understanding, task modeling, emotion modeling, information retrieval, knowledge representation, and speech recognition.

Job Description
As part of the Integrated Virtual Humans Project, you will help us review relevant literature and design cutting-edge virtual human research. You will work collaboratively with our research team to help determine the parameters of our investigation, propose and evaluate potential human-subjects research methodology, prepare IRB applications, and maintain the study repository. Study execution and co-authorship on published manuscripts are both possible.

Preferred Skills
• Literature Review
• Study Design (Qualitative and Quantitative Methods)
• Qualtrics
• SPSS

Apply now

396 - RIDE Integration

Project Name
RIDE Integration

Project Description
The Rapid Integration & Development Environment (RIDE) is a foundational Research and Development (R&D) platform that unites many Department of Defense (DoD) and Army simulation efforts into a rapid prototyping sandbox.

RIDE integrates a range of capabilities, including 3D Geo-spatial terrain, Non-Player Character (NPC) Artificial Intelligence (AI) agent behaviors, Experience Application Programming Interface (xAPI) logging, virtual humans, multiplayer networking, scenario creation, machine learning approaches, and multi-platform support. It leverages robust game engine technology while remaining agnostic to any specific game or simulation engine.

RIDE is freely available through Government Purpose Rights (GPR) with the aim of lowering the barrier to entry for R&D efforts within the simulation community, in particular for training, analysis, exploration, and prototyping. See https://ride.ict.usc.edu for more details.

Job Description
RIDE combines best-in-breed solutions from both academia and industry in support of military training. Some of the challenges associated with this include 1) integrating individual technologies into a common, principled framework, 2) developing demonstrations that showcase integrated capabilities, and 3) creating new content that leverages these capabilities.

Considering these challenges, the tasks outlined for the summer internship are as follows:
• Become familiar with the RIDE platform;
• Design and develop new demonstrations leveraging existing RIDE capabilities;
• Provide feedback on development and authoring tools for creating new content and support implementing improvements;
• Identify and integrate additional third-party capabilities.

Working within this project requires a solid understanding of general software engineering principles and distributed architectures. The work touches on a variety of Computer Science areas, including Artificial Intelligence and Human-Computer Interaction. Given the scope of RIDE, the ability to quickly learn how to use existing components and develop new ones is essential.

Preferred Skills
• Fluent with Unity or Unreal Engine
• Excellent C# or C++ programming skills
• Fluent in one or more scripting languages (e.g., Python)
• Background in artificial intelligence and machine learning a plus
• Excellent general computer and communication skills

Apply now

397 - Mixed Reality Adaptable Interfaces - UI/UX

Project Name
Mixed Reality Adaptable Interfaces – UI/UX

Project Description
The project’s goal is to provide a conceptual framework for developing an adaptive and adaptable context-aware interface for head-mounted displays, and to demonstrate and evaluate with prototypes the benefits of such a framework. If successful, we believe this will initiate a critical shift away from traditional two-domain system interfaces and toward the design and development of the next-generation system-driven interface.

The team will develop prototype interactions that iterate on and instantiate the tools, techniques, and guidelines necessary within an adaptive user interface to maximize short-term and long-term goals.

Job Description
The team seeks a summer intern to develop one or more concepts for experimental research designs and to contribute to developing hypotheses and data-driven testing associated with the project. The intern will collaborate with researchers and developers to build up knowledge in the domain and will then work on developing their own ideas for potential formal user studies and experimental designs.

Preferred Skills
• Experimental Design
• Hypothesis Testing
• Cognitive Psychology experience
• Human Factors research

Apply now

398 - Deep (Multi-agent) Reinforcement Learning for Military Training Simulations

Project Name
Deep (Multi-agent) Reinforcement Learning for Military Training Simulations

Project Description
Military training simulations occur in complex, continuous, stochastic, partially observable, non-stationary, and doctrine-based environments with multiple players either collaborating or competing against each other. Multi-agent Reinforcement Learning (MARL) presents opportunities in military simulations for creating simulated enemy forces that can learn from and adapt to techniques used by friendly forces to become challenging opponents, and for developing policies at a level of abstraction usable by an operational planner in military domains.

Job Description
The ICT Cognitive Architecture Lab leverages machine learning to generate autonomous and adaptive synthetic characters for use in interactive simulations. Our current focus is on creating such synthetic characters for military training simulations, with multiple players collaborating or competing against each other. Interns will have the opportunity to learn and use multi-agent reinforcement learning, along with recent representation aggregation and behavior prediction techniques that stabilize behavior adaptation.
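
The adaptation challenge above can be illustrated with a toy example: two independent Q-learners in a zero-sum matrix game, each treating the other as part of its environment. This is a minimal sketch, not the lab's actual training setup:

```python
# Independent Q-learning in a 2-player zero-sum matrix game (toy MARL illustration).
import numpy as np

rng = np.random.default_rng(0)
payoff0 = np.array([[+1, -1], [-1, +1]])  # matching pennies: agent 0 wins on a match

Q = [np.zeros(2), np.zeros(2)]            # one stateless Q-table per agent
eps, alpha = 0.1, 0.05

for step in range(20000):
    # Epsilon-greedy action selection for both agents simultaneously.
    acts = [rng.integers(2) if rng.random() < eps else int(np.argmax(q)) for q in Q]
    r0 = payoff0[acts[0], acts[1]]
    rewards = (r0, -r0)                   # zero-sum: the opponent gets the negation
    # Each agent updates only from its own action and reward, so the environment
    # looks non-stationary to each learner, the core MARL stability challenge.
    for i in range(2):
        Q[i][acts[i]] += alpha * (rewards[i] - Q[i][acts[i]])

print("agent 0 Q-values:", Q[0], "agent 1 Q-values:", Q[1])
```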

Preferred Skills
• Deep Reinforcement Learning
• Graph Neural Networks
• Unity ML-Agents

Apply now

399 - Estimating the Risk of Adverse Brain Health in Service Members

Project Name
Estimating the Risk of Adverse Brain Health in Service Members

Project Description
We are conducting research to understand how best to estimate the risk of adverse brain health outcomes in service members based on military and non-military exposures to blunt head trauma, extreme environmental conditions, and sound and blast over-pressure. The intern will research the incidence of such factors in different units, jobs, and locations, and compile the data to be compared to the incidence of reported mTBI and symptomatic reports.
This research complements and extends our current research on Brain Health in Marines and Army service members in California, North Carolina, and Hawaii.

Job Description
The researcher will work with historical and recently collected data on brain health in service members. The historical data will be accessed through service member records (Marines and Army), which the researcher may need to organize. Current data (including personal blast exposure and childhood exposure reports) will be available from our current projects exploring and documenting the causes and symptoms of adverse brain health outcomes in current service members. Both sources of data will be combined to look for trends, correlations, and significant relationships that would allow researchers to create a risk score for individual service members.
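
As a purely illustrative sketch of this kind of data combination, the snippet below z-scores invented exposure measures into a naive risk score and checks it against a toy outcome column; all column names and values are hypothetical:

```python
# Hypothetical risk-score sketch; real records, measures, and scoring will differ.
import pandas as pd

df = pd.DataFrame({
    "blunt_head_impacts": [0, 3, 12, 1, 7],
    "blast_overpressure_events": [2, 0, 9, 1, 5],
    "years_extreme_environment": [1.0, 0.5, 4.0, 0.0, 2.5],
    "reported_mtbi": [0, 0, 1, 0, 1],     # toy outcome of interest
})

exposures = ["blunt_head_impacts", "blast_overpressure_events",
             "years_extreme_environment"]

# Z-score each exposure so they are comparable, then average into a naive score.
z = (df[exposures] - df[exposures].mean()) / df[exposures].std()
df["risk_score"] = z.mean(axis=1)

# First sanity check: does the score track the reported outcome in this toy sample?
print(df[["risk_score", "reported_mtbi"]].corr())
```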

Preferred Skills
• Research Skills
• Attention to Detail
• Data Analysis

Apply now

400 - Team Dialogue Processing

Project Name
Team Dialogue Processing

Project Description
The project will investigate techniques to go beyond the current state of the art in human-computer dialogue by creating explicit models of dialogue agent and human interlocutor identity; of team task planning, execution, and tracking; and of multiparty dialogue structure.

Job Description
The student intern will work with the Natural Language Research Group (including professors, other professional researchers, and students) to advance one or more of the research areas described above. If the student has a particular goal or related work at their home institution, they should briefly describe this in the application letter. Specific activities will depend on the project and on the skills and interests of the intern, but will include one or more of the following: programming new dialogue or evaluation policies, annotation of dialogue corpora, and testing with human subjects.
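
For applicants new to the area, "programming new dialogue policies" can be pictured, in its simplest rule-based form, as a mapping from (user act, dialogue state) to a system act. The states and intents below are invented for illustration; statistical or multiparty policies replace such a table with learned models:

```python
# Toy rule-based dialogue policy (hypothetical states and intents).
POLICY = {
    ("greet", "start"): ("ask_task", "eliciting"),
    ("state_task", "eliciting"): ("confirm_task", "confirming"),
    ("affirm", "confirming"): ("execute_task", "done"),
    ("deny", "confirming"): ("ask_task", "eliciting"),
}

def next_move(user_intent: str, state: str) -> tuple[str, str]:
    """Map the user's dialogue act and current state to a system act and new state."""
    return POLICY.get((user_intent, state), ("clarify", state))

state = "start"
for intent in ["greet", "state_task", "deny", "state_task", "affirm"]:
    act, state = next_move(intent, state)
    print(f"user: {intent:10s} -> system: {act:12s} (state={state})")
```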

Preferred Skills
• Some familiarity with dialogue systems or natural language dialogue
• Either programming ability or experience with statistical methods and data analysis
• Ability to work independently as well as in a collaborative environment

Apply now

401 - Conversations with Heroes and History

Project Name
Conversations with Heroes and History

Project Description
ICT’s time-offset interaction technology allows people to have natural conversations with videos of people who have had extraordinary experiences and learn about events and attitudes in a manner similar to direct interaction with the person. Subjects will be determined at the time of the internship. Previous subjects have included Holocaust survivors, sexual assault survivors, and Army heroes.

Job Description
The intern will assist with developing, improving, and analyzing the systems. Tasks may include running user tests, analyzing content and interaction results, and improving the systems. The precise tasks will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship.

Preferred Skills
• Very good spoken and written English (native or near-native competence preferred)
• General computer operating skills (some programming experience desirable)
• Experience in one or more of the following: (1) interactive story authoring and design; (2) linguistics or language processing; (3) a related field, such as museum-based informal education

Apply now

402 - 3D Scene Understanding and Processing

Project Name
3D Scene Understanding and Processing

Project Description
The Vision and Graphics Lab at ICT pursues research and engineering work in understanding and processing 3D scenes, specifically in reconstruction, recognition, and segmentation using learning-based techniques. This work has important practical applications in autonomous driving, AR, and VR. However, generating realistic 3D scene data for training and testing is challenging due to limited photorealism in synthetic data and the intensive manual work required to process real data. In addition, the large scale of complex scenes further increases the difficulty of utilizing such data. We therefore need to develop advanced techniques for better 3D data generation. Our first goal is an automatic method for data cleanup, organization, annotation, and completion of both real and synthetic data, either in image space or 3D space, to generate well-structured data for multiple learning-based 3D tasks. We will then use the data to train neural networks for the joint reconstruction and segmentation of large-scale 3D scenes.

Job Description
The intern’s task will focus on 3D data processing: developing intelligent algorithms and editing/visualization tools to fix problems in the current dataset (inaccurate segmentation, incomplete surfaces, distorted textures, and so on) and to generate clean and accurate 3D models for training. Meanwhile, the intern will also join the research in 3D scene reconstruction and segmentation using these data.
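
One simple example of the kind of automated cleanup involved is statistical outlier removal on a point cloud; the data and thresholds below are placeholders:

```python
# Statistical outlier removal sketch for a point cloud (illustrative only).
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(1000, 3)          # stand-in for a scanned scene fragment

# For each point, measure the mean distance to its k nearest neighbors.
tree = cKDTree(points)
dists, _ = tree.query(points, k=9)        # the point itself plus 8 neighbors
mean_dist = dists[:, 1:].mean(axis=1)     # drop the zero self-distance

# Flag points with unusually sparse neighborhoods as likely scanning noise.
threshold = mean_dist.mean() + 2.0 * mean_dist.std()
clean = points[mean_dist <= threshold]
print(f"kept {len(clean)} of {len(points)} points")
```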

Preferred Skills
• Engineering math, physics, and programming: C++, Python, GPU programming, Unity3D, OpenGL
• Basic skills in deep learning and experience in training networks

Apply now

403 - Optical Simulations for Synthetic Image Quality Testing

Project Name
Optical Simulations for Synthetic Image Quality Testing

Project Description
The Vision and Graphics Lab at ICT pursues research and engineering work in understanding, processing, and advancing physically based rendering of 3D models using learning-based techniques. Rapid generation of these quality-benchmarked assets is a goal pursued by both academia and industry, with important practical applications in teleconferencing, holoportation, AR, and VR. However, generating realistic 3D assets in real time is challenging due to limited photorealism in synthetic data and in images generated from optical layers in a deep learning network. Provided with an image from a first-iteration, optically informed deep network, the task is to run simulation tests for our predicted parameters in optical simulation software to check for accuracy against the theoretical image formation model. We will then run various image quality checks for SNR, tonal response, image distortion, and chromatic aberration, among others.

Job Description
The intern’s task will focus on running simulation tests for the quality and accuracy of predicted parameters in optical simulation software, checking for accuracy against the theoretical image formation model. The intern will also perform various image quality checks for provided images in terms of resolution, sharpness, dynamic range, etc.
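
Two of the simpler checks mentioned, SNR and a sharpness proxy, can be sketched as follows; a real evaluation would use calibrated test charts and standardized procedures rather than this toy image:

```python
# Toy image quality checks: SNR over a uniform patch and a gradient sharpness proxy.
import numpy as np

img = np.random.rand(480, 640)                     # stand-in for a simulated image

# SNR over a nominally uniform patch: mean signal over its standard deviation.
patch = img[200:240, 300:340]
snr_db = 20 * np.log10(patch.mean() / patch.std())

# Sharpness proxy: mean gradient magnitude; blurrier images score lower.
gy, gx = np.gradient(img)
sharpness = np.sqrt(gx**2 + gy**2).mean()

print(f"patch SNR: {snr_db:.1f} dB, sharpness proxy: {sharpness:.4f}")
```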

Preferred Skills
• Experience with running optical simulations in software such as CODE V or Zemax
• Familiarity with image and signal processing techniques such as Fourier analysis, DCT, and basic image operations
• Engineering math, physics, and programming: C++, Python; GPU programming optional
• Basic skills in deep learning and experience in training networks

Apply now

404 - Material Reflectance Property Database for Neural Rendering

Project Name
Material Reflectance Property Database for Neural Rendering

Project Description
The Vision and Graphics Lab at ICT pursues research in physically-based neural rendering for general objects. Although the modern rendering pipeline used in industry achieves compelling and realistic results, it still has general issues: it requires professionals to manually tweak material properties to match the natural-looking appearance of each object, which is costly for a complex scene with multiple objects. Rendering a human face alone requires separately modeling the eyeballs, teeth, facial hair, and skin. Neural rendering, by contrast, promises easy, high-quality rendering. By building a radiance field that combines geometry, lighting, and material property models in a network, neural rendering can render a complicated scene in real time. Manually specified material properties are no longer required; the network assigns them automatically according to each object’s material label. This breakthrough will be beneficial for immersive AR/VR experiences in the future.

Job Description
At the center of neural rendering is material property estimation, for which we need a material database. The intern will work with lab researchers to capture and process the material database using our dedicated Light Stage. The database will include extensive objects (e.g., cloth, wood, hair) with different reflectance properties, measured using our Light Stage under controllable lighting conditions. This database will be used for developing physically-based neural rendering algorithms.
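
As background, the textbook version of this kind of controlled-lighting measurement is Lambertian photometric stereo, sketched below with hypothetical light directions and stand-in images; the actual Light Stage pipeline is far more elaborate:

```python
# Lambertian photometric stereo sketch: recover per-pixel normals and albedo.
import numpy as np

# Three images of the same object, each lit from a known direction.
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)  # light directions
L /= np.linalg.norm(L, axis=1, keepdims=True)
images = np.random.rand(3, 64, 64)                            # stand-in intensities

# Lambertian model: I = L @ (albedo * normal). Solve per pixel by least squares.
I = images.reshape(3, -1)
G, *_ = np.linalg.lstsq(L, I, rcond=None)       # G = albedo * normal, shape (3, H*W)
albedo = np.linalg.norm(G, axis=0)
normals = (G / np.maximum(albedo, 1e-8)).reshape(3, 64, 64)

print("recovered albedo range:", albedo.min(), albedo.max())
```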

Preferred Skills
• C++, OpenGL, GPU programming; operating systems: Windows and Ubuntu; strong math skills
• Experience with computer vision techniques: multi-camera stereo, optical flow, photometric stereo, Spherical Harmonics.
• Knowledge in modern rendering pipelines, image processing, computer vision, computer graphics

Apply now

405 - Real-Time Modelling and Rendering of Virtual Humans

Project Name
Real-Time Modelling and Rendering of Virtual Humans

Project Description
The Vision and Graphics Lab at ICT pursues research and production work to perform high-quality facial scans for Army training and simulations, as well as for VFX studios and game development companies. There has been continued research on how machine learning can be used to model 3D data effortlessly with data-driven deep learning networks rather than traditional methods. This requires large amounts of data, more than can be obtained from raw light stage scans alone. We are currently working on software to aid both in the visualization of our new facial scan database and in animating and rendering virtual humans. To realize and test its usability, we need a tool that can model and render the created avatar through a web-based GUI. The goal is a real-time, responsive web-based renderer on a client, controlled by software hosted on the server.

Job Description
The intern will work with lab researchers to develop a tool that visualizes assets generated by the machine learning model of the rendering pipeline in a web browser using a Unity plugin, and to integrate deep learning models so they can be called by web-based APIs. This will include the development of the latest techniques in physically-based real-time character rendering and animation. Ideally, the intern will be familiar with physically based rendering, subsurface scattering techniques, hair rendering, and 3D modeling and reconstruction.
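
A minimal, hypothetical sketch of the web-API side of this work: a server endpoint that runs a stand-in model and returns animation parameters for a browser-based renderer to apply. The endpoint name and dummy model are invented for illustration:

```python
# Toy Flask endpoint serving animation parameters to a web client.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_model(expression: str) -> list[float]:
    """Stand-in for the ML model that would produce blendshape weights."""
    return [0.0] * 51 if expression == "neutral" else [0.1] * 51

@app.route("/api/blendshapes", methods=["POST"])
def blendshapes():
    # A browser-side renderer (e.g., a Unity WebGL client) posts a request and
    # receives parameters to apply to the avatar.
    expression = request.get_json().get("expression", "neutral")
    return jsonify({"weights": run_model(expression)})

if __name__ == "__main__":
    app.run(port=5000)
```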

Preferred Skills
• C++; engineering math, physics, and programming; OpenGL/Direct3D; GLSL/HLSL; Unity3D
• Python, GPU programming, Maya, version control (svn/git)
• Knowledge in modern rendering pipelines, image processing, rigging, blendshape modeling.
• Web-based skills – WebGL, Django, JavaScript, HTML, PHP.

Apply now

406 - Automated Negotiation Agents

Project Name
Automated Negotiation Agents

Project Description
The aim of this project is to help develop AI-driven agents that can negotiate with people over the web via text chat. Work will involve learning about machine learning, large language models, and theories of negotiation. One aspect of the project will be to understand how a person’s values shape their negotiation behavior and how to potentially recognize different values automatically through how a person negotiates.

Job Description
The ideal candidate is a PhD student in computer science with expertise in one or more of the following areas: multi-agent reinforcement learning, deep learning, chatbots, or tuning large language models. Knowledge of experimental design and analysis is a benefit.

Preferred Skills
• BS or MS in Computer Science
• Experience with multi-agent reinforcement learning
• Experience with Natural Language Processing Techniques

Apply now

ARL 75 - Natural Language Explanation of Atypical and Anomalous Environments

Project Name
Natural Language Explanation of Atypical and Anomalous Environments

Project Description
This project studies behavior and develops technology for dialogue systems in which a robot will generate natural language descriptions of environments and attempt to explain any atypical or anomalous properties that are observed. These texts are meant to support a remote-located human teammate who is trying to complete specific tasks with the robot’s help. This work involves aspects of computer vision to visually analyze the environment, commonsense reasoning to understand the most important elements, and natural language understanding and generation to respond to the human. We will work with dialogue corpora surrounding low-quality images taken in real-world and virtual environments.

Job Description
Possible projects include:
•  Conduct manual and/or computational analysis of dialogue about images and environments
•  Experiment with or combine existing natural language generation and/or computer vision software for description and explanation
•  Design evaluation criteria for assessing the quality of the generated text

Preferred Skills
Interest in and knowledge of some combination of the following:
•  Programming expertise for language generation and/or computer vision
•  Human-robot or human-agent dialogue analysis and dialogue systems
•  Experimental design and applied statistics for rating and evaluation

Apply now

ARL 76 - Unlocking eye tracking-based Adaptive Human Agent Teaming in real-world contexts

Project Name
Unlocking eye tracking-based Adaptive Human Agent Teaming in real-world contexts

Project Description
Our mission is to develop opportunistic sensing systems that leverage existing data streams to inform agents and adapt to changing mission contexts. Specifically, we focus on leveraging eye tracking data streams, such as eye movements and pupil size, to classify cognitive states and strategies such as search, navigation, stress, effort, and depth of focus. Accurately classifying these states will allow an agent to adapt when needed.
We have a number of data sets that can be analyzed to develop algorithms that predict and classify various cognitive states and behaviors.

Example projects using eye tracking data:
•  Model pupil size to extract relative cognitive and non-cognitive influences
•  Characterize complex states and behaviors that emerge from human and human-agent teams
•  Predict depth of focus on an individual-by-individual basis, to develop depth-of-focus-aware software that presents objects and information in AR/VR at the right depth
•  Use the pupillary light response to infer cognitive influence (see the sketch after this list)
•  Model visual saliency in complex dynamic environments
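
As one hypothetical illustration combining the last two ideas, the sketch below simulates a pupil trace, regresses out the light-driven component, and inspects the residual for an effort-related signal:

```python
# Separate light-driven from cognitive pupil influences (all signals simulated).
import numpy as np

rng = np.random.default_rng(1)
n = 2000
luminance = rng.uniform(0, 1, n)                 # stand-in scene luminance
effort = (np.arange(n) > n // 2).astype(float)   # simulated effort onset halfway in

# Simulated pupil: constricts with light, dilates slightly with effort, plus noise.
pupil = 4.0 - 1.5 * luminance + 0.3 * effort + rng.normal(0, 0.1, n)

# Fit the light-driven component with ordinary least squares.
X = np.column_stack([np.ones(n), luminance])
beta, *_ = np.linalg.lstsq(X, pupil, rcond=None)
residual = pupil - X @ beta

# The residual should now carry the cognitive (effort-related) signal.
print("residual mean, low vs. high effort:",
      residual[effort == 0].mean(), residual[effort == 1].mean())
```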

Job Description
If the student has a particular goal or related work at their home institution, they should briefly describe this in the application letter. The scope of the work will be determined based on the skills and interests of the selected applicant, as well as the demands of the project during the time of the internship, but may include data collection, literature review, and statistical analysis.

Preferred Skills
•  Computational modeling
•  Machine learning
•  Programming and statistical analysis in Matlab, Python, or R
•  Interest in Psychophysiology, cognitive neuroscience and/or psychology
•  Signal processing and time series analysis

Apply now

ARL 77 - Machine Learning and Computer Vision/Graphics

Project Name
Machine Learning and Computer Vision/Graphics

Project Description
Machine learning (ML) algorithms require vast amounts of training data to ensure good performance. Nowadays, synthetic data is increasingly used to train cutting-edge ML algorithms. This research aims to develop an AI-driven image synthesis approach for generating high-quality synthetic data. Advanced machine learning techniques, particularly generative neural models and deep learning, will be explored for model representation and synthesis.
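
To indicate the flavor of the generative approach, below is a minimal GAN training loop on toy one-dimensional data; the architectures, data, and hyperparameters are purely illustrative:

```python
# Minimal GAN sketch: generator learns to mimic a toy 1-D data distribution.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0        # "real" data drawn from N(2, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator learns to separate real samples from generated ones.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator learns to produce samples the discriminator accepts as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print("generated mean:", G(torch.randn(1000, 8)).mean().item())  # approaches 2.0
```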

Job Description
The work includes pursuing technical solutions and developing core algorithms by applying advanced machine learning and data synthesis techniques. Anticipated research results include new theory and algorithm developments leading to publications in scientific forums, as well as real-world utility and software for demonstrations.

Preferred Skills
•  A dedicated and hardworking individual
•  Experience or coursework on machine learning, computer vision, graphics
•  Strong programming skills

Apply now

ARL 78 - Graph Neural Networks

Project Name
Lightweight Graph Neural Networks: Models and Implementation

Project Description
Graph neural networks (GNNs) operate on structural, contextual, and temporal information to produce node/link embeddings on static and dynamic graphs. These embeddings are useful for downstream applications like classification or link prediction. This project is focused on developing lightweight GNN models and optimizations for accelerating them on various hardware platforms.
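
For readers unfamiliar with GNNs, a single mean-aggregation message-passing layer in plain PyTorch shows how the node embeddings described above are produced; this is a sketch, not one of the project's lightweight models:

```python
# Minimal mean-aggregation GNN layer producing node embeddings.
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Mean of neighbor features via a row-normalized dense adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neighbor_mean = (adj @ x) / deg
        # Combine each node's own features with its aggregated neighborhood.
        return torch.relu(self.lin(torch.cat([x, neighbor_mean], dim=1)))

# Toy graph: 4 nodes with 8-dim features; the embeddings would feed downstream
# classification or link prediction heads.
x = torch.randn(4, 8)
adj = torch.tensor([[0, 1, 1, 0], [1, 0, 0, 1],
                    [1, 0, 0, 1], [0, 1, 1, 0]], dtype=torch.float)
emb = SimpleGNNLayer(8, 16)(x, adj)
print(emb.shape)  # torch.Size([4, 16])
```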

Job Description
The student will assist in developing lightweight deep graph neural network (GNN) models and implementations on various hardware platforms. This includes helping develop GNN models for both static and dynamic graphs (temporal GNNs), identifying performance bottlenecks during training and inference, and working to accelerate performance across hardware platforms.

Preferred Skills
•  PyTorch, C++, Python, Java
•  Verilog/VHDL, High-Level Synthesis (HLS)
•  Basic knowledge of Graphs and Graph Algorithms

Apply now