ARL 42 – Research Assistant, Deep Learning Models for Human Activity Recognition Using Real and Synthetic Data

Project Name
Human Activity Recognition Using Real and Synthetic Data

Project Description
In the near future, humans and autonomous robotic agents – e.g., unmanned ground and air vehicles – will have to work together, effectively and efficiently, in vast, dynamic, and potentially dangerous environments. In these operating environments, it is critical that (a) the Warfighter is able to communicate in a natural and efficient way with these next-generation combat vehicles, and (b) the autonomous vehicle is able to understand the activities that friendly or enemy units are engaged in. Recent years have thus seen increasing interest in teaching autonomous agents to recognize human activity, including gestures. Deep learning models have gained popularity in this domain because they implicitly learn the hierarchical structure of activities and generalize beyond the training data. However, deep models require vast amounts of labeled data, whose collection is costly, time-consuming, and error-prone, and which may raise ethical concerns. Here we will look to synthetic data to overcome these limitations and to address activity recognition in Army-relevant outdoor, unconstrained, and populated environments.

Job Description
The candidate will implement TensorFlow deep learning models for human activity recognition – e.g., 3D convolutional networks, I3D – that can be trained using both real human gesture data and synthetic gesture data (generated using an existing simulator). Knowledge of domain transfer techniques (e.g., GANs) may be useful. The candidate will be expected to research and demonstrate a solution to this problem.
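To give a flavor of the kind of model involved, below is a minimal sketch of a 3D convolutional network for clip-level activity recognition in TensorFlow/Keras. It is illustrative only, not the project's actual model: the input shape, layer sizes, and number of gesture classes are placeholder assumptions.

```python
# Minimal illustrative 3D-CNN for activity recognition in TensorFlow/Keras.
# Inputs are short video clips: (frames, height, width, channels).
# All sizes here are placeholder assumptions, not the project's real settings.
import tensorflow as tf

NUM_CLASSES = 10  # hypothetical number of gesture classes


def build_3d_cnn(input_shape=(16, 112, 112, 3), num_classes=NUM_CLASSES):
    """Stack of Conv3D blocks, global pooling, and a softmax head."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv3D(32, kernel_size=3, padding="same",
                               activation="relu"),
        tf.keras.layers.MaxPool3D(pool_size=(1, 2, 2)),  # pool space, keep time
        tf.keras.layers.Conv3D(64, kernel_size=3, padding="same",
                               activation="relu"),
        tf.keras.layers.MaxPool3D(pool_size=2),          # pool space and time
        tf.keras.layers.GlobalAveragePooling3D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])


model = build_3d_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

In practice, such a model could be pre-trained on abundant synthetic clips and then fine-tuned on the smaller real dataset; I3D additionally inflates 2D filters pre-trained on images into 3D.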

Preferred Skills
– Experience with deep learning models for human activity recognition
– Experience with Python and TensorFlow
– Independent thinking and good communication skills

Apply now.