Deep Neural Networks for Multimodal Expression Recognition in eHealth Applications

General presentation of the topic: Chronic diseases such as cancer and heart disease are a major burden in Canada and worldwide. These diseases are driven largely by poor health behaviours, such as physical inactivity and an unhealthy diet. Interventions that address a person’s ambivalence and hesitancy tend to be effective at changing behaviour. Online, or eHealth, behaviour change interventions are becoming increasingly popular, but because they cannot perceive a person’s emotional state in the moment, they are less successful than they could be. Part of the problem is that people often express ambivalence and hesitancy in subtle non-verbal ways, and these expressions vary significantly from one person to the next. Recent advances in AI, together with the fact that most digital devices have cameras and microphones, provide an excellent opportunity to measure ambivalence and hesitancy (e.g., distress and motivation) in eHealth. Through our ongoing eHealth behaviour change program, we are recording videos of people performing tasks designed to elicit ambivalence and hesitancy.

Objectives: In this project, we will develop and evaluate deep learning (DL) models for the recognition of expressions linked to ambivalence and hesitancy in healthcare applications, with a particular focus on methods for domain adaptation of DL models and for fusion of the textual, facial, and vocal modalities extracted from videos. With DL models that can accurately measure ambivalence and hesitancy, we will be able to adapt our eHealth behaviour change program to an individual’s specific needs. These DL models will be designed to address several key challenges of automatic expression recognition (AER) in real-world eHealth applications: designing cost-effective DL models from multiple modalities using weakly-labeled video datasets, under variability within and between individuals (e.g., sex, ethnicity, race, age, cultural background) and capture conditions (e.g., sensors, devices, pose, lighting).
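To illustrate what fusing the textual, facial, and vocal modalities can look like, here is a minimal sketch of weighted score-level (late) fusion, where each modality’s classifier outputs a posterior distribution over expression classes and the fused decision combines them. The class names, scores, and weights are hypothetical placeholders, not the project’s actual models or data; the project itself will develop learned DL fusion, of which this fixed-weight average is only the simplest baseline.

```python
import numpy as np

# Hypothetical posterior scores from three per-modality classifiers for
# one video clip, over two illustrative classes: [ambivalent, neutral].
text_scores = np.array([0.70, 0.30])
face_scores = np.array([0.55, 0.45])
voice_scores = np.array([0.60, 0.40])

def late_fusion(modality_scores, weights):
    """Weighted score-level (late) fusion of per-modality posteriors."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                  # normalize weights
    fused = sum(w * s for w, s in zip(weights, modality_scores))
    return fused / fused.sum()                         # renormalize to a distribution

# Placeholder weights; in practice they would be learned or validated.
fused = late_fusion([text_scores, face_scores, voice_scores],
                    weights=[0.5, 0.3, 0.2])
label = ["ambivalent", "neutral"][int(np.argmax(fused))]
```

In a learned DL system, the fixed weights above would typically be replaced by an attention or gating network operating on per-modality embeddings, which lets the model down-weight an unreliable modality (e.g., a noisy audio track) per sample.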

This project will lead to the construction of the first affective computing system capable of reliably recognizing human ambivalence and hesitancy, and to its use in a more efficient, personalized, and cost-effective eHealth behaviour change intervention, which should ultimately improve health and reduce the burden of chronic diseases. These models could also have other applications within healthcare (e.g., fatigue detection, pain localization) and in non-healthcare areas such as gaming, human resources (e-interviews), e-learning, and the legal system (e-interrogations, e-testimony).

Required knowledge

Expected ability of the student:
• Strong academic record in computer science, applied mathematics, or electrical engineering, preferably with expertise in one or more areas: machine learning, computer vision, pattern recognition, artificial intelligence.
• Good programming skills in languages such as C, C++, and Python. Knowledge of deep learning frameworks would be a plus.

Desired program of studies

Masters with project, Doctorate, Postdoctoral studies

Research domains

Intelligent and Autonomous Systems, Health Technologies

Additional information

Starting: Immediately

Contact: Eric Granger, LIVIA, ILLS-CNRS