Automatic Multimodal Affect Detection for Research and Clinical Use

Researchers are developing a fully automated system for measuring affective behavior from multimodal (face, gaze, body, voice, and speech) input for research and clinical use.
DETAILS

For behavioral science, automated coding of affective behavior from multimodal (visual, acoustic, and verbal) input will give researchers powerful tools to examine basic questions in emotion, psychopathology, and interpersonal processes. For clinical use, automated measurement will help clinicians assess vulnerability and protective factors and response to treatment across a wide range of disorders. This project is a subaward with the University of Pittsburgh, funded by the National Institute of Mental Health (NIMH).

PROJECT PERIOD

8/1/17 - 4/30/22

FUNDING AGENCY

National Institute of Mental Health (NIMH)