Affective Interactive Systems

We conduct research at the intersection of human-computer interaction, affective computing, and artificial intelligence. With increased computing power, shrinking hardware, ubiquitous wireless networks, and the widespread adoption of personal computing devices, we are entering a new technological era in how humans interact with (affect-aware) machines. This is made possible by embedding low-cost, low-power sensors and devices, at times personal and imperceptible, into our everyday environments, whether through bio-responsive wearables and mobile devices, emotion-aware avatars in eXtended Reality (XR), or embodied personal assistants that exhibit emotional intelligence.

A key focus of our group is on affective interactive systems, including:

  • Recognition: Emotion data acquisition, annotation, and recognition algorithms
  • Visualization: Prototypes and infrastructure for gathering, synchronizing, and visualizing user affective states across environments (mobile, wearables, XR)
  • Bio-responsiveness: Novel bio-responsive interactive systems and interaction techniques using real-time sensed physiological data (a minimal sketch of such a loop follows this list)
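
To make bio-responsiveness concrete, here is a minimal sketch of such a sensing-to-response loop: a sliding window over a physiological signal yields a simple arousal estimate that, when it crosses a threshold, triggers an interface adaptation. The names read_heart_rate and adapt_interface are hypothetical stand-ins (the sensor is simulated so the sketch runs on its own), not part of our actual infrastructure.

    # Minimal bio-responsive loop (Python): window a physiological signal,
    # compute a feature, and adapt the interface when arousal rises.
    import random
    import statistics
    from collections import deque

    WINDOW_SIZE = 30          # samples per sliding window (e.g. ~30 s at 1 Hz)
    AROUSAL_THRESHOLD = 90.0  # mean heart rate (bpm) treated as high arousal

    def read_heart_rate() -> float:
        """Hypothetical sensor read; simulated bpm values stand in for a wearable."""
        return random.gauss(80.0, 10.0)

    def adapt_interface(mean_bpm: float) -> None:
        """Hypothetical response, e.g. calming visuals in an XR scene."""
        print(f"High arousal (mean {mean_bpm:.1f} bpm): adapting interface")

    window = deque(maxlen=WINDOW_SIZE)

    for _ in range(300):  # bounded sensing loop for the demo
        window.append(read_heart_rate())
        if len(window) == WINDOW_SIZE:
            mean_bpm = statistics.fmean(window)
            if mean_bpm > AROUSAL_THRESHOLD:
                adapt_interface(mean_bpm)

In practice the hand-set feature and threshold would be replaced by trained recognition models of the kind described in the Key Publications below.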

We study handheld and wearable sensors as a paradigm for collecting and processing affective data, across a range of domains including media consumption, automotive, fashion, and immersive XR experiences. Working in realistic testing grounds and collaborating with several commercial and academic partners, we have deployed our technology and infrastructure in venues such as the National Theatre of China in Shanghai and the Amsterdam Dance Event in the Netherlands. Our overall objective is to create intelligent and empathic systems that respect user privacy while appropriately reacting to humans and their experiences.

Topics

Affective Systems

  • Multimodal emotion recognition
  • Automotive physiological sensing
  • Affect-aware avatars in XR
  • Emotion-aware fashion
  • Sensing audience engagement

Smart Textiles

  • Bio-responsive wearables
  • Self-actuating textiles

Sensing and Understanding Human Activities

  • Multimodal emotion recognition
  • Sleep monitoring
  • Sensing nightclub activity
  • Understanding urban mobility

Internet of Things

  • Igor/IOTsa

Key Publications

  • T. Xue, A. El Ali, T. Zhang, G. Ding, and P. Cesar. CEAP-360VR: A Continuous Physiological and Behavioral Emotion Annotation Dataset for 360° VR Videos. IEEE Transactions on Multimedia, 2021.
  • T. Xue, A. El Ali, G. Ding, and P. Cesar. Investigating the Relationship between Momentary Emotion Self-reports and Head and Eye Movements in HMD-based 360° VR Video Watching. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 2021.
  • T. Xue, A. El Ali, T. Zhang, G. Ding, and P. Cesar. RCEA-360VR: Real-time, Continuous Emotion Annotation in 360° VR Videos for Collecting Precise Viewport-dependent Ground Truth Labels. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021.
  • T. Zhang, A. El Ali, C. Wang, A. Hanjalic, and P. Cesar. CorrNet: Fine-Grained Emotion Recognition for Video Watching Using Wearable Physiological Sensors. Sensors, 21, 2021.
  • A. S. Gill, S. Cabrero, P. Cesar, and D. A. Shamma. AI at the Disco: Low Sample Frequency Human Activity Recognition for Night Club Experiences. In Proceedings of the 1st International Workshop on Human-centric Multimedia Analysis, Seattle, WA, USA, 2020.
  • T. Zhang, A. El Ali, C. Wang, A. Hanjalic, and P. Cesar. RCEA: Real-time, Continuous Emotion Annotation for Collecting Precise Mobile Video Ground Truth Labels. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM CHI), Honolulu, HI, USA, 2020.
  • A. El Ali, X. Yang, S. Ananthanarayan, T. Röggla, J. Jansen, J. Hartcher-O’Brien, K. Jansen, and P. Cesar. ThermalWear: Exploring Wearable On-chest Thermal Displays to Augment Voice Messages with Affect. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM CHI), Honolulu, HI, USA, 2020.
  • P. Cesar, V. Singh, R. Jain, N. Sebe, and N. Oliver. New Signals in Multimedia Systems and Applications. IEEE Multimedia (IEEE MM), 25: pp. 12-13, 2018.
  • A. El Ali, T. Stratmann, S. Park, J. Schöning, W. Heuten, and S. Boll. Measuring, Understanding, and Classifying News Media Sympathy on Twitter after Crisis Events. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), Montreal, QC, Canada, 2018.
  • C. Wang and P. Cesar. The Play Is a Hit - But How Can You Tell? In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition, Singapore, 2017. pp. 336-347.
  • S. Cabrero, J. Jansen, T. Röggla, J.A. Guerra-Gomez, D.A. Shamma, and P. Cesar. CWI-ADE2016 Dataset: Sensing nightclubs through 40 million BLE packets. In Proceedings of the ACM Multimedia Systems Conference (ACM MMSys 2017), Taipei, Taiwan, 2017. pp. 181-186.
  • C. Wang, J. Wong, X. Zhu, T. Röggla, J. Jansen, and P. Cesar. Quantifying Audience Experience in the Wild: Heuristics for Developing and Deploying a Biosensor Infrastructure in Theaters. In Proceedings of the International Workshop on Quality of Multimedia Experience (QoMEX 2016), Lisbon, Portugal, 2016. pp. 1-6.
  • C. Wang, E. Geelhoed, P. Stenton, and P. Cesar. Sensing a Live Audience. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'14), Toronto, ON, Canada, 2014. pp. 1909-1912.
  • P.S. Wen Shieng, J. Jansen, and S. Pemberton. Fine-grained Access Control Framework for Igor, a Unified Access Solution to The Internet of Things. Procedia Computer Science, 134: pp. 385-392, 2018.
