Trustworthy Human-AI Interaction

Encounters with AI-generated content can shape the human experience of algorithms, and more broadly the psychology of Human-AI Interaction (HAI). Providing transparency essentially means giving viewers clear and understandable information (or labels) about how content was created, and disclosing the role of AI in that process, if any. When dealing with AI-generated or AI-edited content, AI system disclosures can influence users' perceptions of media content, whether in news, health information, or other online sources. There is a growing concern that as generative AI becomes more widely used, manipulated content could easily spread false information. A key step toward mitigating these harms and risks is ensuring AI system transparency, drawing on guidelines such as those set within the European AI Act. However, even with such policy in place, much remains uncharted regarding how AI systems should be disclosed.

Our research on trustworthy human-AI interaction focuses on:

  • Multimodal AI disclosures: Prior work has shown that effective AI labels can enable viewers to immediately recognize AI's involvement, allowing them to quickly evaluate source credibility, verify the accuracy of the content, acquire contextual knowledge, and make informed decisions about the trust and authenticity of such content. We focus on how to design multimodal (text-visual-auditory) disclosures of AI-generated content across devices and displays that are transparent, informative, and minimally distracting.
  • Physiological and behavioral analysis of human-AI interaction: Advances in generative AI have shown the capacity to amplify media content generation at an accelerated rate, due to a number of factors including scale, speed, cost, and hyper-personalization. Our research aims to characterize physiological and behavioral responses to online content using a sensor-based approach.
  • Multimodal interfaces for interacting with information: We explore new methods, interfaces, and devices for interacting with content and AI agents. This ranges from haptic interfaces that allow one to feel the news, to studies of how modality affects the ability to distinguish real from fake and human- from AI-generated news, to designing and evaluating Voice User Interfaces (VUIs) for the delivery and consumption of news.

Topics

  • AI disclosures and labeling systems
  • Behavioral and physiological sensing for human-AI interaction (HAI)
  • Intelligent visualization techniques
  • Adaptive user interfaces for HAI
  • Novel user interfaces for journalism and beyond
  • User agency and autonomy in HAI

Publications

  • X. Sun, X. Tang, A. El Ali, Z. Li, P. Ren, J. de Wit, J. Pei, J. A. Bosch Rethinking the Alignment of Psychotherapy Dialogue Generation with Motivational Interviewing Strategies. In Proceedings of the 31st International Conference on Computational Linguistics, Abu Dhabi, UAE, 2025.
  • A. El Ali, K. Puttur Venkatraj, S. Morosoli, L. Naudts, N. Helberger, P. Cesar Transparent AI Disclosure Obligations: Who, What, When, Where, Why, How. In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA '24), Hawaii, USA, 2024. Article 342, pp. 1-11.
  • P. Elagroudy, J. Li, K. Väänänen, P. Lukowicz, H. Ishii, W. E. Mackay, E. F. Churchill, A. Peters, A. Oulasvirta, R. Prada, A. Diening, G. Barbareschi, A. Gruenerbl, M. Kawaguchi, A. El Ali, F. Draxler, R. Welsch, A. Schmidt Transforming HCI Research Cycles using Generative AI and “Large Whatever Models” (LWMs). In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA '24), Hawaii, USA, 2024. Article 584, pp. 1-5.
  • J. Li, H. Cao, L. Lin, Y. Hou, R. Zhu, A. El Ali User Experience Design Professionals’ Perceptions of Generative Artificial Intelligence. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), Hawaii, USA, 2024. Article 381, pp. 1-18.
  • S. Ooms, P. Cesar, A. El Ali, D. Ceolin, L. Hollink, M. Slokom, E. Pauwels, V. Robu, H. La Poutre Technological Innovation in the Media Sector: Understanding Current Practices and Unraveling Opportunities. In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA '24), Hawaii, USA, 2024. Article 533, pp. 1-7.
  • S. Ooms, M. Lee, P. Cesar, A. El Ali FeelTheNews: Augmenting Affective Perceptions of News Videos with Thermal and Vibrotactile Stimulation. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA '23), Hamburg, Germany, 2023. Article 137, pp. 1-8.
  • S. Rao, V. Resendez, A. El Ali, P. Cesar Ethical Self-Disclosing Voice User Interfaces for Delivery of News. In Proceedings of the 4th Conference on Conversational User Interfaces (CUI '22), Glasgow, UK, 2022. Article 9, pp. 1-4.
  • A. El Ali, T. Stratmann, S. Park, J. Schöning, W. Heuten, and S. Boll Measuring, Understanding, and Classifying News Media Sympathy on Twitter after Crisis Events. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), Montreal QC, Canada, 2018.