
Human-Computer Interaction in the Netherlands: DIS at CHI2021 and Revitalising CHI Nederland

Publication date: 2021-04-23

Research carried out by the Distributed and Interactive Systems (DIS) group from Centrum Wiskunde & Informatica (CWI) has resulted in several contributions to this year’s ACM CHI Conference on Human Factors in Computing Systems (CHI 2021). CHI is the flagship conference of ACM SIGCHI, the premier international society for professionals, academics, and students interested in technology and human-computer interaction. This year, the conference has transitioned into a fully virtual event. Below we highlight the work we will present: four papers, four workshops, and one late-breaking work (LBW).

We furthermore announce the reinstatement of CHI Nederland (CHI NL) (Twitter: https://twitter.com/chinl; official website under construction), an organization that aims to connect, support, and represent the Human-Computer Interaction community in the Netherlands. Our colleague Abdallah El Ali is part of the core group that has renewed the organization, and now serves as a board member in the role of co-treasurer. Historically, CHI NL has played a role in promoting Human-Computer Interaction research and its application in the Netherlands. In revitalizing CHI NL, the new board has taken the opportunity to ‘rethink’ the vision and mission of CHI NL. The board began with a six-month inception period, during which members and outsiders were invited to share their ideas. Members can (still) express their interest in being active in CHI NL.

Summary of CHI 2021 Works

Papers

(1) Yanni Mei, Jie Li, Huib de Ridder, and Pablo Cesar, “CakeVR: A Social Virtual Reality (VR) Tool for Co-designing Cakes,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM CHI 2021), Yokohama, Japan, May 8–13, 2021.

This work titled “CakeVR: A Social Virtual Reality (VR) Tool for Co-designing Cakes” presents the design, implementation, and expert evaluation of a social VR application (CakeVR) that allows a client to remotely co-design cakes with a pastry chef, through real-time realistic 3D visualizations. Cake customization services allow clients to collaboratively personalize cakes with pastry chefs. However, remote (e.g., email) and in-person co-design sessions are prone to miscommunication, due to natural restrictions in visualizing cake size, decoration, and celebration context. We therefore explore the potential of social VR as the communication tool for this use case. We start with semi-structured expert interviews (4 clients, 5 pastry chefs); based on the results, we distill and incorporate eight design requirements into our CakeVR prototype. We evaluate CakeVR with 10 experts (6 clients, 4 pastry chefs) using cognitive walkthroughs, and find that it supports ideation and decision making through intuitive size manipulation, color/flavor selection, decoration design, and custom celebration theme fitting. Our findings provide recommendations for enabling co-design in social VR and highlight CakeVR’s potential to transform product design communication through remote, interactive, and immersive co-design.

(2) Julie R. Williamson, Jie Li, David A. Shamma, Vinoba Vinayagamoorthy, and Pablo Cesar, “Understanding User Proxemics and Social Formations in an Instrumented Virtual Reality Workshop,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM CHI 2021), Yokohama, Japan, May 8–13, 2021.

This work titled “Understanding User Proxemics and Social Formations in an Instrumented Virtual Reality Workshop” describes an academic workshop we conducted to facilitate a range of typical workshop activities, using a custom instrumented build of Mozilla Hubs to measure position and orientation. We analysed social interactions during a keynote, small group breakouts, and informal networking/hallway conversations. Our mixed-methods approach combined environment logging, observations, and semi-structured interviews. The results demonstrate how small and large spaces influenced group formation, shared attention, and personal space: smaller rooms facilitated more cohesive groups, while larger rooms made small group formation challenging but personal space more flexible. Beyond our findings, we show how the combination of data and insights can fuel the design of collaborative spaces and deliver more effective virtual workshops.
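To give a flavor of the kind of proxemics analysis such position logs enable, here is a minimal sketch that clusters avatars into conversational groups by interpersonal distance. The log format, the threshold value, and the clustering rule are illustrative assumptions for this post, not the authors’ actual pipeline.

```python
# Hypothetical sketch (not the authors' pipeline): cluster avatars into
# conversational groups from one snapshot of logged floor positions.
from itertools import combinations
import math

# Assumed log snapshot: avatar name -> (x, y) floor position in metres.
positions = {
    "alice": (1.0, 2.0),
    "bob": (1.6, 2.3),
    "carol": (5.0, 4.0),
    "dave": (5.4, 4.2),
}

GROUP_RADIUS = 1.5  # assumed conversational-distance threshold in metres


def distance(p, q):
    """Euclidean distance between two 2D floor positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])


def group_formations(positions, radius=GROUP_RADIUS):
    """Group avatars by connected components: any pair closer than
    `radius` is linked, and linked avatars share a group."""
    parent = {name: name for name in positions}

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in combinations(positions, 2):
        if distance(positions[a], positions[b]) <= radius:
            parent[find(a)] = find(b)

    groups = {}
    for name in positions:
        groups.setdefault(find(name), set()).add(name)
    return list(groups.values())


print(group_formations(positions))  # e.g. [{'alice', 'bob'}, {'carol', 'dave'}]
```

Running the same grouping over every logged snapshot would yield group formations over time, which could then be compared across room sizes as the paper does qualitatively.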

(3) A. Striner, A. Webb, J. Hammer, and A. Cook, “Mapping Design Spaces for Audience Participation in Game Live Streaming,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM CHI 2021), Yokohama, Japan, May 8–13, 2021.

This work titled “Mapping Design Spaces for Audience Participation in Game Live Streaming” introduces and validates a theme map of audience participation in game live streaming for student designers. The map is a lens that reveals relationships among the themes and sub-themes of Agency, Pacing, and Community, supporting designers in exploring, reflecting upon, describing, and making sense of emerging, complex design spaces. We are the first to articulate such a lens and to provide a reflective tool to support future research and education. To create the map, we perform a thematic analysis of design process documents from a course on audience participation for Twitch, using this analysis to visually coordinate relationships between important themes. To help student designers analyze and reflect on existing experiences, we supplement the theme map with a set of mapping procedures. We validate the applicability of our map with a second set of student designers, who found the map useful as a comparative and reflective tool.

(4) Tong Xue, Abdallah El Ali, Tianyi Zhang, Gangyi Ding, and Pablo Cesar, “RCEA-360VR: Real-time, Continuous Emotion Annotation in 360 VR Videos for Collecting Precise Viewport-dependent Ground Truth Labels,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM CHI 2021), Yokohama, Japan, May 8–13, 2021.

This work titled “RCEA-360VR: Real-time, Continuous Emotion Annotation in 360 VR Videos for Collecting Precise Viewport-dependent Ground Truth Labels” investigates techniques for collecting precise emotion ground truth labels during 360° virtual reality (VR) video watching. Such labels are essential for fine-grained predictions, where one has to consider varying viewing behavior. However, current annotation techniques either rely on post-stimulus discrete self-reports, or on real-time, continuous emotion annotation (RCEA) designed only for desktop/mobile settings. We present RCEA for 360° VR videos (RCEA-360VR), where we evaluate in a controlled study (N=32) the usability of two peripheral visualization techniques: HaloLight and DotSize. We furthermore develop a method that considers head movements when fusing labels. Using physiological, behavioral, and subjective measures, we show that (1) both techniques do not increase users’ workload or sickness, nor do they break presence; (2) our continuous valence and arousal annotations are consistent with discrete within-VR and original stimuli ratings; and (3) users exhibit high similarity in viewing behavior, where fused ratings perfectly align with intended labels.
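The paper’s fusion method is more involved than we can show here; as a rough illustration of the idea of viewport-dependent fusion, the sketch below weights each user’s momentary (valence, arousal) annotation by how close their head yaw is to a region of interest in the 360° frame. The field of view, the linear weighting, and all names are assumptions of this sketch, not the method from the paper.

```python
# Hypothetical sketch of viewport-dependent label fusion: an annotation
# only counts toward a region of interest (ROI) if the user's assumed
# viewport covered it, with weight falling off toward the viewport edge.
import math

FOV = 90.0  # assumed horizontal field of view of the HMD viewport (degrees)


def angular_offset(yaw, roi_yaw):
    """Smallest absolute angle (deg) between head yaw and the ROI centre."""
    d = (yaw - roi_yaw + 180.0) % 360.0 - 180.0
    return abs(d)


def fuse_labels(samples, roi_yaw):
    """Fuse one time bin of annotations across users.
    samples: list of (valence, arousal, head_yaw_deg), one per user."""
    num_v = num_a = den = 0.0
    for valence, arousal, yaw in samples:
        offset = angular_offset(yaw, roi_yaw)
        if offset <= FOV / 2:
            w = 1.0 - offset / (FOV / 2)  # 1 at viewport centre, 0 at edge
            num_v += w * valence
            num_a += w * arousal
            den += w
    if den == 0.0:
        return None  # nobody had the ROI in view during this time bin
    return num_v / den, num_a / den


# Three users annotate while facing different directions; ROI centred at 10°.
print(fuse_labels([(0.8, 0.6, 5.0), (0.4, 0.2, 40.0), (-0.5, 0.1, 200.0)],
                  roi_yaw=10.0))  # third user faces away, so is excluded
```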

Workshops

(1) Jie Li, Vinoba Vinayagamoorthy, Julie R. Williamson, David A. Shamma, and Pablo Cesar, “Social VR: A New Medium for Remote Communication & Collaboration,” in Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (ACM CHI 2021), May 8–13, 2021, Yokohama, Japan.

Our workshop titled “Social VR: A New Medium for Remote Communication & Collaboration” is a continuation of the successful CHI 2020 Social VR workshop, which was held virtually on Mozilla Hubs in 2020. We will organize this CHI 2021 virtual workshop on Mozilla Hubs again, continuing the discussion about proxemics, social cues, and virtual environment design, which were identified as important aspects of social VR communication in our CHI 2020 workshop.

(2) B. Ryskeldiev, Y. Ochiai, K. Kusano, J. Li, K. Kunze, M.H.D.Y. Saraiji, M. Billinghurst, S. Nanayakkara, Y. Sugano, and T. Honda, “Immersive Inclusivity at CHI: Design and Creation of Inclusive User Interactions Through Immersive Media,” in Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (ACM CHI 2021), May 8–13, 2021, Yokohama, Japan.

Our workshop titled “Immersive Inclusivity at CHI: Design and Creation of Inclusive User Interactions Through Immersive Media” aims to create a discussion platform at the intersection of immersive media, accessibility, and human-computer interaction; to outline the key current and future problems of immersive inclusive design; and to define a set of methodologies for the design and evaluation of immersive systems from an inclusivity perspective.

(3) A. El Ali, M. Perusquía-Hernández, M. Hassib, Y. Abdelrahman, and J. Newn, “MEEC: Second Workshop on Momentary Emotion Elicitation and Capture,” in Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (ACM CHI 2021), May 8–13, 2021, Yokohama, Japan.

Our workshop titled “MEEC: Second Workshop on Momentary Emotion Elicitation and Capture” deals with recognizing human emotions and responding appropriately, and with how this has the potential to radically change the way we interact with technology. However, to train machines to sensibly detect and recognize human emotions, we need valid emotion ground truths. A fundamental challenge here is momentary emotion elicitation and capture (MEEC) from individuals continuously and in real time, without adversely affecting the user experience or breaching ethical standards. In this virtual half-day CHI 2021 workshop, we will (1) have participant talks and an inspirational keynote presentation; (2) ideate elicitation, sensing, and annotation techniques; and (3) create mappings of when to apply an elicitation method.

(4) K. Daher, M. Capallera, C. Lucifora, J. Casas, Q. Meteier, M. El Kamali, A. El Ali, G. Mario Grosso, G. Chollet, O. Abou Khaled, and E. Mugellini, “Empathic interactions in automated vehicles #EmpathicCHI,” in Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (ACM CHI 2021), May 8–13, 2021, Yokohama, Japan.

Our workshop titled “Empathic interactions in automated vehicles #EmpathicCHI” investigates the use of emotional conversational agents in the automotive context to build a solid relationship between the driver and the vehicle. In this workshop, we aim to gather researchers and industry practitioners from different fields of HCI, ML/AI, and NLU to brainstorm about affective machines, empathy, and conversational agents, with a special focus on human-vehicle interaction. The workshop will address questions such as: what are the specificities of a multimodal and empathetic agent in a car? How could the agent make the driver aware of the situation? And how can trust between the user and the autonomous vehicle be measured?

Late-Breaking Works

(1) Tong Xue, Abdallah El Ali, Gangyi Ding, and Pablo Cesar, “Investigating the Relationship between Momentary Emotion Self-reports and Head and Eye Movements in HMD-based 360° VR Video Watching,” in Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (ACM CHI 2021), May 8–13, 2021, Yokohama, Japan.

Our LBW paper titled “Investigating the Relationship between Momentary Emotion Self-reports and Head and Eye Movements in HMD-based 360° VR Video Watching” investigates how inferring emotions from Head Movement (HM) and Eye Movement (EM) data in 360° Virtual Reality (VR) can enable a low-cost means of improving users’ Quality of Experience. Correlations have previously been shown between retrospective emotions and HM, as well as EM, when tested with static 360° images. In this early work, we investigate the relationship between momentary emotion self-reports and HM/EM in HMD-based 360° VR video watching. We draw on HM/EM data from a controlled study (N=32) in which participants watched eight 1-minute 360° emotion-inducing video clips and annotated their valence and arousal levels continuously in real time. We analyzed HM/EM features across fine-grained emotion labels from video segments of varying lengths (5-60 s), and found significant correlations of HM rotation data, as well as some EM features, with valence and arousal ratings. We show that fine-grained emotion labels provide greater insight into how HM/EM relate to emotions during HMD-based 360° VR video watching.
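As a concrete illustration of this kind of segment-wise analysis, the sketch below correlates one head-movement feature (yaw angular speed) with a continuous valence trace per fixed-length segment. The sampling rate, segment length, feature choice, and use of Pearson correlation are illustrative assumptions, not the study’s exact method.

```python
# Hypothetical sketch: per-segment correlation between a head-movement
# feature and a synchronized continuous valence trace.
import math


def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0


def segment_correlations(yaw, valence, rate_hz=10, seg_s=5):
    """Split synchronized yaw (deg) and valence traces into seg_s-second
    segments and correlate |yaw angular speed| with valence in each one."""
    speed = [abs(b - a) * rate_hz for a, b in zip(yaw, yaw[1:])]  # deg/s
    valence = valence[1:]  # align valence samples with the speed series
    n = rate_hz * seg_s
    return [pearson(speed[i:i + n], valence[i:i + n])
            for i in range(0, len(speed) - n + 1, n)]


# Toy traces sampled at 10 Hz; yields one correlation per 5 s segment.
yaw = [math.sin(t / 10.0) * 30 for t in range(101)]
val = [0.5 * math.cos(t / 10.0) for t in range(101)]
print(segment_correlations(yaw, val))
```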
