
Master's Thesis Topics 2023-2024 at CWI DIS

Publication date: 2023-09-11

The Distributed & Interactive Systems group at CWI has new open positions for motivated students who would like to work on their Master's thesis as an internship in the group. Topics include human-computer interaction, artificial intelligence, cognitive (neuro-)science, and/or interaction design. Keep reading for more information about research topics, requirements, and contact information.

How to apply:

  • Send your application to Pablo Cesar (P.S.Cesar@cwi.nl) and the responsible person
  • In your email, please include: (a) your recent CV, (b) your current academic transcripts, and (c) a brief motivation for why you want to work on a given or self-defined topic
  • Note: a stipend is possible provided your current GPA is greater than or equal to 8

Further info:


Breath Synchronization with Humans and Machines

Responsible person: Abdallah El Ali (aea@cwi.nl)
https://abdoelali.com/

Description

This project will explore breathing synchronization between humans and machines for emotion self-regulation. More details upon request.

Skills

  • Required: Hardware prototyping; interaction design; HCI research methods; quantitative and qualitative analysis
  • Recommended: Interest in physiological signals

Remote Audience Engagement with news using behavioral and physiological sensors

Responsible person: Abdallah El Ali (aea@cwi.nl)
https://abdoelali.com/

Description

This project explores remote or in-situ sensing of audience engagement using a range of behavioral and physiological sensors. The research will focus primarily on objective measures of human engagement (e.g., using computer vision for head tracking) to infer engagement with the news. This topic is a collaboration with the AI, Media, Democracy lab (https://www.aim4dem.nl/). More details upon request.
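The description above names computer vision for head tracking as one objective engagement measure. As a first flavor of that pipeline, the sketch below uses OpenCV's bundled Haar frontal-face detector and treats the fraction of frames with a detected frontal face as a very coarse "facing the content" proxy; the function name, parameters, and thresholds are ours, not the project's, and real head tracking would use a proper pose estimator.

```python
# Crude engagement proxy: fraction of frames in which a frontal face is
# detected, i.e., the viewer is roughly facing the screen showing the news.
# Assumes opencv-python is installed; video_source=0 is the default webcam.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def facing_ratio(video_source=0, max_frames=300):
    cap = cv2.VideoCapture(video_source)
    seen = facing = 0
    while seen < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        facing += int(len(faces) > 0)
        seen += 1
    cap.release()
    return facing / seen if seen else 0.0

if __name__ == "__main__":
    print(f"Frontal-face ratio: {facing_ratio():.2f}")
```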

Skills

  • Required: computer vision; social signal processing; HCI research methods; quantitative and qualitative analysis
  • Recommended: Interest in physiological and behavioral sensing

Extended Reality (XR); Machine Learning; Navigation Prediction and Analysis

Responsible person: Silvia Rossi (s.rossi@cwi.nl)
https://www.silviarossi.nl/

Description

Immersive reality technologies, such as Virtual Reality (VR) and Extended Reality (XR) at large, have opened the way to a new era of user-centric systems, in which every aspect of the coding–delivery–rendering chain is tailored to how users interact. However, to fully realize the potential of XR systems under current network limitations, we need to optimize the system around the final user. This raises the complex problem of effectively modelling and understanding how users perceive and interact with XR spaces [1,2]. Within this framework, the student joining our group will work on machine learning/deep learning strategies to analyse and predict users' navigation trajectories within a 3-DoF VR space (e.g., 360-degree video content) [3], with the possibility of extending the research to the more challenging 6-DoF setting (e.g., volumetric video content) [4].
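To make the prediction task concrete, the sketch below implements the simplest 3-DoF baseline a learned model could be compared against: linear extrapolation of the most recent angular head velocity, with yaw wraparound handled explicitly. This is only an assumed reference point in the spirit of the baselines discussed in [3]; all names here are ours.

```python
# Minimal 3-DoF navigation-prediction baseline: extrapolate the last observed
# angular velocity of the head linearly into the future. Deep models (cf. [3])
# are the actual research target; this only illustrates the task's shape.
import numpy as np

def wrap_angle(a):
    """Wrap angles to [-180, 180) degrees (yaw crosses the +/-180 seam)."""
    return (a + 180.0) % 360.0 - 180.0

def predict_linear(history, horizon):
    """history: (T, 2) array of [yaw, pitch] samples in degrees.
    Returns a (horizon, 2) array of predicted [yaw, pitch]."""
    velocity = wrap_angle(history[-1] - history[-2])   # degrees per sample
    steps = np.arange(1, horizon + 1)[:, None]         # (horizon, 1)
    pred = history[-1] + steps * velocity
    pred[:, 0] = wrap_angle(pred[:, 0])                # yaw wraps around
    pred[:, 1] = np.clip(pred[:, 1], -90.0, 90.0)      # pitch is bounded
    return pred

# Toy usage: a viewer panning right at ~3 degrees per sample.
hist = np.array([[170.0, 0.0], [173.0, 0.5], [176.0, 1.0]])
print(predict_linear(hist, horizon=3))  # yaw wraps past +180 to the negative side
```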

Skills

Good programming skills (preferably Python or MATLAB); prior knowledge of classical machine learning models (e.g., clustering techniques, linear regression) and/or deep learning models (e.g., CNNs, RNNs, Bayesian networks); prior knowledge of Virtual Reality applications is optional.

References

  • S. Rossi, A. Guedes, and L. Toni. “Streaming and user behaviour in omnidirectional videos.” In Immersive Video Technologies (pp. 49-83). Academic Press. Available at: https://discovery.ucl.ac.uk/id/eprint/10158036/1/2021_chapter_ODV.pdf
  • S. Rossi, I. Viola, L. Toni, and P. Cesar, “A new Challenge: Behavioural analysis of 6-DoF user when consuming immersive media” In Proceedings of IEEE International Conference on Image Processing (ICIP), 2021. DOI: 10.1109/ICIP42928.2021.9506525
  • M. F. R. Rondón, L. Sassatelli, R. Aparicio-Pardo, and F. Precioso, “TRACK: A New Method From a Re-Examination of Deep Architectures for Head Motion Prediction in 360° Videos,” IEEE Transactions on Pattern Analysis and Machine Intelligence (2021). DOI: 10.1109/TPAMI.2021.3070520
  • G. K. Illahi, A. Vaishnav, T. Kämäräinen, M. Siekkinen, and M. Di Francesco. “Learning to Predict Head Pose in Remotely-Rendered Virtual Reality”. In Proceedings of the 14th Conference on ACM Multimedia Systems (MMSys), 2023. DOI: 10.1145/3587819.3590972

Interaction Design; Human-Computer Interaction

Responsible person: Moonisa Ahsan (moonisa.ahsan@cwi.nl)
https://www.imoonisa.com

Topics

  • User Experience Design in XR Spaces 
  • Requirement Gathering, Validation, Assessment and Evaluation 
  • Annotating Cultural Heritage Artifacts 
  • Data Gathering, Content Synthesis, Multimodal Design, Interactive Storytelling, Data Visualization, User Experience (UX) Design, Multimodal Information Integration and Presentation. 

Skills

  • Ability to gather information from various sources, such as text, visuals, drawings, sketches, focus groups, interviews, etc. 
  • Ability to translate literature and information into tangible illustrations, sketches, or drawings.

Multimedia Systems; Signal Processing; Computer Graphics; Machine Learning

Responsible person: Irene Viola (irene@cwi.nl)
https://www.ireneviola.com

Topics

  • Volumetric video acquisition: create a dataset for volumetric video, using our acquisition system [1].
  • Volumetric video processing: create and validate algorithms for improving the quality of volumetric video contents (smoothing, hole-filling, inpainting, super-resolution, …)
  • Temporal interpolation for volumetric video: create and validate algorithms for increasing the temporal resolution of volumetric videos [2] (see the sketch after this list)
  • Compression solutions for volumetric video: create (real-time) compression algorithms for optimal transmission of volumetric contents, to be integrated in our system [3]
  • Adaptive streaming strategies for eXtended Reality (XR) systems: design and validate adaptive streaming strategies for XR systems [4]
  • Perceptual models for XR: create and validate algorithms that can predict the visual quality and perception of 3D contents in XR environments [5]
  • Quality of Experience for XR systems: design and test methodologies for subjectively evaluating the quality of experience in XR systems [6]
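To give a feel for the temporal-interpolation topic above, the sketch below synthesizes an in-between point-cloud frame by matching each point to its nearest neighbour in the next frame and linearly blending the matched positions. This is only an illustration of the problem setup; learned approaches such as [2] are the actual research direction, and all names and data here are ours.

```python
# Naive temporal interpolation between two point-cloud frames: match each
# point in frame A to its nearest neighbour in frame B, then linearly blend
# matched positions to synthesize an in-between frame.
import numpy as np
from scipy.spatial import cKDTree

def interpolate_frames(frame_a, frame_b, t=0.5):
    """frame_a: (N, 3), frame_b: (M, 3) xyz arrays; t in [0, 1]."""
    tree = cKDTree(frame_b)
    _, idx = tree.query(frame_a)      # nearest neighbour of each A-point in B
    matched_b = frame_b[idx]
    return (1.0 - t) * frame_a + t * matched_b

# Toy usage: a regular grid of points shifted slightly along x between frames.
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 5)] * 3), axis=-1).reshape(-1, 3)
shifted = grid + np.array([0.01, 0.0, 0.0])
mid = interpolate_frames(grid, shifted, t=0.5)
print(np.allclose(mid, grid + [0.005, 0.0, 0.0]))  # True for this rigid shift
```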

Skills

Required: programming skills (depending on the topic: MATLAB, Python, Unity, …); image and video processing. Topic-specific: machine learning; networks.

References

  • Reimat, I., Alexiou, E., Jansen, J., Viola, I., Subramanyam, S., and Cesar, P., 2021. CWIPC-SXR: Point Cloud dynamic human dataset for Social XR. In Proceedings of the 12th ACM Multimedia Systems Conference (pp. 300-306).
  • Viola, I., Mulder, J., De Simone, F. and Cesar, P., 2019, December. Temporal interpolation of dynamic digital humans using convolutional neural networks. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR) (pp. 90-97). IEEE.
  • Viola, I., Jansen, J., Subramanyam, S., Reimat, I. and Cesar, P., 2023. VR2Gather: A collaborative social VR system for adaptive multi-party real-time communication. IEEE MultiMedia.
  • Subramanyam, S., Viola, I., Jansen, J., Alexiou, E., Hanjalic, A. and Cesar, P., 2022, October. Evaluating the Impact of Tiled User-Adaptive Real-Time Point Cloud Streaming on VR Remote Communication. In Proceedings of the 30th ACM International Conference on Multimedia (pp. 3094-3103).
  • Alexiou, E., Nehmé, Y., Zerman, E., Viola, I., Lavoué, G., Ak, A., Smolic, A., Le Callet, P. and Cesar, P., 2023. Subjective and objective quality assessment for volumetric video. In Immersive Video Technologies (pp. 501-552). Academic Press.
  • Viola, I., Subramanyam, S., Li, J. and Cesar, P., 2022. On the impact of VR assessment on the quality of experience of highly realistic digital humans: A volumetric video case study. Quality and User Experience, 7(1), p.3.

Exploring wearable haptic interfaces for biosignal visualization in (social) VR

Responsible person: Abdallah El Ali (aea@cwi.nl)
https://abdoelali.com/

Description

Haptic stimulation is an intrinsic aspect of sensory and perceptual experience, and is tied to several facets of experience, including cognitive, emotional, and social phenomena. The capability of haptic stimuli to evoke emotions has been demonstrated both in isolation and as an augmentation of media. This project will build on our prior work on visualizing biosignals (Lee et al., 2022; El Ali et al., 2023) and on exploring virtual agent biosignals through haptic displays (cf. El Ali et al., 2020), to create new forms of social experiences in social VR that leverage physiological signals and body-based actuation.

Skills

  • Required: Information visualization (sketching + prototyping); biosensors (e.g., HR, EDA, EMG); HCI research methods; quantitative and qualitative analysis; statistics
  • Recommended: Hardware prototyping (e.g., Arduino), fabrication, thermal, vibrotactile, and/or multimodal output

Investigating immersion and presence in 360° Videos using grip force analysis

Responsible person: Abdallah El Ali (aea@cwi.nl) and Ashutosh Singla (Ashutosh.Singla@cwi.nl)
https://abdoelali.com/ and https://scholar.google.co.in/citations?user=iI1PSjkAAAAJ&hl=en

Description

With the increasing availability of head-mounted displays (HMDs) and wearable technology that enable immersive media experiences, the way videos are delivered and consumed is shifting, as reflected in the rising demand for 360° videos and the growing interest in immersion and presence in virtual reality (VR). This thesis aims to identify the point in time at which users feel immersed and forget about the real environment while watching 360° videos with an HMD.

The thesis incorporates a simple task in which users hold a ball in their hand while viewing 360° videos. Throughout the viewing session, grip force or pressure is measured continuously using EMG and acceleration/gyroscope sensors. Analysing how grip strength changes over time provides insights into the user's sense of immersion and presence, and the findings contribute to understanding the psychological and physiological aspects of user experience in immersive VR environments.

The thesis focuses on hardware prototyping, with a particular emphasis on pressure sensing: conductive yarn and various sensors will be used to develop a ball that can measure grip force or pressure continuously. The experimental design, test setup, and selection of the 360° videos need to be discussed with the supervisors; the subjective test may include pre- or post-experiment questionnaires to capture user feedback.
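A standard first step for the grip-force analysis is to turn the raw EMG into a smooth amplitude envelope: band-pass filter to the surface-EMG band, full-wave rectify, then low-pass. The sketch below shows that pipeline; the sampling rate and cutoff values are assumptions for illustration, not the project's actual sensor parameters.

```python
# EMG amplitude envelope as a grip-force proxy: band-pass, rectify, low-pass.
# Sampling rate and cutoffs below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # Hz, assumed EMG sampling rate

def emg_envelope(emg, fs=FS, band=(20.0, 450.0), env_cutoff=4.0):
    nyq = fs / 2.0
    # 1) Band-pass: keep typical surface-EMG frequency content.
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, emg)
    # 2) Full-wave rectification.
    rectified = np.abs(filtered)
    # 3) Low-pass the rectified signal to get a slow amplitude envelope.
    b, a = butter(4, env_cutoff / nyq, btype="low")
    return filtfilt(b, a, rectified)

# Toy usage: stronger noise after t = 5 s stands in for a firmer grip.
t = np.arange(0, 10, 1 / FS)
emg = 0.05 * np.random.randn(t.size)
emg[t > 5] += 0.5 * np.random.randn(int((t > 5).sum()))
envelope = emg_envelope(emg)
print("mean envelope before/after 5 s:",
      envelope[t <= 5].mean(), envelope[t > 5].mean())
```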

Skills

  • Required: Arduino (physical prototyping), soldering, electronics, EMG sensing; interaction design; Unity/C#; controlled experiment design and user studies; statistics
  • Recommended: Interest in physiological signals; HCI research methods; qualitative analysis

User perceptions of human and AI news using voice assistants

Responsible person: Abdallah El Ali (aea@cwi.nl)
https://abdoelali.com/

Description

This project explores user perceptions of human and AI-generated news delivered using a range of voice assistants. This topic is a collaboration with the AI, Media, Democracy lab (https://www.aim4dem.nl/). More details upon request.

Skills

  • Required: Interaction design; HCI research methods; quantitative and qualitative analysis
  • Recommended: Interest in journalism; psychophysiology; synthetic speech technology

Creating a biosignal card deck for designing biofeedback experiences

Responsible person: Abdallah El Ali (aea@cwi.nl)
https://abdoelali.com/

Description

Card decks can help explore specific design and system development problems more effectively by supporting iterative design exploration. Some notable past card decks are the PLEX cards, the IDEA method cards, and the Happiness Deck. In this project, you will create a card deck focused on biofeedback experiences, spanning input, output, device, environment, and the ethical considerations of using such bioresponsive systems. This would help future researchers, designers, and practitioners ideate more effectively when creating systems and techniques that utilize biosignal sensing and actuation.

Skills

  • Required: Interaction design; visual design; HCI research methods; qualitative analysis
  • Recommended: Interest in physiological signals

Developing the Biosignal Visualization Acceptability Scale (BVAS)

Responsible person: Abdallah El Ali (aea@cwi.nl)
https://abdoelali.com/

Description

As visualizing human biosignals becomes more prevalent across wearable and ubiquitous systems, there is a need to understand the (social) acceptability of different types of biosignals: how and when they are shared, their fidelity and plausibility, their context of use, and how they are perceived. In this project, you will build on our prior work on biosignal visualization and help create a questionnaire to assess the acceptability of biosignal visualizations. This will involve a thorough literature review, designing biosignal visualizations, and creating a questionnaire that needs to be evaluated for at least construct and content validity. This project may be a collaboration with Aalto University (Finland) and OFFIS (Germany).
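Internal consistency is one quantitative check that typically accompanies the validity work named above; Cronbach's alpha over a pilot response matrix is the usual starting point. The sketch below is illustrative only, with made-up data; the real item pool would come from the literature review and design work.

```python
# Cronbach's alpha: internal consistency of a pilot questionnaire item pool.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(sum scores))
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical pilot: 6 respondents x 4 acceptability items on a 1-5 scale.
pilot = [[4, 5, 4, 4],
         [2, 2, 3, 2],
         [5, 4, 5, 5],
         [3, 3, 3, 4],
         [1, 2, 1, 2],
         [4, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(pilot):.2f}")  # values near 0.8+ are typically sought
```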

Skills

  • Required: HCI research methods; quantitative analysis; statistics; HCI theory
  • Recommended: interest in physiological signals and biosensing technology

Generative Dialogue Agents in Immersive Social Virtual Reality

Responsible person: Jiahuan Pei (j.pei@cwi.nl)
https://scholar.google.com/citations?user=cnhyEW0AAAAJ&hl=en

Description

Large language models (LLMs), such as ChatGPT, have shown significant advancements in dialogue systems and their applications. However, the rich contextual information in VR applications (e.g., VR2Gather) remains under-explored, for example the fundamental elements, archetypes, and components of immersive communication systems. Humans engage in conversations using various senses or modalities (e.g., sound, sight, touch, smell, and taste), but collecting a user's multimodal context is expensive and laborious. To this end, most recent research highlights new challenges and uses multimodal interaction simulators to create datasets and study users' multimodal context.

This research aims to address this gap by constructing a multimodal VR dialogue dataset specifically tailored to VR2Gather. First, we develop APIs to connect the VR application and the dialogue agent, powered by autonomous LLMs. Then, we use a state-of-the-art LLM (e.g., ChatGPT) to drive the workflow and generate large-scale synthetic data; a minimal sketch of this step follows below. Furthermore, we will conduct commonly used automatic evaluations of response generation quality and task completion [8]. Last but not least, a comparison and in-depth analysis of the VR and non-VR domains will shed light on the unique challenges and opportunities present in VR dialogue systems.
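One possible shape for the synthetic-data step is sketched below: prompt an LLM with a (hypothetical) VR scene context and store the generated dialogue as JSON. It uses the OpenAI Python client as an example backend; the model name, prompt wording, and scene descriptions are placeholders, not project specifics, and the real pipeline would hook into the VR2Gather APIs described above.

```python
# Sketch of synthetic VR-dialogue generation: scene context -> LLM prompt ->
# dialogue turns -> JSON dataset. All scene texts and names are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCENES = [  # hypothetical VR2Gather-style scene contexts
    "Two avatars inspect a volumetric sculpture in a virtual museum.",
    "Three remote colleagues gather around a shared 3D design model.",
]

def generate_dialogue(scene, n_turns=6, model="gpt-4o-mini"):
    prompt = (f"Scene: {scene}\n"
              f"Write a {n_turns}-turn dialogue between the users, grounded "
              "in what they can see and do in this VR scene. "
              "Return one line per turn, prefixed with the speaker name.")
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

dataset = [{"scene": s, "dialogue": generate_dialogue(s)} for s in SCENES]
with open("synthetic_vr_dialogues.json", "w") as f:
    json.dump(dataset, f, indent=2)
```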

Skills

  • Python programming skills and an understanding of deep learning libraries such as PyTorch/LangChain;
  • Basic understanding of natural language processing, especially large language models;
  • Basic programming skills in C# and Unity (optional).

References

  • Ni, J., Young, T., Pandelea, V., Xue, F., & Cambria, E. (2023). Recent advances in deep learning based dialogue systems: A systematic survey. Artificial intelligence review, 56(4), 3055-3155.
  • Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H. T., … & Le, Q. (2022). LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239.
  • Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., … & Wen, J. R. (2023). A survey of large language models. arXiv preprint arXiv:2303.18223.
  • Pérez, P., González-Sosa, E., Gutiérrez, J., & García, N. (2022). Emerging Immersive Communication Systems: Overview, Taxonomy, and Good Practices for QoE Assessment. arXiv preprint arXiv:2205.05953.
  • Moon, S., Kottur, S., Crook, P. A., De, A., Poddar, S., Levin, T., … & Geramifard, A. (2020, December). Situated and Interactive Multimodal Conversations. In Proceedings of the 28th International Conference on Computational Linguistics (pp. 1103-1121).
  • Kottur, S., Moon, S., Geramifard, A., & Damavandi, B. (2021, November). SIMMC 2.0: A Task-oriented Dialog Dataset for Immersive Multimodal Conversations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 4903-4912).
  • Sundar, A., & Heck, L. (2022, May). Multimodal Conversational AI: A Survey of Datasets and Approaches. In Proceedings of the 4th Workshop on NLP for Conversational AI (pp. 131-147).
  • Deriu, J., Rodrigo, A., Otegi, A., Echegoyen, G., Rosset, S., Agirre, E., & Cieliebak, M. (2021). Survey on evaluation methods for dialogue systems. Artificial Intelligence Review, 54, 755-810.

Understanding and Re-designing Virtual Reality Questionnaires

Responsible people: Géry Casiez (gery.casiez@univ-lille.fr), Abdallah El Ali (aea@cwi.nl)

Description

In a controlled experiment, researchers can measure both quantitative and qualitative variables to study a phenomenon. Quantitative variables, such as time, can be measured with instruments, while qualitative variables are typically measured using questionnaires. These questionnaires can include different types of questions, such as closed questions (e.g., yes/no), open-ended questions (e.g., “What do you think of xxx?”), and questions requiring an answer on an absolute (e.g., Likert-type) scale (e.g., “How successful were you in accomplishing what you were asked to do?” in the NASA Task Load Index, or “I felt out of my body” in a virtual embodiment questionnaire).

Recent work suggests that participants have difficulty answering questions in an absolute way and in fact answer in a relative way, basing each answer on their answers to the previous questions. This project aims at better understanding how participants answer qualitative questionnaires with absolute-scale answers. The goal is to define guidelines for the creation and administration of such questionnaires, especially in the context of measuring the sense of agency and embodiment in Virtual Reality. This project will be supervised by researchers from the INRIA Loki (https://loki.lille.inria.fr/) and CWI DIS (https://www.dis.cwi.nl/) groups.
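One crude, illustrative way to look for such relative answering is to check whether each participant's response sequence shows positive lag-1 autocorrelation, i.e., whether consecutive answers drift together even when question order is randomized. The sketch below is our own illustration under that assumption, not the project's planned analysis; the data are made up.

```python
# Crude probe of "relative answering": if participants anchor each answer on
# the previous one, their response sequences should show positive lag-1
# autocorrelation even with randomized question order. Illustrative only.
import numpy as np

def lag1_autocorr(responses):
    """responses: 1-D sequence of Likert answers in presentation order."""
    x = np.asarray(responses, dtype=float)
    x = x - x.mean()
    denom = (x ** 2).sum()
    return (x[:-1] * x[1:]).sum() / denom if denom else 0.0

# Hypothetical data: one response sequence per participant (7-point scale).
participants = [
    [4, 4, 5, 5, 5, 6, 6, 5],   # drifts smoothly -> looks "relative"
    [2, 6, 1, 7, 3, 5, 2, 6],   # jumps around -> looks "absolute"
]
for i, seq in enumerate(participants):
    print(f"participant {i}: lag-1 autocorrelation = {lag1_autocorr(seq):.2f}")
```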

Skills

  • Required: HCI research methods; programming (C# / Unity), quantitative analysis; statistics; HCI theory
  • Recommended: interest in Virtual Reality research and avatar embodiment

References

  • G. Richard, T. Pietrzak, F. Argelaguet, A. Lécuyer and G. Casiez, “Within or Between? Comparing Experimental Designs for Virtual Embodiment Studies,” 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Christchurch, New Zealand, 2022, pp. 186-195, doi: 10.1109/VR51125.2022.00037.