Mind the Visual Discomfort: Assessing Event-Related Potentials as Indicators for Visual Strain in Head-Mounted Displays
When using Head-Mounted Displays (HMDs), users may not always notice or report visual discomfort caused by blurred vision from unadjusted lenses, motion sickness, or increased eye strain. Current measures of visual discomfort rely on users’ self-reports, which are susceptible to subjective differences and lack real-time insight. In this work, we investigate whether Electroencephalography (EEG) can objectively measure visual discomfort by sensing Event-Related Potentials (ERPs). In a user study (N=20), we compare four different levels of Gaussian blur while measuring ERPs at occipito-parietal EEG electrodes. The findings reveal that specific ERP components (i.e., P1, N2, and P3) discriminated discomfort-related visual stimuli and indexed increased load on visual processing and fatigue. We conclude that time-locked brain activity can be used to evaluate visual discomfort and propose EEG-based automatic discomfort detection and prevention tools.
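As an illustration of the time-locked analysis described in the abstract, the following is a minimal sketch of ERP extraction by epoch averaging; the function name, epoch window, and baseline interval are assumptions for illustration, not the paper's actual pipeline:

```python
import numpy as np

def erp_average(eeg, onsets, sfreq, tmin=-0.1, tmax=0.5):
    """Average EEG epochs time-locked to stimulus onsets (simplified ERP).

    eeg    : 1-D array of samples from one occipito-parietal electrode
    onsets : stimulus onset times in seconds
    sfreq  : sampling frequency in Hz
    tmin   : epoch start relative to stimulus onset (negative = pre-stimulus)
    tmax   : epoch end relative to stimulus onset
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = []
    for t in onsets:
        i = int(t * sfreq)
        if i + start >= 0 and i + stop <= len(eeg):
            epoch = eeg[i + start : i + stop]
            # Baseline-correct using the pre-stimulus interval
            epoch = epoch - epoch[:-start].mean()
            epochs.append(epoch)
    # Averaging cancels activity not time-locked to the stimulus;
    # components such as P1, N2, and P3 appear as peaks in the result.
    return np.mean(epochs, axis=0)
```

Averaging over many trials is what makes small, stimulus-locked deflections like the P1 visible against ongoing background EEG.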
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
Your Eyes on Speed: Using Pupil Dilation to Adaptively Select Speed-Reading Parameters in Virtual Reality
Rapid Serial Visual Presentation (RSVP) improves reading speed and thus helps optimize the user’s information processing capabilities on Virtual Reality (VR) devices. Yet, the user’s RSVP reading performance changes over time while the reading speed remains static. In this paper, we evaluate pupil dilation as a physiological metric to assess the mental workload of readers in real-time. We assess mental workload under different background lighting and RSVP presentation speeds to estimate the optimal background color for discriminating pupil diameter across varying RSVP presentation speeds. We discovered that a gray background provides the best contrast for reading at various presentation speeds. We then conducted a second study to evaluate the classification accuracy of mental workload for different presentation speeds. We find that pupil dilation relates to mental workload when reading with RSVP. We discuss how pupil dilation can be used to adapt the RSVP speed in future VR applications to optimize information intake.
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
“AI enhances our performance, I have no doubt this one will do the same”: The Placebo Effect Is Robust to Negative Descriptions of AI
Heightened AI expectations facilitate performance in human-AI interactions through placebo effects. While lowering expectations to control for placebo effects is advisable, overly negative expectations could induce nocebo effects. In a letter discrimination task, we informed participants that an AI would either increase or decrease their performance by adapting the interface, when in reality, no AI was present in any condition. A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information. A replication study verified that negative AI descriptions do not alter expectations, suggesting that performance expectations with AI are biased and robust to negative verbal descriptions. We discuss the impact of user expectations on AI interactions and evaluation.
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
HappyRouting: Learning Emotion-Aware Route Trajectories for Scalable In-The-Wild Navigation
Routes represent an integral part of triggering emotions in drivers. Navigation systems allow users to choose a navigation strategy, such as the fastest or shortest route. However, they do not consider the driver’s emotional well-being. We present HappyRouting, a novel navigation-based empathic car interface guiding drivers through real-world traffic while evoking positive emotions. We propose design considerations, derive a technical architecture, and implement a routing optimization framework. Our contribution is a machine learning-based generated emotion map layer, predicting emotions along routes based on static and dynamic contextual data. We evaluated HappyRouting in a real-world driving study (N=13), finding that happy routes increase subjectively perceived valence by 11% (p=.007). Although happy routes take 1.25 times longer on average, participants perceived the happy route as shorter, presenting an emotion-enhanced alternative to today’s fastest routing mechanisms. We discuss how emotion-based routing can be integrated into navigation apps, promoting emotional well-being in everyday mobility.
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
SensCon: Embedding Physiological Sensing into Virtual Reality Controllers
Virtual reality experiences increasingly use physiological data for virtual environment adaptations to evaluate user experience and immersion. Previous research required complex medical-grade equipment to collect physiological data, limiting real-world applicability. To overcome this, we present SensCon for skin conductance and heart rate data acquisition. To identify the optimal sensor location in the controller, we conducted a first study investigating users’ controller grasp behavior. In a second study, we evaluated the performance of SensCon against medical-grade devices in six scenarios regarding user experience and signal quality. Users subjectively preferred SensCon in terms of usability and user experience. Moreover, the signal quality evaluation showed satisfactory accuracy across static, dynamic, and cognitive scenarios. Therefore, SensCon reduces the complexity of capturing physiological data and adapting the environment in real-time. By open-sourcing SensCon, we enable researchers and practitioners to adapt their virtual reality environment effortlessly. Finally, we discuss possible use cases for virtual reality-embedded physiological sensing.
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
The placebo effect of human augmentation: Anticipating cognitive augmentation increases risk-taking behavior
Human Augmentation Technologies improve human capabilities using technology. In this study, we investigate the placebo effect of augmentation technologies. Thirty naïve participants were told they would be augmented either with a cognitive augmentation system or with no augmentation while conducting a Columbia Card Task. In this risk-taking measure, participants flip win and loss cards. The sham augmentation system consisted of a brain–computer interface that allegedly played non-audible sounds to increase cognitive functions. In reality, no sounds were played in any condition. Using Bayesian statistical modeling, we show a placebo effect in human augmentation: a sustained belief of improvement remains after using the sham system, and risk-taking increases conditional on heightened expectancy. Furthermore, we identify differences in event-related potentials in the electroencephalogram that occur during the sham condition when flipping loss cards. Finally, we integrate our findings into theories of human augmentation and discuss implications for the future assessment of augmentation technologies.
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
The Placebo Effect of Artificial Intelligence in Human–Computer Interaction
In medicine, patients can obtain real benefits from a sham treatment. These benefits are known as the placebo effect. We report two experiments (Experiment I: N = 369; Experiment II: N = 100) demonstrating a placebo effect in adaptive interfaces. Participants were asked to solve word puzzles while being supported by no system or an adaptive AI interface. All participants experienced the same word puzzle difficulty and had no support from an AI throughout the experiments. Our results showed that the belief of receiving adaptive AI support increases expectations regarding the participants’ own task performance, which were sustained after the interaction. These expectations were positively correlated with performance, as indicated by the number of solved word puzzles. We integrate our findings into technological acceptance theories and discuss implications for the future assessment of AI-based user interfaces and novel technologies. We argue that system descriptions can elicit placebo effects through user expectations, biasing the results of user-centered studies.
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
Technical Design Space Analysis for Unobtrusive Driver Emotion Assessment Using Multi-Domain Context
Driver emotions play a vital role in driving safety and performance. Consequently, regulating driver emotions through empathic interfaces has been investigated thoroughly. However, the prerequisite – driver emotion sensing – is a challenging endeavor: body-worn physiological sensors are intrusive, while facial and speech recognition only capture overt emotions. In a user study (N=27), we investigate how emotions can be unobtrusively predicted by analyzing a rich set of contextual features captured by a smartphone, including road and traffic conditions, visual scene analysis, audio, weather information, and car speed. We derive a technical design space to inform practitioners and researchers about the most indicative sensing modalities, the corresponding impact on users’ privacy, and the computational cost associated with processing this data. Our analysis shows that contextual emotion recognition is significantly more robust than facial recognition, leading to an overall improvement of 7% using a leave-one-participant-out cross-validation.
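The leave-one-participant-out cross-validation mentioned above can be sketched with scikit-learn's LeaveOneGroupOut splitter, which holds out all samples of one participant per fold so the classifier is always evaluated on an unseen person. The features, labels, and classifier below are synthetic stand-ins, not the study's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)

# Hypothetical context features per sample, e.g. car speed, weather code,
# traffic density, audio level (the real feature set is much richer).
X = rng.normal(size=(270, 4))
y = rng.integers(0, 3, size=270)        # e.g. negative / neutral / positive emotion
groups = np.repeat(np.arange(27), 10)   # participant id per sample (N=27)

scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    # Each fold trains on 26 participants and tests on the held-out one,
    # so person-specific patterns cannot leak into the evaluation.
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[train], y[train])
    scores.append(clf.score(X[test], y[test]))

print(f"LOPO accuracy: {np.mean(scores):.2f}")
```

Because the features here are pure noise, the sketch's accuracy hovers around chance; with real contextual features, per-participant scores would reflect person-independent generalization.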
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
VEmotion: Using Driving Context for Indirect Emotion Prediction in Real-Time
Detecting emotions while driving remains a challenge in Human-Computer Interaction. Current methods to estimate the driver’s experienced emotions use physiological sensing (e.g., skin conductance, electroencephalography), speech, or facial expressions. However, these approaches require drivers to wear devices, perform explicit voice interactions, or exhibit robust facial expressiveness. We present VEmotion (Virtual Emotion Sensor), a novel method to predict driver emotions in an unobtrusive way using contextual smartphone data. VEmotion analyzes information including traffic dynamics, environmental factors, in-vehicle context, and road characteristics to implicitly classify driver emotions. We demonstrate the applicability in a real-world driving study (N = 12) to evaluate the emotion prediction performance. Our results show that VEmotion outperforms facial expressions by 29% in a person-dependent classification and by 8.5% in a person-independent classification. We discuss how VEmotion enables empathic car interfaces to sense the driver’s emotions and provide in-situ interface adaptations on-the-go.
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
One Does not Simply RSVP: Mental Workload to Select Speed Reading Parameters using Electroencephalography
Rapid Serial Visual Presentation (RSVP) has gained popularity as a method for presenting text on wearable devices with limited screen space. Nonetheless, it remains unclear how to calibrate RSVP display parameters, such as spatial alignments or presentation rates, to suit the reader’s information processing ability at high presentation speeds. Existing methods rely on comprehension and subjective workload scores, which are influenced by the user’s knowledge base and subjective perception. Here, we use electroencephalography (EEG) to directly determine how individual information processing varies with changes in RSVP display parameters. Eighteen participants read text excerpts with RSVP in a repeated-measures design that manipulated the Text Alignment and Presentation Speed of the presented text. We evaluated how predictive EEG metrics were of gains in reading speed, subjective workload, and text comprehension. We found significant correlations between EEG and increasing Presentation Speeds and propose how EEG can be used for dynamic selection of RSVP parameters.
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
Identifying Cognitive Assistance with Mobile Electroencephalography: A Case Study with In-Situ Projections for Manual Assembly
Contrary to popular belief, manual assembly in production is a mentally demanding task. With current trends of rapid prototyping and smaller production lot sizes, assembly instructions will change frequently and have to be memorized by workers. Assistive systems can compensate for this expected increase in mental workload, specifically working memory load, by providing “just-in-time” assembly instructions through in-situ projections. The implementation of such systems and their benefits in reducing mental workload have previously been justified with self-perceived ratings or think-aloud studies. However, there is no evidence from objective measures that mental workload is truly reduced by in-situ assistance. In our work, we showcase electroencephalography as a complementary evaluation tool to assess the cognitive demand placed by two different assistive systems in an assembly task, namely paper instructions and in-situ projections. We identified the individual electroencephalographic bandwidth that varied with changes in working memory load. We show that changes in the corresponding bandwidth are found between paper instructions and in-situ projections, indicating that in-situ projections reduce working memory load compared to paper instructions. These findings converge with NASA-TLX questionnaire responses for subjective workload. Methodically, the current work contributes by demonstrating how design claims of cognitive workload alleviation can be validated. Moreover, it directly validates the use of engineered assistive systems for delivering context-aware information. Finally, we analyze the characteristics of electroencephalography as a real-time assessment of cognitive workload to provide insights regarding the mental demand placed by assistive systems.
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
Your Eyes Tell: Leveraging Smooth Pursuit for Assessing Cognitive Workload
A common objective for context-aware computing systems is to predict how user interfaces impact user performance regarding their cognitive capabilities. Existing approaches such as questionnaires or pupil dilation measurements either only allow for subjective assessments or are susceptible to environmental influences and user physiology. We address these challenges by exploiting the fact that cognitive workload influences smooth pursuit eye movements. We compared three trajectories and two speeds under different levels of cognitive workload within a user study (N=20). We found higher deviations of gaze points during smooth pursuit eye movements for specific trajectory types at higher cognitive workload levels. Using an SVM classifier, we predict cognitive workload through smooth pursuit with an accuracy of 99.5% for distinguishing between low and high workload as well as an accuracy of 88.1% for estimating workload across three levels of difficulty. We discuss implications and present use cases of how cognition-aware systems benefit from inferring cognitive workload in real-time from smooth pursuit eye movements.
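The SVM-based workload prediction described above can be sketched as follows. The single gaze-deviation feature and the synthetic data are illustrative assumptions, not the study's feature set, which derives richer statistics from the recorded gaze trajectories:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Hypothetical feature: mean deviation of gaze from the smooth pursuit
# target, which tends to be larger under high cognitive workload.
# Synthetic stand-in values for the two workload conditions:
low  = rng.normal(1.0, 0.3, size=(100, 1))   # low-workload trials
high = rng.normal(2.0, 0.3, size=(100, 1))   # high-workload trials
X = np.vstack([low, high])
y = np.array([0] * 100 + [1] * 100)          # 0 = low, 1 = high workload

# Standardizing before the RBF-kernel SVM keeps the kernel's length
# scale meaningful regardless of the feature's raw units.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
acc = clf.score(X, y)
```

With well-separated deviation distributions like these, the classifier separates the two conditions almost perfectly; a three-level version would simply extend `y` to three classes.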
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.
The Brain Matters: A 3D Real-Time Visualization to Examine Brain Source Activation leveraging Neurofeedback
As Brain-Computer Interfaces become available to the consumer market, they provide more opportunities for analyzing brain activity in response to different external stimuli. Current output modalities often generate a lot of data, such as an electroencephalogram, which only displays electrode measurements. We introduce a three-dimensional real-time brain data visualization based on the measured values received by a brain-computer interface. Instead of visualizing the voltages collected by the electrodes, we calculate a current density distribution to estimate the origin of the electrical sources responsible for the values perceived at the electrodes. Understanding where the centers of activation in the brain are allows us to better understand the relationship between external stimuli and brain activity. This could be relevant in the context of information presentation for doctors to analyze pathological phenomena. A pilot study was conducted using Virtual Reality as input stimulus. Results indicate visible changes in real-time regarding brain activation.
Download the full data set here. Please cite the appropriate paper when using the dataset in a scientific publication.