Date of Award
May 2021
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Human Centered Computing
Committee Member
Sabarish V. Babu
Committee Member
Andrew T. Duchowski
Committee Member
Larry Hodges
Committee Member
Bart P. Knijnenburg
Abstract
This manuscript presents the results of a series of studies intended to shed light on the effects of virtual human interactions on users' impressions. This phenomenon was empirically examined in two distinct scenarios, namely dyadic and crowd settings.

For the dyadic investigation scenarios (studies one, two, three, and four), I used an interactive medical training application named the Rapid Response Training System (RRTS). This simulation was originally used for training medical personnel to recognize and detect the signs and symptoms of a patient's medical deterioration. The simulation presented a hospital room with a virtual patient, and users were tasked with checking the patient's health status over time. Users interacted with the virtual patient by asking questions relevant to his health status using a pre-defined questionnaire, and they used digital tools such as a stethoscope or a nurse-on-a-stick to collect the patient's vital signs. For a further description of this system, please read Section 3.1.1.

In a first evaluation using the RRTS, I examined the extent to which near-realistic versus non-realistic rendering of the virtual patient affected users emotionally. Participants interacted with a virtual patient who exhibited incrementally negative affective behaviors expressed through his verbal and non-verbal behavior. The study presented three virtual human rendering styles, Sketch-Shaded, Cartoon, and Realistic, ranging from non-realistic to realistic appearance. Over the course of the experiment, we measured the users' emotional reactions using objective and subjective instruments.

In the next study, I utilized the RRTS scenario to measure how different levels of animation fidelity affected the users' visual attention during interaction with the virtual patient. I presented three between-subjects animation conditions: A) No Animation; B) Non-Conversational; C) Conversational. In the No-Animation condition, the virtual patient performed no animations and maintained a static pose. In the Non-Conversational condition, the virtual patient demonstrated life-like animations (breathing, coughing, posture changes, and facial expressions) but no conversational behaviors. Finally, the agent in the Conversational condition showed life-like behaviors as well as conversational behaviors such as lip-sync and joint gaze with the participant during simulated dialogue. During the experiment, we measured the users' visual attention via an eye tracker.

A third study analyzed the relationship between the users' visual attention and their emotional responses during interaction with a virtual patient in the RRTS. In this experiment, I collected the users' emotional responses via questionnaires and their visual attention via an eye tracker during the interaction with the virtual patient. I then measured the interplay between the users' visual attention and emotions during interaction with a virtual patient rendered in distinct styles along the non-photorealistic to realistic continuum. I implemented a cross-lagged panel model followed by a mediation analysis to establish how the rendering style of the virtual patient affected the users' visual attention and emotional reactions over time; a minimal sketch of this kind of analysis follows.
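To make the analysis concrete, below is a minimal sketch of a two-wave cross-lagged panel model followed by a mediation analysis in Python with statsmodels. The data file, the column names (attention_t1, emotion_t1, and so on), and the numeric coding of rendering style in "style" are hypothetical placeholders, not the variables or model specification used in the dissertation.

    # Minimal sketch: two-wave cross-lagged panel analysis followed by a
    # mediation analysis. All file and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from statsmodels.stats.mediation import Mediation

    df = pd.read_csv("gaze_emotion_waves.csv")  # hypothetical per-participant data

    # Cross-lagged paths: each wave-2 measure is regressed on both wave-1
    # measures, estimating autoregressive and cross-lagged effects together.
    attention_model = smf.ols("attention_t2 ~ attention_t1 + emotion_t1", data=df).fit()
    emotion_model = smf.ols("emotion_t2 ~ emotion_t1 + attention_t1", data=df).fit()
    print(attention_model.summary())  # emotion_t1 term: lagged effect of emotion on attention
    print(emotion_model.summary())    # attention_t1 term: lagged effect of attention on emotion

    # Mediation: does visual attention mediate the effect of rendering
    # style (hypothetical numeric coding in 'style') on emotional response?
    outcome = sm.OLS.from_formula("emotion_t2 ~ attention_t1 + style", data=df)
    mediator = sm.OLS.from_formula("attention_t1 ~ style", data=df)
    result = Mediation(outcome, mediator, exposure="style", mediator="attention_t1").fit()
    print(result.summary())  # direct, indirect (mediated), and total effects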
In a fourth study, I examined the extent to which users' visual attention varied across five simulation rendering styles along the non-photorealistic to realistic continuum. This mixed-design study used rendering style as the between-subjects variable; the within-subjects variable was the affective behavior of the virtual patient from one time step to the next as the virtual patient's health declined. The rendering style conditions included an all Pencil-Shaded simulation, a Pencil-Shaded virtual patient only, an all Cartoon-Shaded simulation, a Cartoon-Shaded virtual patient only, and a virtual patient with near-high-fidelity, human-like rendering. To measure the users' visual attention during the interaction with the virtual human in the RRTS, the users' gaze was recorded via a non-invasive eye-tracking device.

For the studies conducted using a crowd of virtual humans (studies five and six), I utilized a simulation that included a multitude of conversational virtual humans in an immersive virtual reality open-air market environment. For a further description of this system, please read Section 3.1.3.

The fifth study of this dissertation measured the extent to which a crowd of virtual humans was able to elicit emotional contagion in users in an immersive virtual reality scenario. In a between-subjects design, four emotional crowd groups were presented. The virtual agents showed affective verbal and non-verbal behaviors representing a positive, negative, neutral, or mixed emotional disposition; the mixed virtual crowd condition consisted of agents with a random distribution of positive, negative, and neutral emotions. The users' emotional contagion and overall behavior were analyzed using metrics collected during the experiment.

The sixth study examined how users' emotional contagion was affected by the familiarity of the language used for conversing with the virtual humans. This study was also conducted in the virtual crowd marketplace simulation described in Section 3.1.3. The experiment consisted of a 3 (language familiarity) x 4 (crowd emotion) between-subjects design. Participants in the USA interacted with the virtual agents in English (a familiar language), one group of users in Taiwan interacted with the virtual agents in English (an unfamiliar language), and another group in Taiwan interacted with the virtual agents in Mandarin (a familiar language). Each of these groups, in turn, included four emotional crowd conditions, namely positive, negative, neutral, and mixed virtual crowds (see Figure 9.2). I measured the users' emotional reactions using subjective surveys at the end of the virtual reality experience; a sketch of how such a factorial design might be analyzed follows below.

The results of these studies are highlighted in the chapters that discuss them in detail, followed by a discussion of the major findings overall. Finally, I highlight the major impact of my findings and directions for future work in the last section.
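As an illustration for the sixth study, the following is a minimal sketch of how a 3 x 4 between-subjects design could be analyzed with a two-way ANOVA in Python with statsmodels. The data file, the column names, and the "contagion" score are hypothetical placeholders for the survey measures described above, not the dissertation's actual analysis.

    # Minimal sketch: two-way ANOVA for the 3 (language familiarity) x
    # 4 (crowd emotion) between-subjects design. File and column names
    # are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    df = pd.read_csv("crowd_survey.csv")  # one row per participant

    # Fit a linear model with both factors and their interaction, then
    # compute Type II sums of squares for the main effects and the
    # language-by-emotion interaction.
    model = smf.ols("contagion ~ C(language) * C(emotion)", data=df).fit()
    print(anova_lm(model, typ=2))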
Recommended Citation
Volonte, Matias, "Effects of Virtual Human in Dyadic and Crowd Settings on Emotion, Visual Attention and Task Performance in Interactive Simulations" (2021). All Dissertations. 2815.
https://open.clemson.edu/all_dissertations/2815