Date of Award

12-2025

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Human Centered Computing

Committee Chair/Advisor

Sabarish V. Babu

Committee Member

Andrew Robb

Committee Member

Dawn Sarno

Committee Member

Christopher Flathmann

Committee Member

Edward B. Duffy

Abstract

Large Language Models (LLMs) have advanced conversational agents, enabling natural, human-like interactions in domains such as education, programming, and workplace collaboration. Yet user distrust persists over concerns about privacy, accuracy, and bias. As developers work to mitigate these issues and human-AI collaboration expands, reinforcing trust in LLM-driven systems is essential. To address this problem, this dissertation explores the role of anthropomorphic form in LLM-driven conversational agents and its impact on user perception.

According to the familiarity thesis, humans attribute human-like characteristics to nonhuman entities — a process known as anthropomorphism — to better comprehend unfamiliar phenomena, based on the assumption that they understand themselves best. Designers apply this by embedding human-like attributes and behaviors—referred to as anthropomorphic form—to enhance familiarity and promote trust. Research shows that such form can improve user trust, presence, usability, and experience. Yet, limited research has explored this effect in human-AI interactions (HAIIs) involving conversational agents, and even fewer studies have focused on LLM-driven agents. Consequently, this dissertation investigates two anthropomorphic forms — Behavioral and Embodied — across three studies, evaluating their impact on user perceptions and delegation behaviors in collaborative interactions with LLM-driven conversational agents in varied contexts.

Study 1 examines both anthropomorphic forms in a controlled lab experiment involving dyadic interaction with LLM-driven conversational agents, exploring their influence on trust, perceived anthropomorphism, presence, usability, and overall experience. The Embodied Anthropomorphic Form (EA) variable includes three interface designs with increasing embodiment: a text-based chatbot, a chatbot with text-to-speech (TTS), and an embodied conversational agent (ECA). ECAs are dynamic virtual human interfaces that simulate face-to-face interactions through a combination of verbal and non-verbal communication behaviors. The Behavioral Anthropomorphic Form (BA) variable manipulates the presence or absence of Theory of Mind (ToM) principles in the LLM’s responses, facilitated through specific prompting techniques. ToM is the ability to attribute mental states—such as emotions, intentions, and beliefs—to oneself and others, a fundamentally human trait. Findings show both forms positively affect trust and user experience, but the highest-level combination (the ECA prompted with ToM behaviors) and the lowest-level combination (the chatbot without ToM behaviors) lowered trust, suggesting a complex relationship between embodiment and ToM that requires further investigation.
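
To make the Behavioral Anthropomorphic Form manipulation concrete, the following is a minimal sketch of how a presence/absence ToM prompt manipulation could be wired around an LLM API, assuming the OpenAI Python client; the prompt wording, the gpt-4o model choice, and the ask_agent helper are illustrative assumptions, not the dissertation's actual prompting technique.

# Minimal sketch: toggle ToM-oriented instructions in the system prompt,
# mirroring the presence/absence manipulation of Behavioral Anthropomorphic Form.
# Prompt wording and model choice are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TOM_PROMPT = (
    "You are a collaborative assistant. Before answering, consider the user's "
    "likely beliefs, intentions, and emotional state, and acknowledge them "
    "briefly in your reply."
)
BASELINE_PROMPT = "You are a collaborative assistant. Answer the user's question."

def ask_agent(user_message: str, with_tom: bool) -> str:
    """Send one conversational turn, with or without ToM-style instructions."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": TOM_PROMPT if with_tom else BASELINE_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# Example: compare the two conditions on the same user turn.
# print(ask_agent("I'm stuck on this bug and running out of time.", with_tom=True))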

Study 2 builds on these findings through a survey-based experiment examining how Animation Behavior Interaction Fidelity and interaction context jointly influence user perceptions of ToM-prompted agents in terms of user experience, presence, trust, uncanny valley effects, and delegation. To examine the combined influence of these features, we present ECAs at four levels of increasing Animation Behavior Interaction Fidelity: low, mid, upper-mid, and high, with higher levels progressively approximating human behavior and appearance. A control condition featuring only chatbot plus text-to-speech (TTS) functionality served as the baseline. Additionally, this study examines interaction context by evaluating high and low risk levels. Results demonstrate a positive relationship between the inclusion of micro-expressions in ECA design and enhanced user trust. Furthermore, the findings highlight the potential of combining micro- and macro-expressions with pupil dilation to mitigate uncanny valley effects across risk-level conditions in human-AI collaboration (HAIC). Finally, the perceived risk level of the task significantly influenced participants’ perceptions of the agent and their delegation behaviors, with greater hesitation observed in high-risk scenarios.

Study 3 further investigates these effects in a controlled lab setting by expanding interaction fidelity to two dimensions—Animation Behavior Interaction Fidelity (low vs. high) and Visual Interaction Fidelity (toon-shaded vs. realistic)—tested across high and low risk levels. This produced four variants of the ECA (named Sam) with matched or mismatched interaction fidelity: high-level animation behaviors with realistic rendering style, high-level animation behaviors with toon rendering style, low-level animation behaviors with realistic rendering style, and low-level animation behaviors with toon rendering style. Results revealed that participants reported greater trust in Sam’s capabilities, greater confidence in their own abilities, heightened awareness of Sam’s affective state, and a stronger preference for a collaborative relationship when Sam displayed high-level Animation Behavior Interaction Fidelity via micro- and macro-expressions with pupil dilation. Additionally, participants reported higher levels of trust and a greater propensity to view Sam as a collaborative partner when Sam featured high-level Visual Interaction Fidelity via realistic shading properties. However, findings also indicate a complex dynamic between Animation Behavior Interaction Fidelity and Visual Interaction Fidelity, suggesting that Sam’s animation behaviors had a stronger influence on user trust than visual appearance alone.

This dissertation addresses a critical gap in the AI literature concerning LLM-driven conversational agents and anthropomorphic form. First, this research is among the first to implement Behavioral Anthropomorphic Form and Embodied Anthropomorphic Form in an LLM-driven agent to mitigate mistrust and enhance user perception. Second, this work is the first to utilize a novel prompting technique to instruct an LLM-driven conversational agent with ToM principles. Third, these findings demonstrate how varied agent fidelity levels in ToM-enabled ECAs enhance user perceptions across different risk levels. As such, the outcomes of this dissertation hold significance not only for the research community but also for developers and end users of LLM-driven conversational agent systems.

Author ORCID Identifier

0000-0002-3534-7966
