
The Risk of Building Emotional Ties with Responsive AI

By Katie Todd
Posted August 15, 2024
[Image: A man in a suit shaking hands with a cybernetic arm.]

In an August report outlining some of the risks identified in the newly released GPT-4o, OpenAI shared research on the possibility of users forming an emotional reliance on the generative AI platform. Through a psychological phenomenon called anthropomorphization, users could attribute human-like qualities to the chatbot and begin to liken it to a real person, which raises red flags about the impact not just on a user's ability to think critically about the information they receive from GPT-4o, but also on how they engage with other people in their lives.

As someone in a human-centered profession, I am not exactly thrilled by the thought of people depending on algorithmic means of emotional fulfillment. Relationships are at the core of human contentment, and AI cannot provide a relationship that is authentically reciprocal; it can only appear to, which is the problem. AI itself is not the issue. The issue is what is missing in society that might cause people to turn to artificial intelligence to have their needs met: increased siloing of individuals following the pandemic, phone addiction, social media's role in polarizing thought and opinion, and a lack of third spaces where people can gather and make new connections. GenAI is a tool, not a replacement for real social connection.

Curious to learn what my colleagues at the Seidenberg School of Computer Science and Information Systems at Pace University thought, I asked several faculty members how anthropomorphization might affect users from a psychological perspective. Below are responses not just from AI experts but also from professors specializing in Human-Centered Design.

Dr. Juan Shan, Associate Professor of Computer Science

The voice model is amazing from a technical perspective; it is another surprise and milestone from OpenAI. At the same time, I hear people’s worries about the possible downsides, such as emotional dependence, ethical issues, security issues, and so on, and I share those worries. If we look back at tech history, new technologies always have positive and negative sides, and new advancements always provoke controversy. In my opinion, what we can do is keep ourselves knowledgeable about these advancements, use them, test them, and help shape them. We also need to educate our students in both technical skills and strong ethical standards. At the same time, government should take more responsibility to investigate and guide the development of AI, detect possible misuse, estimate possible consequences, and regulate how AI products are released. My vision for the future is that people can enjoy the benefits and convenience AI brings to daily life, while potential harms are kept under control and made known to the public.

Dr. Jonathan Williams, Clinical Assistant Professor, Human-Centered Design

The companionship and emotional lives of objects and tools have long been established, but this is a new era in which the tool mirrors that relationship back to the user. Humans have the capacity to emote and attach to technology, but the joy, hope, or love they receive back through AI will be algorithmically defined. A human may form emotional ties to the AI, but authentic reciprocity is never returned.

Human-to-human emotion spans a full spectrum of behaviors, feelings, and thoughts. Generative AI is heavily moderated and censored: it can't get angry or swear, mourn or grieve, or draw on personal experience. Of the many emotions we may offer to generative AI, only a select few can be returned to us.

Dr. Zhan Zhang, Associate Professor, Director of Human-Centered Design

People forming emotional ties with AI tools is not a new phenomenon; it has been observed in prior research involving human interactions with voice assistants like Alexa and social robots. Emotional connections with AI are a nuanced and complex subject. On one hand, AI can offer significant emotional support, such as companionship or empathy, by engaging users in meaningful conversations, particularly when it is designed to mimic human-like voices. However, these emotional ties raise critical questions and concerns. For example, forming emotional connections with AI often involves sharing personal thoughts and feelings, which brings up issues of data privacy and the potential misuse of this sensitive information. Moreover, since AI does not possess genuine emotions, any emotional connection felt by the user is inherently one-sided. This creates ethical concerns about the possibility of manipulating users' emotions. More research is needed to investigate this fascinating topic, which has significant societal implications. In particular, human-computer interaction (HCI) researchers can play a crucial role in examining emotional ties with GenAI from both sociotechnical and ethical perspectives.
