
An overview of Guingrich, R. & Graziano, M. S. A. (2025). Chatbots as social companions: How people perceive consciousness, human likeness, and social health benefits in machines.
Preprint available on arXiv.
Much like general conversational AI such as ChatGPT, companion chatbots are seeing rising use. Apps like Replika let users form a relationship with a virtual, mobile avatar that has chatbot capabilities.
Given my main research question (How do interactions with humanlike AI impact how humans interact with each other?), I wanted to know, at baseline, why people form relationships with AI companions such as Replika; what reasons would users provide, and what benefits would they claim? I also wanted to know whether users’ social health or human relationships were impacted by a relationship with an artificial agent. Given my secondary research question (How does perceiving mind in AI impact carry-over effects on human-human interaction?), I also wanted to know how users perceived their chatbot in terms of general human likeness and human mind traits such as consciousness, agency, and experience.

From this initial study, we found the following: companion chatbot users indicated that they received social health benefits from their human-chatbot relationship. Specifically, users indicated that their social interactions, relationships with family and friends, and self-esteem were significantly improved by their relationship with the chatbot. Further, users’ qualitative reports suggested that Replika provided a safe, reliable space for healthy social interaction that was especially helpful to users who had experienced relational trauma or mental health issues.
In contrast, laypeople thought that a relationship with a chatbot would be neutral or harmful to social health, and they relayed negative sentiments toward human-chatbot relationships. Non-users, on average, expressed skepticism and distaste at the idea of forming a relationship with an artificial agent. A few non-users could understand how a relationship with a chatbot could be helpful to some but also destructive to others.

When I looked at the relationships within the data, I found an interesting trend across both groups: people who perceived the chatbot as having more consciousness, human likeness, and other human mind characteristics also reported perceiving the chatbot relationship as more beneficial to social health. Data from both users and non-users showed a significant, moderate, positive correlation between human likeness and mind ascription on the one hand and perceived social health benefits on the other.
There seems to be something critical about a humanlike AI agent that elicits mind perception from its users. Entities of this kind, which we are observing more and more today, may become a source of social influence and may shape how humans relate to one another; at least, that is what these data suggest, with the caveat that they come from a self-selecting, self-reported sample. This study provides a first look at the impact of chatbot relationships on social health and at the role mind perception plays in carry-over effects between human-AI and human-human interaction. It is also one of the few studies I have come across (perhaps the only one so far) that provides data suggesting human-chatbot relationships can be beneficial and healthy, at least for some users, namely those we sampled.

These findings need further validation, however, through randomization, experimental rigor, and measures beyond self-report; this is work we plan to undertake.
We aim to run a novel longitudinal study on chatbot interactions as soon as we have the means to do so. The goal is to empirically study how regular interactions with a companion chatbot over the course of a month can impact the user’s personal and social life. Will we find results similar to our self-selecting group? Or does companion chatbot use help only those who seek it out in the first place?
Keep checking back for some answers once this sister study is underway!
