An overview of my theory paper: Guingrich, R. E. & Graziano, M. S. A. (2024). Ascribing consciousness to artificial intelligence: Human-AI interaction and its carry-over effects on human-human interaction.
Publication link: Frontiers in Psychology

Can artificial intelligence (AI) be conscious?
Is AI conscious?
Are AI agents self-aware, and if so, is that a bad thing?
If AI becomes conscious, will it take over?
These are some of the common questions that people ask about AI and consciousness. When someone asks your opinion about “AI sentience” or whether you “think AI will take over the world,” they are likely referring to the philosophical debate over whether AI is conscious or could become conscious in the future.
The central concern of this perspective is AI’s inherent characteristics: either AI has consciousness or it does not. In most cases, the possibility of sentient AI is viewed as unnerving and worrisome.
In my research, I look at the topic of AI consciousness from a different angle. I am not concerned with whether AI is or can be conscious. Why? Because people already attribute consciousness to AI, even though the debate has not been settled. Regardless of whether AI is inherently conscious, people can view AI as conscious, as having a humanlike mind, or at least as having characteristics of a humanlike mind.

People naturally engage in anthropomorphism, the process by which people ascribe human traits to non-human beings. People can view objects, NPCs, pets, plants, and more as having agency (the ability to act of one’s own accord) and experience (the ability to feel emotions and have subjective experience). They also anthropomorphize artificial intelligence agents: someone who watches a social robot trip and fall feels bad for it, because that person has ascribed to the robot the ability to feel pain and embarrassment. The person has ascribed human traits to a non-human being: the machine has feelings.
People also ascribe higher-order human traits, like consciousness, to non-human beings, and especially to AI. Social AI (like chatbots, Siri, and social robots) is a special type of non-human being: it can talk back to you as if you were having a conversation. Its linguistic abilities let us converse and interact socially with something that is not human.

When people view AI as having a humanlike mind, which is only natural and fairly automatic given AI’s humanlike capabilities, the way they interact with the AI is likely to impact how they interact with other people. When you think a game character, for example, has the same internal experiences as you (human consciousness), social consequences arise. One study¹ had participants play a game with either a human-backed “avatar” or a computer-backed “agent.” In reality, both groups played with a human-backed avatar, but participants were unaware of this fact. When participants thought their game partner was not a conscious being (the agent condition), how that partner treated them did not change how they subsequently acted toward a conscious being (a human). However, when participants believed their partner was a human-backed avatar, how the avatar treated them during game play did affect their subsequent interaction with a human. When an avatar helped the participant and acted prosocially, the participant was more likely to act prosocially toward a human thereafter. When an agent offered the same help, the participant’s prosocial behavior toward a human did not increase.
This study is just one example of how perceiving a humanlike mind in an artificial agent can produce carry-over effects on human-human interaction. Granted, this study experimentally controlled whether participants perceived a humanlike mind in their interaction partner. With the influx of humanlike social AI into our everyday environments, however, people perceive a humanlike mind in their AI interaction partners automatically.

When AI is humanlike enough to elicit automatic ascriptions of humanlike consciousness, carry-over effects on subsequent human-human interaction are more likely to occur. Because both the AI and the person you encounter next are perceived to have similar internal essences or experiences, how you treat one being is likely to inform how you treat the other.
These are the basics of my theory of human-AI interaction. First, there are carry-over effects between human-AI interaction and human-human interaction that we must consider. Second, viewing the AI agent you interact with as conscious makes these carry-over effects more likely or more pronounced. Therefore, ascribing consciousness to AI, whether we do so consciously or subconsciously, has profound social implications. We must be cautious about how we interact with artificial intelligence and cognizant of the potential social consequences of these interactions. This is especially important as AI grows more humanlike, because these advances make ascribing consciousness to AI agents increasingly automatic.

1. Velez, J. A., Loof, T., Smith, C. A., Jordan, J. M., Villarreal, J. A., & Ewoldsen, D. R. (2019). Switching schemas: Do effects of mindless interactions with agents carry over to humans and vice versa? Journal of Computer-Mediated Communication, 24(6), 335–352. https://doi.org/10.1093/jcmc/zmz016
