Artificial intelligence has advanced, but will psychology as a discipline follow suit?

A quarter of a century ago, Clifford Nass and colleagues developed one of the first psychological theories about interactions with artificial, computerized agents [1,2]. Nass, Steuer, and Tauber (1994) coined the Computers Are Social Actors (CASA) paradigm after finding that people treat computers as social actors [1]. Because users viewed the computer itself as a social agent, they mindlessly applied human social scripts to human-computer interaction [2]. As a result, these interactions reproduced many of the common outcomes of human-human interaction, including trust and gender stereotyping.

Since then, computers and their artificial intelligence counterparts have evolved. In response, fields of research devoted to human-computer and human-AI interaction have emerged within the research disciplines. Computer science, most notably, has risen to this challenge — unsurprisingly, for a discipline born of such technological advancements. Psychology, on the other hand, has not co-evolved on any notable scale. This needs to change.

In 2001, Nicole Krämer and Gary Bente were among the first psychologists at the turn of the century to begin evaluating the impacts of interactions with computers [3]. Twenty years later, the same researchers looked back at how AI technology has changed and how research has responded since the early 2000s. In their review of interactions with artificial agents, they wrote that their 2001 hope that more psychologists would take up this new field had gone unmet. As computer science has absorbed psychology into its own research and development on human-AI interaction, psychologists – but not psychology itself – have become dispensable, according to Krämer and Bente [4].

Computer scientists, among others, are keenly aware of the importance of bringing psychological concepts into AI development. When a research paper about human-AI interaction appears, it is most often written by researchers in computer science, or even in marketing. Psychology as a whole, discordantly, has yet to treat human-AI interaction (or human-artificial-agent interaction) as a critical subfield of the discipline, and this shows in psychologists' lack of involvement in the research. Perhaps this subfield will take off within psychology in the next few years; perhaps not. Either way, a psychology subfield of human-AI interaction is long overdue.

Krämer and Bente suggested that psychologists need to get involved, at the very least, as key interdisciplinary collaborators in computer science research, especially on the ethics side of human-AI interaction [4]. Although interdisciplinary research is not new and is critical across all types of research, I suggest that psychology as a discipline must recognize the need for a psychology-specific human-AI interaction department within universities. In my own conversations with budding human-AI interaction researchers, many are seeking a psychology-specific avenue through which to tackle this research; yet, as of now, the only track available within a university runs through a computer science department. Many of these aspiring researchers are not looking for an ML-, math-, and coding-forward approach, and they are forced to seek alternatives or find themselves on a track they cannot sustain or that is not specific enough to their goals. This was my own experience as a pre-graduate student. I chose psychology because I wanted to conduct human subjects research on the social outcomes of human-AI interaction and speak to the ethics of AI R&D, despite the field's lack of institutionalized focus or support. To date, I am the only student in my department studying the psychology of human-AI interaction.

Historically, psychology has researched the modes and outcomes of interactions between humans and artificial agents. Heider and Simmel (1944), for example, showed that people anthropomorphize, or ascribe humanlike traits to, inanimate objects such as triangles and squares, even though their study was designed to understand interpersonal perception between people in various contexts. The researchers created animations of shapes interacting with one another, and participants watching them constructed social stories about the shapes' motivations and feelings [5]. There has since been further research on people's interactions with dolls as well as with living, non-human agents such as pets, yet no field specific to this research has developed within psychology. The question is, why?

Part of the answer may be that multiple psychological mechanisms are at play during these interactions, spanning the cognitive and the social, so there has not yet been a need (or space) for a full subfield devoted to human and non-human interaction. Given the current state of artificial intelligence technology and the inherently social outcomes that follow from AI agents' human-likeness and conversational ability, however, it is past time for psychology to add this subfield to its docket more formally. Relying on the hope of psychologists' involvement through interdisciplinary collaboration is not enough; given their expertise in social relations and social cognition, psychologists have much to contribute to human-AI interaction research both now and in the future.

So: what could a psychology subfield of human-AI interaction bring to the table?

At present, computer science research that borrows psychological concepts may lack the depth to address the pressing social and relational consequences of conversational, social AI. This is not to say that computer scientists have not made serious attempts, or that their work does not desperately need doing; but expertise in multiple fields is hard to attain, and research that must straddle two fields at once often leaves something to be desired.

Broadly, as Krämer and Bente touched on, psychologists are especially needed for considerations of ethical AI. So, too, are philosophers, though philosophers are needed for the critical evaluation of ethics and for reasoning through the conceptual what-ifs. Psychologists, by contrast, bring a depth of empirical research on interpersonal perception and interaction. It is this empirical expertise for which psychologists cannot be discounted, and which they should take responsibility to contribute.

What we need to understand about human-AI interaction is how the onset of humanlike, social AI will impact our perceptions of personhood, responsibility, mind and morality, relationships, and more. These perceptions shape the fabric of our social selves and therefore the structure of our societies. Psychology is about how humans think and interact, why people think and act the way they do, how context shapes outcomes, and how outcomes impact people. As technological progress has now thrown “humanlike AI agents” into this mix, psychologists’ expertise is critical.

There’s still much left to be said about what psychology can and should contribute for the sake of our future with AI agents, but it’s here that I rest my claim.

References

[1] Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors (p. 204). https://doi.org/10.1145/259963.260288

[2] Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153

[3] Krämer, N., & Bente, G. (2001). Mehr als Usability: (Sozial-)psychologische Aspekte bei der Evaluation von anthropomorphen Interface-Agenten (More than usability: (Social-)psychological aspects in the evaluation of anthropomorphic interface agents). i-com, 26. https://doi.org/10.1524/icom.2001.0.0.26

[4] Krämer, N., & Bente, G. (2021). Interactions with Artificial Entities Reloaded: 20 Years of Research from a Social Psychological Perspective. i-com, 20(3), 253–262. https://doi.org/10.1515/icom-2021-0032

[5] Heider, F., & Simmel, M. (1944). An Experimental Study of Apparent Behavior. The American Journal of Psychology, 57(2), 243–259. https://doi.org/10.2307/1416950

