I’m a PhD candidate in Psychology & Social Policy at Princeton University.

My research falls under the umbrella of human-AI interaction, but I focus on artificial intelligence that is humanlike and conversational, such as digital voice assistants (Siri, Alexa, Cortana, Google Assistant, etc.) and chatbots on various apps (Replika, WoeBot, customer service chatbots, etc.).

Broadly, my research addresses the question: how does human-AI interaction impact human-human interaction? Some of my specific questions include:

  1. Can the appearance, behavior, and other characteristics of conversational AI impact our perceptions of self and others?
  2. How do our perceptions of bots shape our perceptions of self, personhood, and consciousness?
  3. Does interacting with humanlike AI serve as practice for, or an outlet for, behaviors toward people? (For example, if I am rude to a chatbot, am I more or less likely to be rude to a human afterward?)
  4. How might humanlike AI exacerbate bias related to gender, race, and outdated social norms?
  5. If human-AI interaction yields negative outcomes (e.g., exacerbated bias), what can we do to mitigate them or turn them positive?

My goal as a researcher is to understand why human-AI interaction matters, and what we can do about it. I approach this topic through an interdisciplinary lens, drawing on theories from social and cognitive psychology, social policy and ethics, cognition and consciousness, philosophy, and science fiction.

I’d like to thank my advisors at Princeton, Michael Graziano, Rebecca Carey, and Steven Kelts, for supporting my pursuit of theory and impact in the realm of human-AI interaction.