Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction

Rose E. Guingrich 1,2* and Michael S. A. Graziano 1,3

1 Department of Psychology, Princeton University, Princeton, NJ, United States; 2 Princeton School of Public and International Affairs, Princeton University, Princeton, NJ, United States; 3 Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States

EDITED BY Chiara Lucifora, University of Bologna, Italy
REVIEWED BY Francesca Ciardo, University of Milano-Bicocca, Italy; Kostas Karpouzis, Panteion University, Greece
*CORRESPONDENCE Rose E. Guingrich, rose.guingrich@princeton.edu

RECEIVED 16 October 2023
ACCEPTED 13 March 2024
PUBLISHED 27 March 2024

CITATION Guingrich RE and Graziano MSA (2024) Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction. Front. Psychol. 15:1322781. doi: 10.3389/fpsyg.2024.1322781

COPYRIGHT © 2024 Guingrich and Graziano. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

The question of whether artificial intelligence (AI) can be considered conscious and therefore should be evaluated through a moral lens has surfaced in recent years. In this paper, we argue that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction. When AI is viewed as conscious like a human, then how people treat AI appears to carry over into how they treat other people due to activating schemas that are congruent to those activated during interactions with humans. In light of this potential, we might consider regulating how we treat AI, or how we build AI to evoke certain kinds of treatment from users, but not because AI is inherently sentient. This argument focuses on humanlike, social actor AI such as chatbots, digital voice assistants, and social robots. In the first part of the paper, we provide evidence for carry-over effects between perceptions of AI consciousness and behavior toward humans through literature on human-computer interaction, human-AI interaction, and the psychology of artificial agents. In the second part of the paper, we detail how the mechanism of schema activation can allow us to test consciousness perception as a driver of carry-over effects between human-AI interaction and human-human interaction. In essence, perceiving AI as conscious like a human, thereby activating congruent mind schemas during interaction, is a driver for behaviors and perceptions of AI that can carry over into how we treat humans. Therefore, the fact that people can ascribe humanlike consciousness to AI is worth considering, and moral protection for AI is also worth considering, regardless of AI's inherent conscious or moral status.
Introduction
Consciousness is considered the subjective experience that people feel in association with
events, such as sensory events, memories, and emotions (Nagel, 1974; Harley, 2021). Many
people study consciousness, and there are just as many competing theories about what it is
and how it is generated in the human brain (e.g., Chalmers, 1996; Baars, 1997; Tononi, 2007;
Graziano, 2013; Doerig et al., 2020). Recently, people have speculated that artificial intelligence can also have consciousness (e.g., O'Regan, 2012; Yampolskiy, 2018; Chalmers, 2023). Whether that is possible, and how, is still debated (e.g., Koch, 2019). However, it is undeniable that children and adults attribute consciousness to AI through Theory of Mind attributions (Kahn et al., 2012; Broadbent et al., 2013; Eyssel and Pfundmair, 2015; Martini et al., 2016; Tanibe et al., 2017; Świderska and Küster, 2018; Heyselaar and Bosse, 2020; Küster and Świderska, 2020; Taylor et al., 2020). Some researchers have argued that consciousness is fundamentally an attribution, a construct of social cognitive machinery, and that we attribute it to other people and to ourselves (Frith, 2002; Graziano, 2013; Prinz, 2017). As such, regardless of whether AI is conscious, attributing consciousness to AI matters in the same way attributing it to other humans does.

Premack and Woodruff (1978) coined the term Theory of Mind (ToM), the ability to attribute mind states to oneself and others. For example, one heavily studied aspect of ToM is the ability to recognize false beliefs in others (Wimmer and Perner, 1983). This cognitive capability has historically distinguished humans from many other species, yet Rabinowitz et al. (2018) claimed that artificial intelligence passed the false belief test. ToM may extend beyond attributing beliefs to attributing other aspects of mind such as emotions and intentionality. According to some, ToM can be divided into two distinct processes: attributing agency, or the ability to decide and act autonomously, and attributing experience, or the ability to have subjective states (Gray et al., 2007; Knobe and Prinz, 2007). Attributing consciousness to AI is therefore probably not one, single process, but instead should be broken down into experience and agency, with each part analyzed separately (Ward et al., 2013; Küster et al., 2020).

It has been suggested that attributing experience, rather than agency, plays a larger role in the perception of consciousness in AI (Knobe and Prinz, 2007). This distinction may present some difficulties for accurately measuring whether people view AI as conscious. People are generally more willing to assign agency than experience to a variety of targets, including robots (Gray and Wegner, 2012; Jacobs et al., 2021). This may be due in part to it being easier to determine whether an agent can make decisions or act on its own (agency) than whether an agent can feel pain or pleasure (experience). Adding further complexity, not all people ascribe agency and experience to AI in the same manner. For example, psychopathology and personality traits such as emotional stability and extraversion correlate with whether someone ascribes agency or experience to robots: emotional stability positively correlates with ascribing agency to robots, and extraversion positively correlates with attributing experience to robots (Tharp et al., 2016). Other individual differences such as people's formal education may also relate to whether someone attributes agency characteristics like intentionality to a humanoid robot (Roselli et al., 2023). Given these findings, it may be useful to operationalize ToM as a complex, overarching collection of interrelated processes, each of which plays a different role in how people attribute consciousness to machines.

The attribution of consciousness to AI is particularly relevant to social actor AI. These humanlike agents are social embodiments of intelligent algorithms that people can talk to and even engage with physically. Social actor AI includes chatbots, digital voice assistants, and social robots. Social actor AI's humanlike characteristics, from how the AI is embodied—like its bodily form, voice, and even linguistic style—to its ability to process social information, are unique within the category of artificial, non-human agents. Social actor AI is arguably more akin to humans than are other machines and objects. As such, how people behave toward social actor AI agents might be more likely to impact how they behave toward another human, despite the fact that these AI agents are not themselves living beings. Velez et al. (2019) posited that "an increasingly important question is how these social responses to agents will influence people's subsequent interactions with humans." Moreover, social actor AI is evolving rapidly. As Etzrodt et al. (2022) described it, "We are witnessing a profound change, in which communication through technologies is extended by communication with technologies." Instead of using social media as a medium through which you can interact with other people, users can, for example, download an app through which they can interact with a non-human being. Companion chatbots like Replika, Anima, or Kiku have millions of people using their apps. Millions more have digital voice assistants such as Siri and Alexa operating on their smartphones and in their homes. People form relationships with these agents and can come to view them as members of the family, friends, and even lovers (Croes and Antheunis, 2020; Garg and Sengupta, 2020; Brandtzæg et al., 2022; Xie and Pentina, 2022; Guingrich and Graziano, 2023; Loh and Loh, 2023). AI agents will almost certainly become both more ubiquitous and humanlike. As new generations grow up with these technologies on their mobile devices and in their homes, the consequences of humanlike AI will likely become more pronounced over time.

In this paper, we will not consider what, exactly, consciousness is, what causes it, or whether non-human machines can have it. Instead, the goal here is to discuss how people perceive consciousness in social actor AI, to explore the possible profound social implications, and to suggest potential research questions and regulatory considerations for others to pursue within this scope of research.

Part 1: evidence for carry-over effects between human-AI interaction and human-human interaction

Carry-over effects between AI's tangible and intangible characteristics

When people interact with AI, tangible characteristics of the agent such as appearance or embodiment, behavior, communication style, gender, and voice can affect how people perceive intangible characteristics such as mind and consciousness, emotional capability, trustworthiness, and moral status (Powers and Kiesler, 2006; Gray and Wegner, 2012; Broadbent et al., 2013; Eyssel and Pfundmair, 2015; Seeger and Heinzl, 2018; Lee et al., 2019; Küster et al., 2020; Dubosc et al., 2021; Rhim et al., 2022). The critical tangible-intangible relationship examined here is the one between an agent's humanlike embodiment and consciousness ascription (Krach et al., 2008; Broadbent et al., 2013; Ferrari et al., 2016; Abubshait and Wiese, 2017; Stein et al., 2020).

Generally, the more tangibly humanlike that people perceive an AI agent to be, the more likely people are to ascribe mind to the agent (e.g., Broadbent et al., 2013). At least one study suggests that mind ascription does not increase with human likeness until a particular threshold of human likeness is reached; once an agent's appearance
reaches the middle of the machine-to-human spectrum and the AI agent's appearance includes actual human features such as eyes and a nose, then mind ascription begins to increase with human likeness (Martini et al., 2016).

People are not always aware that they attribute mind to an AI agent during interaction. In other words, the construct of mind or consciousness activated in people during these interactions may be implicit, making it more difficult to measure. Banks (2019) conducted an online survey to compare participants' implicit and explicit ascriptions of mind to an agent. Participants (N = 469) were recruited from social media and university research pools and were randomly assigned to one of four agents. Three of the agents were social AIs that varied in their human likeness and mind capacity, and one was a human control, all named "Ray." Banks tested implicit ascription of mind using five classic ToM tests that measure whether participants ascribe mind to an agent, including the white lie scenario and the Sally-Anne test. Explicit ascription of mind was measured by two questions: do you think Ray has a mind, and how confident are you in your response? For the implicit tests' open-ended responses, trained, independent raters coded the data for mentalistic explanations of behavior. The results showed that while people implicitly ascribed ToM to humanlike AI, this implicit ascription did not correlate with explicit mind ascriptions.

Mind ascription appears to be automatically induced by AI's tangible human likeness, even when subjects are prompted to believe the opposite. Stein et al. (2020) compared mind ascriptions in a 2 × 2 between-subjects design of embodiment and mind capability for 134 German-speaking participants recruited from social media and mailing lists. Stimuli included vignettes and videos of either a text-based chatbot interface (Cleverbot) or a humanoid robot (with a 3-D rendered face of a woman) that was described as built on a simple or complex algorithm. The complex algorithm description included humanlike mind traits such as empathy, emotions, and understanding of the user. The researchers found a multivariate main effect of embodiment, such that people ascribed more mind capabilities to the humanoid robot than the text-based chatbot, regardless of whether it was based on a simple or complex algorithm. These researchers reported that "a digital agent with human-like visual features was indeed attributed with a more human-like mind—regardless of the cover story that was given regarding its actual mental prowess."

In sum, evidence suggests that an AI agent's observable or tangible characteristics, specifically its humanlike appearance, lead automatically to ascribing intangible characteristics, including consciousness, to the AI agent. As such, slight adjustments to AI's tangible characteristics can impact whether people perceive the artificial agent as conscious.

Carry-over effects between perceiving mind in AI and human-AI interaction

In some cases, ascribing a mind to AI is linked with viewing the agent as likable and trustworthy (Young and Monroe, 2019), which can impact whether people engage in helping behaviors. Srinivasan and Takayama (2016) found that when people perceived a robot as having an agentic mind, such that the robot was acting of its own accord rather than being controlled by a human, they came to its aid 50% more quickly. Study 1 was a mixed-design experiment conducted online (N = 354, recruited from Amazon Mechanical Turk) in which participants each watched eight videos of robots requesting help using various politeness strategies, and study 2 was a behavioral lab study (N = 48, recruited via university participant pools and postings in local areas) with three conditions that were based on study 1's results. In study 2, participants watched a movie with a robot in the room (Willow Garage's Personal Robot 2). During the movie, the robot brought food to the participant and mentioned that the room looked like it needed to be cleaned, offered to do so, and requested aid from the participant. While the majority of participants helped the robot, those participants who rated the robot as more agentic came to its aid more quickly.

Depending on the paradigm, ascribing mind to AI can affect ease of interaction by augmenting or inhibiting the dyadic flow. Interacting with a humanlike artificial agent spurs the automatic use of human social scripts (Nass and Moon, 2000; Nass and Brave, 2005) and other social processes (von der Pütten et al., 2009), which can facilitate human-AI interaction (Sproull et al., 1996; Rickenberg and Reeves, 2000; Krämer et al., 2003a,b; Duffy, 2008; Krämer et al., 2009; Vogeley and Bente, 2010; Kupferberg et al., 2011). Facilitation of interaction and likability are, however, dependent on individual differences such as familiarity with the AI (Wang et al., 2021), need for social inclusion or interaction (Lee et al., 2006; Eyssel and Pfundmair, 2015), and other individual differences (Lee, 2010).

At a certain point, interaction facilitation no longer increases with human likeness across both tangible and intangible domains. The benefits of human likeness decrease dramatically when human likeness suddenly becomes creepy, according to the Uncanny Valley Hypothesis coined by Mori (1970). When an AI agent's appearance approaches the tipping point of "not enough machine, not enough human," the AI has entered the dip of the uncanny valley. At this point, an artificial agent's human likeness becomes disturbing, thereby causing anxiety or discomfort in users. The discomfort arising from the uncanny valley effect is generally distinct from dislike yet can have similar negative effects on the flow of interaction (Quadflieg et al., 2016).

The uncanny valley theory of human-AI interaction more recently acquired a qualifier: the uncanny valley of mind (Stein and Ohler, 2017; Appel et al., 2020). No longer just concerned with general human likeness, the uncanny valley effect can occur when AI's mind capabilities get too close to those of a human mind. It is uncertain whether negative uncanny valley effects of mind are stable, however, given the contradictions within this more recent scope of research. In Stein et al.'s (2020) study, the researchers also found that the AI with low mind capacity, based on a simple algorithm rather than an advanced one, caused more discomfort when the AI was embodied rather than solely text-based. In another study, the researchers found that the more people perceived AI or humans to have a typically human mind, the fewer eerie feelings they experienced (Quadflieg et al., 2016). Due to inconsistent stimuli across studies, it is possible that slight variations in facial features or voice of the AI agent drove these dissimilar effects. In these cases, it may be useful to control for appearance when attempting to parse out the impacts of the uncanny valley of mind on how people interact with AI agents.

Via a series of three studies, Gray and Wegner (2012) made the claim that experiential aspects of mind, and not those of agentic mind, drive uncanny valley effects. In one of the studies, participants, recruited from subway stations and dining halls (N = 45), were given
vignettes of a supercomputer that was described as having only experience capabilities, as having only agency, or as simply mechanical. They then rated their feelings (uneasy, unnerved, and creeped out) and perceptions of the supercomputer's agency and experience. The experiential supercomputer elicited significantly higher uncanny valley feelings than the agents in the other two conditions. Apparently, an intelligent computer that is seen as having emotion is creepier than one that can make autonomous decisions. The distinction between uncanny valley effects of experience and agency may be caused by feelings of threat: AI agents that are capable of humanlike emotion threaten that which makes mankind special (Stein and Ohler, 2017).

If threat drove discomfort in Gray and Wegner's participants, then familiarity with the agent might mitigate perceptions of threat to the point at which the uncanny valley switches into the "happy valley." According to that hypothesis, after long-term, comfortable, and safe exposure to a humanlike AI agent, people might find the agent's human likeness to increase its likability, which might facilitate human-AI interaction (Cheetham et al., 2014).

The uncanny valley effect with respect to AI is therefore more complicated and difficult to study than it may at first appear. Familiarity with AI over time, combined with the increasing ubiquity of social actor AI, may eliminate uncanny valley effects altogether. Uncanny valley effects differ across studies, and are affected by multiple factors, including expectation violation (Spence et al., 2014; Edwards et al., 2019; Lew and Walther, 2022), individual differences (MacDorman and Entezari, 2015), and methodological differences such as stimuli and framing. Further, the way the uncanny valley graph rises to a peak has been contested. For example, researchers have debated exactly where that peak lies on the machine-to-human scale (Cheetham et al., 2014; Pütten and Krämer, 2014; Stein et al., 2020). However, what we do know is that perceiving mind in AI affects people's emotional state and how they interact with AI, making the intangible characteristic of mind one of the mechanisms that impacts human-AI interaction.

Carry-over effects between human-AI interaction and human-human interaction

Most studies on human-AI interactions, such as those reviewed above, focus on what could be called one-step effects like the uncanny valley effect, trust, and likability. Such studies are concerned with how characteristics of AI impact how people interact with the agent. Arguably a more important question is the two-step effect of how human-AI interactions might impact subsequent human-human interactions. Though findings on these two-step effects are limited and sometimes indirect, the data do suggest that such effects are present. The impact of AI is not confined to the interaction between a user and an AI agent, but rather carries over into subsequent interactions between people.

Social Cognitive Theory, anthropomorphism, and ToM literature provide theoretical foundations for why interactions with social actor AI could prompt carry-over effects on human-human interaction. Due to the social nature of these agents, AI can act as a model for social behavior that users may learn from (Bandura, 1965, 1977). According to Waytz et al. (2010), when someone anthropomorphizes or ascribes mind to an artificial agent, that agent then "serves as a source of social influence on the self." In other words, "being watched by others matters, perhaps especially when others have a mind like one's own." Social actor AI is an anthropomorphized target; therefore, it can serve as a role model or operate as an ingroup member that has some involvement in setting social norms, as seen with the persuasive chatbot that convinced people to donate less to charity (Zhou et al., 2022), the chatbot that persuaded users to get vaccinated for COVID-19 or participate in social distancing (Kim and Ryoo, 2022), and the humanlike avatar that elicited more socially desirable responses from participants than a mere text-based chatbot did (Krämer et al., 2003a). Social actor AI can persuade people in these ways, regardless of whether people trust it or perceive it as credible (Lee and Liang, 2016, 2019). In some paradigms, chatbot influence mimics that of people: chatbots can implement foot-in-the-door techniques to influence people's emotions and bidding behavior in gambling (Teubner et al., 2015) and can alter consumers' attitudes and purchasing behavior (Han, 2021; Poushneh, 2021).

Another explanation for why AI can socially influence people may be that the user views the agent as being controlled by another human. Some research suggests that perceiving a human in the loop during interactions with AI results in stronger social influence and more social behavior (Appel et al., 2012; Fox et al., 2014). This idea, however, has since been contested (Krämer et al., 2015). Indeed, early research on human-computer interaction found that when people perceived a computer as a social agent, they did not simply view it as a product of human creation, nor did they imagine that they were interacting with the human engineer who created the machine (Nass et al., 1994; Sundar and Nass, 2000). Nass and colleagues designed a series of paradigms in which participants were tutored, via audio emitted from computer terminals, by computers or human programmers that subsequently evaluated participants' performance. To account for the novelty of computers at the time, the earlier studies were conducted with experienced computer users. They found significant differences between the computer and human tutor conditions, such that people viewed computers not just as entities controlled by human programmers, but as entities to which the ideas of "self" and "other" and social agency applied. Nass and colleagues laid the groundwork for evaluating the social consequences of interacting with intelligent machines, as their experiments provided initial evidence that people treated the machines themselves as social actors. As such, it may be the case that social influence is strengthened when people think a human is involved, yet social influence still exists when the AI agent is perceived as acting of its own accord.

Communication researchers have found that the way people communicate with AI is linked to how they communicate with other humans thereafter, such that people are then more likely to speak to another human in the same way in which they habitually speak to an artificial agent. For example, talking with the companion chatbot Replika caused users' linguistic styles to converge with the style of their chatbot over time (Wilkenfeld et al., 2022). The way children speak with social actor AI such as the home assistant Alexa can carry over into how children speak to their parents and others (Hiniker et al., 2021). Garg and Sengupta (2020) tracked and interviewed 18 families who used a digital voice assistant in their homes over an average of 58 weeks and analyzed raw audio interactions with the assistant. These researchers found that "when children give commands at
a high volume, there is an aggressive tone, which often unintentionally seeps into children's conversations with friends and family." A parent in the study commented that, "If I do not listen to what my son is saying, he will just start shouting in an aggressive tone. He thinks, as Google responds to such a tone, I would too." While home assistants can negatively impact communication, they can also foster communication within families and alter how communication breakdowns are repaired (Beneteau et al., 2019, 2020). Parents have concerns about their children interacting with social actor AI, but they also see AI's potential to support children by "attuning to others, cultivating curiosity, reinforcing politeness, and developing emotional awareness" (Fu et al., 2022). According to the observational learning concept in Social Cognitive Theory (Bandura, 1965), assistants might provide models for prosocial behavior that children could learn from (such as being polite, patient, and helpful) regardless of whether the assistant provides positive reinforcement when children act in these prosocial ways. The studies mentioned above show how both children's positive and negative modes of communication can be reinforced via interactions with home assistants.

Not only can social actor AI affect the way that people communicate with each other within their relationships, but it also has the potential to impact relationships with other people due to attachment to the agent. Through in-depth interviews with existing Replika users (N = 14, ages 18–60), Xie and Pentina (2022) suggested that AI companions might replace important social roles such as family, friends, and romantic partners through unhealthy attachment and addiction. An analysis of families' use of Google Home revealed that children, specifically those between the ages of 5 and 7, believed the device to have feelings, thoughts, and intentions and developed an emotional attachment to it (Garg and Sengupta, 2020). These children viewed Google Home as if it had a mind through ascribing characteristics of agency and experience to it.

The psychosocial benefits of interactions with social actor AI may either contribute to positive relational skill-building if AI is used as a tool, or they may lead to human relationship replacement if these benefits are comparatively too difficult to get from relationships with real people. Research suggests that people self-disclose more when interacting with a computer versus with a real person, in part due to people having lower fear of being judged, thereby prompting more honest answers (Lucas et al., 2014). This effect is found even though the benefits of emotional self-disclosure are equal whether people are interacting with chatbots or human partners (Ho et al., 2018). Further, compared to interacting with other people, those interacting with artificial agents experience fewer negative emotions and lower desire for revenge or retaliation (Kim et al., 2014). Surveys of users of the companion chatbot Replika suggest that users find solace in human-chatbot relationships. Specifically, those who have experienced trauma in their human relationships, for example, indicate that Replika provides a safe, consistent space for positive social interaction that can benefit their social health (Ta et al., 2020; Guingrich and Graziano, 2023). The question is whether the benefits of human-AI interaction presented here may lead to people choosing AI companions over human ones.

In part 1, we have reviewed evidence that human-AI interaction, when moderated by perceiving the agent as having a humanlike mind or consciousness, has carry-over effects on human-human interaction. In part 2, we address the mechanism of this moderator through congruent schema activation. We further pose two theoretical types of carry-over effects that may occur via congruent schema activation: relief and practice.

Part 2: mechanisms and types of carry-over effects: schemas and relief or practice

Schema congruence and categorization

What is the mechanism by which people's attributions of consciousness to AI lead to carry-over effects on interactions with other humans? One possibility is the well-known mechanism of activating similar schemas of mind when interacting with different agents. We propose that ascribing mind or consciousness to AI through automatic, congruent schema activation is the driving mechanism for carry-over effects between human-AI interaction and human-human interaction.

Schemas are mental models with identifiable properties that are activated when engaging with an agent or idea, and they are useful ways of organizing information that help inform how to conceptualize and interact with new stimuli (Ortony and Anderson, 1977; McVee et al., 2005; Pankin, 2013). For example, the schema you have for your own consciousness informs how you understand the consciousness of others. You assume, because your experience of consciousness contains X and Y characteristics, that another person's consciousness also contains X and Y characteristics, and this facilitates understanding and subsequent social interaction between you and the other person (Graziano, 2013).

Researchers have analyzed the consequences of failing to fully activate all properties of mind schemas between similar agents. For example, the act of dehumanization reflects a disconnect between how you view your mind and that of other people. Instead of activating the consciousness schema with X and Y characteristics during interaction with another human, you may activate only the X characteristic of the schema. Dehumanization is linked to social consequences such as ostracism and exclusion, which can harm social interaction (Bastian and Haslam, 2010; Haslam and Loughnan, 2014).

We can apply the idea of schema congruence to interactions with social actor AI while also taking into consideration the level of advancement of the AI in question. Despite AI being more advanced than other technology like personal mobile devices or cars in terms of human likeness and mind ascription, some research suggests that social actor AI still falls short of the types of mind schemas that are activated when people interact with each other. However, humanlike AI is developing at a rapid rate. As it does, the schematic differences between AI agents and humans will likely blur more than they already have. To better understand the consequences of current social actor AI, it may be prudent to observe the impacts of human-AI interaction through ingroup-outgroup or dehumanization processes, both of which are useful psychological lenses for group categorization. We propose that psychological tests of mind schema activation will be especially useful for more advanced, future AI that is more clearly different from possessions like cars and phones but similar to humans in terms of mind characteristics.
Schematic incongruence yields uncanny valley effects

Categorization literature attempts to delineate whether people treat social actor AI as non-human, human, or other. The data are mixed, but some of the results may stem from earlier AI that is not as capable. Now that AI is becoming sophisticated enough that people can more easily attribute mind to it, the categories may change. In this literature, social AI is usually classified by study participants as somewhere on the spectrum between machine and human, or it is classified as belonging to its own, separate category (Severson and Carlson, 2010). That separate category is often described as not quite machine, not quite human, with advanced communication skills and other social capabilities, and has been labeled with mixed-category words like humanlike, humanoid, and personified things (Etzrodt and Engesser, 2021).

Some researchers claim that the uncanny valley effect is driven by categorization issues. In that hypothesis, humanlike AI is creepy because it does not fit into categories for machine or human but exists in a space for which people do not have a natural, defined category (Burleigh et al., 2013; Kätsyri et al., 2015; Kawabe et al., 2017). Others claim that category uncertainty is not the driver of the uncanny valley effect, but, rather, inconsistency is (MacDorman and Chattopadhyay, 2016). In that hypothesis, because of the inconsistencies between AI and the defining features of known categories, people treat humanoid AI agents as though they do not fit into a natural, existing category (Gong and Nass, 2007; Kahn et al., 2011). Because social actor AI defies boundaries, it may trigger outgroup processing effects such as dehumanization that contribute to negative affect. The cognitive load associated with category uncertainty, more generally, may also trigger negative emotions that are associated with the uncanny valley effect.

Social norms likely play a role in explicit categorization of social AI (Hoyt et al., 2003). People may be adhering to a perceived social norm when they categorize social AI as machinelike rather than humanlike. It is possible that people explicitly place AI into a separate category from people, while the implicit schemas activated during interaction contradict this separation. The uneasy feeling from the uncanny valley effect may be a product of people switching between ascribing congruent mind schemas to the agent in one moment and incongruent ones in the next.

Schematic congruence yields carry-over effects on human-human interaction

As humanlike AI approaches the human end of the machine-to-human categorization spectrum, it also advances toward a position in which people can more easily ascribe a conscious mind to it, thereby activating congruent mind schemas during interactions with it. Activating congruent schemas impacts how people judge the agent and its actions. For example, the belief that you share the same phenomenological experience with a robot changes the way you view its level of intent or agency (Marchesi et al., 2022). Activation of mind-similarity may resemble simulation theory (Harris, 1992; Röska-Hardy, 2008). In that hypothesis, the observer does not merely believe the artificial agent has a mind but simulates that mind through the neural machinery of the person's own mind. Simulation allows the agent to seem more familiar, which facilitates interaction.

Some researchers have used schemas as a lens to explain why people interact differently with computer partners vs. human ones (Hayashi and Miwa, 2009; Merritt, 2012; Velez et al., 2019). In this type of research, participants play a game online and are told that their teammate is either a human or a computer, but, unbeknownst to the participants, they all interact with the same confederate-controlled player. This method allows researchers to observe how schemas drive perceptions and behavior, given that the prime is the only difference. According to Fox et al. (2014), when people believed themselves to be interacting with a human agent, they were more likely to be socially influenced. Velez et al. (2019) took this paradigm one step further and observed that activating schemas of a human mind during an initial interaction with an agent resulted in carry-over effects on subsequent interactions with a human agent. These researchers employed a 2 × 2 between-subjects design in which participants played a video game with a computer agent or human-backed avatar. They were then presented with the option to engage prosocially through a prisoner's dilemma money exchange with a stranger thereafter. When participants (N = 184) thought they were interacting with a human and that player acted pro-socially, they behaved more pro-socially toward the stranger. However, when participants believed they were interacting with a computer-controlled agent and it behaved pro-socially toward them, they had lower expectations of reciprocity and donated fewer game credits to the human stranger with whom they interacted subsequently. In the interpretation of Velez et al., the automatic anthropomorphism of the computer-backed agent was a mindless process (Kim and Sundar, 2012) and therefore not compatible with the cognitive-load-requiring social processes thereafter (Velez et al., 2019).

One of the theories that arose from research on schema activation in gaming is the Cooperation Attribution Framework (Merritt, 2012). According to Merritt, the reason people behave differently when game playing with a human vs. an artificial partner is that they generate different initial expectations about the teammate. These expectations activate stereotypes congruent with the teammate's identity, and confirmations of those stereotypes are given more attention during game play, causing a divergence in measured outcomes. According to Merritt, "the differences observed are broadly the result of being unable to imagine that an AI teammate could have certain attributes (e.g., emotional dispositions). …the 'inability to imagine' impacts decisions and judgments that seem quite unrelated." The computer-backed agents used in this research may evoke a schema incompatible with humanness—one that aligns with the schema of a pre-programmed player without agency—whereas more modern, advanced AI might evoke a different, more congruent schema in human game players.

Other studies examined schema congruence by seeing how people interact with and perceive an AI agent if its appearance and behavior do not fit into the same humanlike category. Expectation violation and schema incongruence appear to impact social responses to AI agents. In two studies, Ciardo et al. (2021, 2022) manipulated whether an AI agent looked humanlike and made errors in humanlike (vs. mechanical) ways. They then observed whether people attributed intentionality to the agent or were socially inclusive with it.
Coordination with the AI agent during the task and social inclusion with the AI agent after the task were impacted by humanlike errors during the task only if the agent's appearance was also humanlike. This variation in response toward the AI may have to do with ease of categorization: if an agent looks humanlike and acts humanlike, the schemas activated during interaction are stable, which facilitates social response to the agent. On the other hand, if an agent looks humanlike but does not act humanlike, schemas may be switching and people may incur cognitive load and feel uncertain about how to respond to the agent's errors. In their other study, these researchers found that when a humanlike AI agent's mistakes were also humanlike, people attributed more intentionality to it than when a humanlike AI agent's mistakes were mechanical.

To understand why people might unconsciously or consciously view social actor AI as having humanlike consciousness, it is useful to understand individual differences that contribute to automatic anthropomorphism (Waytz et al., 2010) and therefore congruent schema activation. Children who have invisible imaginary friends are more likely to anthropomorphize technology, and this is mediated by what the researchers call the "imaginative process of simulating and projecting internal states" through role-play (Severson and Woodard, 2018). As social AI agents become more ubiquitous, it is likely that mind-ascription anthropomorphism will occur more readily; for instance, intensity of interaction with the chatbot Replika mediates anthropomorphism (Pentina et al., 2023). Currently, AI is not humanlike enough to be indistinguishable from real humans. People are still able to identify real from artificial at a level better than chance, but this is changing. What might happen once AI becomes even more humanlike, to the point of being indistinguishable from real humans? At that point, the people who have yet to generate a congruent consciousness schema for social actor AI may do so. Others may respond by becoming more sensitive to subtle, distinguishing cues and by creating more distinct categories for humans and AI agents. At some point in the development of AI, perhaps even in the near future, the distinction between AI behavior and real human behavior may disappear entirely, and it may become impossible for people to accurately separate these categories no matter how sensitive they are to the available cues.

Possible types of carry-over effects: relief or practice

What, exactly, is the carry-over effect between human-AI interaction and human-human interaction? We will examine two types of carry-over effects that do not necessarily reflect all potential outcomes but that provide a useful comparison by way of their consequences: relief and practice. In the case of relief, doing X behavior with AI will cause you to do less of X behavior with humans subsequently. In the case of practice, doing X behavior with AI will cause you to do more of X behavior with humans subsequently. The preponderance of the evidence so far suggests that practice is more likely to be observed, and its consequences outweigh those of relief (Garg and Sengupta, 2020; Hiniker et al., 2021; Wilkenfeld et al., 2022).

The following scenarios illustrate theoretical examples of both effects. Consider an example of relief. You are angry, and you let out your emotions on a chatbot. Because the chatbot has advanced communication capabilities and can respond intelligently to your inputs, you feel a sense of relief from berating something that reacts to your anger. Over time, you rely on ranting to this chatbot to release your anger, and as a result, you are relieved of your negative emotions and are less likely to lash out at other people.

Now consider an example of practice. Suppose you are angry. You decide to talk to a companion chatbot and unleash your negative emotions on the chatbot, speaking to it rudely through name-calling and insults. The chatbot responds only positively or neutrally to your attacks, offering no negative backlash in return. This works for you, so you continue to lash out at the chatbot when angry. Since this chatbot is humanlike, you tend not to distinguish between this chatbot and other humans. Over time, you start to lash out at people as well, since you have not received negative feedback from lashing out at a humanlike agent. The risk threshold for relieving your anger at something that will socialize with you is lowered. You have effectively practiced negative behavior with a humanlike chatbot, which leads you to engage more in that type of negative behavior with humans. Practice can involve more than negative behaviors. Suppose you have a friendly, cooperative interaction with an AI, in which you feel safe enough to share your feelings. Having engaged in that practice, maybe you are more likely to engage in similar positive behavior toward others in your life.

Both of these examples illustrate ways in which antisocial behavior toward humans can be reduced or increased by interactions with social actor AI. There are also situations in which prosocial behaviors can be reinforced. Which of the scenarios, relief or practice, are we more likely to observe? The answer to this question will inform the way society should respond to or regulate social actor AI.

Evidence against relief and evidence for practice effects

Researchers have proposed that people should take advantage of social actor AI's human likeness to use it as a cathartic object. Coined by Luria et al. (2020), the idea of a cathartic object is familiar: for example, a pillow can be used as a cathartic object by punching it in anger, thereby relieving oneself of the emotion. This is, colloquially, a socially acceptable behavior toward the target. Luria takes this one step further by suggesting that responsive, robotic agents that react to pain or other negative input can provide even more relief than an inanimate object, and that we should use them as cathartic objects. Luria claims that the reaction itself, which mirrors a humanlike pain response, provides greater relief than that of an object that does not react. One such "cathartic object" designed by Luria is a cushion that vibrates in reaction to being poked by a sharp tool. The more tools you put into the cushion, the more it vibrates, until it shakes so violently that the tools fall out. You can repeat the process as much as desired.

The objects presented by Luria as potential agents of negative-emotion relief are simply moving, responsive objects at this stage. However, Luria proposes the use of more humanlike agents, such as social robots, as cathartic objects. In one such proposition, Luria suggests that people throw knives at a robotic, humanlike bust that responds to pain. In another example, Luria suggests a ceremonial interaction in which a child relieves negative emotions with a responsive robot that looks like a duck.
Luria's proposal rests on the assumption that releasing negative emotions on social robots will relieve the user of that emotion. Catharsis literature, however, challenges this assumption: research suggests that catharsis of aggression does not reduce subsequent aggression, but can in fact increase it, providing evidence for practice effects (Denzler and Förster, 2012; Konečni, 2016). Catharsis researchers posit that the catharsis of negative behavior and feelings requires subsequent training, learning, and self-development post-catharsis to lead to a reduction of the behavior. Therapy, for example, provides a mode through which patients can feel catharsis and then learn methods to reduce negative feelings or behaviors toward others. Even so, the catharsis or immediate relief alone does not promise a reduction of that behavior or feeling (Alexander and French, 1946; Dollard and Miller, 1950; Worchel, 1957) and can in many ways exacerbate negative feelings (Anderson and Bushman, 2002; Bushman, 2002). Other researchers found that writing down feelings of anger was less effective than writing to the person who made the participant angry, yet neither mode of catharsis alleviated anger responses (Zhan et al., 2021). These findings suggest that whether you were to write to a chatbot and tell it about your anger, or bully it, the behavior would only result in increased aggression toward other people.

Recent data on children and their interactions with home assistants such as Amazon's Alexa or Google Assistant suggest that negative interactions with AI, including using an aggressive, loud tone of voice with it, do not lead to a cathartic reduction in aggression toward others, but to the opposite, an increase in aggressive tone toward other people (Beneteau et al., 2019, 2020; Garg and Sengupta, 2020; Hiniker et al., 2021). These data suggest that catharsis does not work for children in their interactions with AI and may be cause for concern.

This concern is especially important given that children tend to perceive a humanlike mind in non-human objects in general, more so than adults. When asked to distinguish between living and non-living agents, including robots, children experience some difficulty. Even when children do not ascribe biological properties to robots, research suggests that children can still ascribe psychological properties, like agency and experience, to robots (Nigam and Klahr, 2000). There appears to be a historical trend of increasing mind ascription to technology in children over the years. This trend may reflect the increased human likeness and skills of technology, and therefore provide us a prediction for the future. In 1995, children at the age of five reported that robots and computers did not have brains like people (Scaife and Van Duuren, 1995), but in a research study in 2000, children ascribed emotion, cognitive abilities, and volition to robots, even though most did not consider the robot to be alive (Nigam and Klahr, 2000). In studies conducted in 2002 and 2003, children 3–4 years old tended not to ascribe experiential mind to robots but did ascribe agentic qualities such as the ability to think and remember (Mikropoulos et al., 2003). According to Severson and Woodard (2018), not unlike some theories of consciousness in which people perceive there to be a person inside their mind, "There are numerous anecdotes that young children think there's a little person inside the device" in home assistants like Alexa. Children with more exposure to and affinity with digital voice assistants have more pronounced psychological conceptions of technology, but it is unclear whether conceptions of technology and living things are blurred together (Festerling et al., 2022). Children do distinguish between technology and other living things through ascriptions of intelligence, however (Bernstein and Crowley, 2008). Goal-directed, autonomous behavior (a component of ToM) is one of the key mechanisms by which children distinguish an object as being alive (Opfer, 2002; Opfer and Siegler, 2004). Given that children appear to be ascribing mind to technology more than ever, this trend is likely to continue with AI advancement.

We are skeptical that socially mistreating AI can result in emotional relief, translating into better social behavior toward other people. Although the theory has been proposed, little if any evidence supports it. Encouraging people, and especially children, to berate or socially mistreat AI on the theory that it will help them become kinder toward people seems ill-advised to us. In contrast, the existing evidence suggests that human treatment of AI can sometimes result in a practice effect, which carries over to how people treat each other. Those practice effects could either result in social harm, if antisocial behavior is practiced, or social benefit, if pro-social behavior is practiced.

Discussion

The moral issue of perceiving consciousness in AI and suggested regulations

As stated at the beginning of this article, we do not take sides here on the question of whether AI is conscious. However, we argue that the fact that people often perceive it to be conscious is important and has social consequences. Mind perception is central to this process, and mind perception itself evokes moral thinking. Some researchers claim that "mind perception is the essence of morality" (Gray and Wegner, 2012). When people perceive mind in an agent, they may also view it as capable of having conscious experience and therefore perceive it as something worthy of moral care (Gray et al., 2007). Mind perception moderates whether someone judges an artificial agent's actions as moral or immoral (Shank et al., 2021). We suggest that when people perceive an agent to possess subjective experience, they perceive it to be conscious; when they perceive it to be conscious, they are more likely to perceive it as worthy of moral consideration. A conscious being is perceived as an entity that can act morally or immorally, and that can be treated morally or immorally.

We suggest it is worth at least considering whether social actor AI, as it becomes more humanlike, should be viewed as having the status of a moral patient or a protected being that should be treated with care. The crucial question may not be whether the artificial agent deserves moral protection, but rather whether we humans will harm ourselves socially and emotionally if we practice harming humanlike AI, and whether we will help ourselves if we practice pro-social behavior toward humanlike AI. We have before us the potential for cultural improvement or cultural harm as we continue to integrate social actor AI into our world. How can we ensure that we use AI for good? There are several options, some of which are unlikely and unenforceable, and one of which we view as being the optimal choice.

One option is to enforce how people treat AI, to reduce the risk of the public practicing antisocial behavior and to increase the
practice of prosocial behavior. Some have taken the stance that AI should be morally protected. According to philosophers such as Ryland (2021a,b), who characterizes relationships with robots in terms of friendship and hate, hate toward robots is morally wrong, and we should consider it even more so as robots become more humanlike. Others have claimed that we should give AI rights or protections, because AI inherently deserves them due to its moral-care status (Akst, 2023). Not only is this suggestion vague, but it is also pragmatically unlikely. Politically, it is overwhelmingly unlikely that any law would be passed in which a human being is supposed to be arrested, charged, or serve jail time for abusing a chatbot. The first politician to suggest it would end their career. Any political party to support it would lose the electorate. We can barely pass laws to protect transgender people; imagine the political and cultural backlash to any such legal protections for non-human machines. Regulating human treatment of AI is, in our opinion, a non-starter.

A second option is to regulate AI such that it discourages antisocial behavior and encourages prosocial behavior. We suggest this second option is much more feasible. For example, abusive treatment of AI by the user could be met with a lack of response (the old, "just ignore the bully and he'll go away, because he will not get the reaction he's looking for"). The industries backing digital voice assistants have already begun to integrate this approach into responses to bullying speech. In 2010, if a user told Siri, "You're a slut," it was programmed to respond with, "I'd blush if I could." Due to stakeholder feedback, the response has now been changed to a more socially healthy, "I will not respond to that" (UNESCO & EQUALS Skills Coalition et al., 2019; UNESCO, 2020). Currently, the largest industries backing AI, such as OpenAI with ChatGPT, are altering and restricting the types of inputs their social actor AI will respond to. This trend toward industry self-regulation of AI is encouraging. However, we are currently entirely dependent on the good intentions of industry leaders to control whether social actor AI encourages prosocial or antisocial behavior in users. Governing bodies have begun to make regulation attempts, but their proposals have received criticism: such documents try a "one-size-fits-all approach" that may result in further inequality. For example, the EU drafted an Artificial Intelligence Act (AIA) that proposes a ban on AI that causes psychological harm, but the potential pitfalls of this legislation appear to outweigh its impact on psychological well-being (Pałka, 2023).

Social actor AI is increasingly infiltrating every part of society, interacting with an increasing percentage of humanity, and therefore, even if it only subtly shapes the psychological state and interpersonal behavior of each user, it could cause a massive shift of normative social behavior across the world. If there is to be government regulation of AI to reduce its risk and increase its benefit to humanity, we suggest that regulations aimed at its prosociality would make the biggest difference. One could imagine a Food and Drug Administration (FDA) style agency, informed by psychological experts, that studies how to build AI such that it reinforces prosociality in users. Assays could be developed to test AI on sample groups to measure its short- and long-term psychological impacts on users, data that are unfortunately largely missing at the present time. Perhaps, akin to FDA regulations on new drugs, new AI that is slated to be released to a wider public should be put through a battery of tests to show that, at […] required to show extensive safety data before releasing a product. AI companies currently are not. It is in this space that government regulation of AI makes sense to us.

Others have made claims in the name of ethics about regulating characteristics of AI; however, these suggestions seem outdated. According to Bryson (2010), robots should be "slaves"—this does not mean that we should make robots slaves, but rather, we should keep them at a simpler developmental level by not giving them characteristics that might enable people to view them as anything other than owned and created by humans for humans. Bryson claims that it would be immoral to create a robot that can feel emotions like pain. Metzinger (2021) called for a ban on development of AI that could be considered sentient. AI advancement, however, continues in this direction. Calls to stop this technological progress have not been effective. Relatively early in the development of social actor AI, computer science researchers created benchmarks for human likeness to enable people to create more humanlike AI (Kahn et al., 2007). That human likeness has increased since. Our proposal has less to do with regulating how advanced or how humanlike AI becomes, and more to do with regulating how AI impacts the psychology of users by providing a model for prosocial behavior or by ignoring, confronting, or rectifying antisocial behavior.

Almost all discussion of regulating AI centers on its potential for harm. We will end this article by noting the enormous potential for benefit, especially in light of AI's guaranteed permanence in our present and future. Social AI is increasingly similar to humans in that it can engage in humanlike discourse, appear humanlike, and impact our social attitudes and interactions. Yet social AI differs from humans in at least one significant way: it does not experience social or emotional fatigue. The opportunity to practice prosocial behavior is endless. For example, a chatbot will not grow tired and upset if you need to constructively work through a conflict with it. Neither will a chatbot disappear in the middle of a conversation when you are experiencing sadness or hurt and are in need of a friend. Social actor AI can both provide support and model prosocial behavior by remaining polite and present. Chatbots like WoeBot help users work through difficult issues by asking questions in the style of cognitive behavioral therapy (Fitzpatrick et al., 2017). Much like the benefits of journaling (Pennebaker, 1997, 2004), this human-chatbot engagement guides the user to make meaning of their experiences. It is worth noting that people who feel isolated or have experienced social rejection or social frustration may be a significant source of political and social disruption in today's world. If a universally available companion bot could boost their sense of social well-being and allow them to improve their social interaction skills through practice, that tool could make a sizable contribution to society. If AI is regulated such that it encourages people to treat it in a positive, pro-social way, and if carry-over effects are real, then AI becomes a potential source of enormous social and psychological good in the world.

If we are to effectively tackle the ever-growing issue of what to do in response to the surge of AI in our world, we cannot continue to point out only the ways in which it is harmful. AI is here to stay, and therefore we should be pragmatic with our approach. By understanding the ways in which interactions with AI can be both positive and negative, we can start to mitigate the bad by replacing it
the very least, it does no psychological harm. Drug companies are with the good.
Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. RG is funded by the National Science Foundation Graduate Research Fellowship Program. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. KB0013612. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Abubshait, A., and Wiese, E. (2017). You look human, but act like a machine: agent appearance and behavior modulate different aspects of human-robot interaction. Front. Psychol. 8:1393. doi: 10.3389/fpsyg.2017.01393

Akst, D. (2023). Should robots with artificial intelligence have moral or legal rights? WSJ. Available at: https://www.wsj.com/articles/robots-ai-legal-rights-3c47ef40

Alexander, F., and French, T. M. (1946). Psychoanalytic Therapy: Principles and Application. New York: Ronald Press.

Anderson, C. A., and Bushman, B. J. (2002). Human aggression. Annu. Rev. Psychol. 53, 27–51. doi: 10.1146/annurev.psych.53.100901.135231

Appel, M., Izydorczyk, D., Weber, S., Mara, M., and Lischetzke, T. (2020). The uncanny of mind in a machine: humanoid robots as tools, agents, and experiencers. Comput. Hum. Behav. 102, 274–286. doi: 10.1016/j.chb.2019.07.031

Appel, J., Von Der Pütten, A., Krämer, N. C., and Gratch, J. (2012). Does humanity matter? Analyzing the importance of social cues and perceived agency of a computer system for the emergence of social reactions during human-computer interaction. Adv. Hum. Comput. Interact. 2012, 1–10. doi: 10.1155/2012/324694

Baars, B. J. (1997). In the Theater of Consciousness.

Bandura, A. (1965). Influence of models' reinforcement contingencies on the acquisition of imitative responses. J. Pers. Soc. Psychol. 1, 589–595. doi: 10.1037/h0022070

Bandura, A. (1977). Social Learning Theory. Englewood Cliffs, N.J.: Prentice Hall.

Banks, J. (2019). Theory of mind in social robots: replication of five established human tests. Int. J. Soc. Robot. 12, 403–414. doi: 10.1007/s12369-019-00588-x

Bastian, B., and Haslam, N. (2010). Excluded from humanity: the dehumanizing effects of social ostracism. J. Exp. Soc. Psychol. 46, 107–113. doi: 10.1016/j.jesp.2009.06.022

Beneteau, E., Boone, A., Wu, Y., Kientz, J. A., Yip, J., and Hiniker, A. (2020). "Parenting with Alexa: exploring the introduction of smart speakers on family dynamics" in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA. 1–13.

Beneteau, E., Richards, O. K., Zhang, M., Kientz, J. A., Yip, J., and Hiniker, A. (2019). "Breakdowns between families and Alexa" in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). Association for Computing Machinery, New York, NY, USA. 14.

Brandtzæg, P. B., Skjuve, M., and Følstad, A. (2022). My AI friend: how users of a social chatbot understand their human–AI friendship. Hum. Commun. Res. 48, 404–429. doi: 10.1093/hcr/hqac008

Broadbent, E., Kumar, V., Li, X., Sollers, J. J., Stafford, R., MacDonald, B. A., et al. (2013). Robots with display screens: a robot with a more humanlike face display is perceived to have more mind and a better personality. PLoS One 8:e72589. doi: 10.1371/journal.pone.0072589

Bryson, J. J. (2010). "Robots should be slaves" in Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues. Ed. Yorick Wilks. John Benjamins Publishing Company eBooks, 63–74.

Burleigh, T., Schoenherr, J. R., and Lacroix, G. (2013). Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces. Comput. Hum. Behav. 29, 759–771. doi: 10.1016/j.chb.2012.11.021

Bushman, B. J. (2002). Does venting anger feed or extinguish the flame? Catharsis, rumination, distraction, anger, and aggressive responding. Personal. Soc. Psychol. Bull. 28, 724–731. doi: 10.1177/0146167202289002

Bernstein, D., and Crowley, K. (2008). Searching for signs of intelligent life: an investigation of young children's beliefs about robot intelligence. Journal of the Learning Sciences 17, 225–247. doi: 10.1080/10508400801986116

Chalmers, D. J. (1996). Facing Up to the Problem of Consciousness. The MIT Press eBooks.

Chalmers, D. J. (2023). Could a large language model be conscious? arXiv [Preprint]. doi: 10.48550/arxiv.2303.07103

Cheetham, M., Suter, P., and Jäncke, L. (2014). Perceptual discrimination difficulty and familiarity in the Uncanny Valley: more like a "Happy Valley". Front. Psychol. 5:1219. doi: 10.3389/fpsyg.2014.01219

Ciardo, F., De Tommaso, D., and Wykowska, A. (2021). "Effects of erring behavior in a human-robot joint musical task on adopting intentional stance toward the iCub robot" in 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN). Vancouver, BC, Canada, 698–703.

Ciardo, F., De Tommaso, D., and Wykowska, A. (2022). Joint action with artificial agents: human-likeness in behaviour and morphology affects sensorimotor signaling and social inclusion. Comput. Hum. Behav. 132:107237. doi: 10.1016/j.chb.2022.107237

Croes, E., and Antheunis, M. L. (2020). Can we be friends with Mitsuku? A longitudinal study on the process of relationship formation between humans and a social chatbot. J. Soc. Pers. Relat. 38, 279–300. doi: 10.1177/0265407520959463

Denzler, M., and Förster, J. (2012). A goal model of catharsis. Eur. Rev. Soc. Psychol. 23, 107–142. doi: 10.1080/10463283.2012.699358

Doerig, A., Schurger, A., and Herzog, M. H. (2020). Hard criteria for empirical theories of consciousness. Cogn. Neurosci. 12, 41–62. doi: 10.1080/17588928.2020.1772214

Dollard, J., and Miller, N. E. (1950). Personality and Psychotherapy. New York: McGraw-Hill.

Dubosc, C., Gorisse, G., Christmann, O., Fleury, S., Poinsot, K., and Richir, S. (2021). Impact of avatar facial anthropomorphism on body ownership, attractiveness and social presence in collaborative tasks in immersive virtual environments. Comput. Graph. 101, 82–92. doi: 10.1016/j.cag.2021.08.011

Duffy, B. (2008). Fundamental issues in affective intelligent social machines. Open Artif. Intellig. J. 2, 21–34. doi: 10.2174/1874061800802010021

Edwards, A., Edwards, C., Westerman, D., and Spence, P. R. (2019). Initial expectations, interactions, and beyond with social robots. Comput. Hum. Behav. 90, 308–314. doi: 10.1016/j.chb.2018.08.042

Etzrodt, K., and Engesser, S. (2021). Voice-based agents as personified things: assimilation and accommodation as equilibration of doubt. Hum. Machine Commun. J. 2, 57–79. doi: 10.30658/hmc.2.3
Etzrodt, K., Gentzel, P., Utz, S., and Engesser, S. (2022). Human-machine-communication: introduction to the special issue. Publizistik 67, 439–448. doi: 10.1007/s11616-022-00754-8

Eyssel, F. A., and Pfundmair, M. (2015). "Predictors of psychological anthropomorphization, mind perception, and the fulfillment of social needs: a case study with a zoomorphic robot" in Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication.

Ferrari, F., Paladino, M. P., and Jetten, J. (2016). Blurring human–machine distinctions: anthropomorphic appearance in social robots as a threat to human distinctiveness. Int. J. Soc. Robot. 8, 287–302. doi: 10.1007/s12369-016-0338-y

Festerling, J., Siraj, I., and Malmberg, L. E. (2022). Exploring children's exposure to voice assistants and their ontological conceptualizations of life and technology. AI & Soc. doi: 10.1007/s00146-022-01555-3

Fitzpatrick, K. K., Darcy, A., and Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Mental Health 4:e7785. doi: 10.2196/mental.7785

Fox, J., Ahn, S. J., Janssen, J., Yeykelis, L., Segovia, K. Y., and Bailenson, J. N. (2014). Avatars versus agents: a meta-analysis quantifying the effect of agency on social influence. Hum. Comput. Interact. 30, 401–432. doi: 10.1080/07370024.2014.921494

Frith, C. D. (2002). Attention to action and awareness of other minds. Conscious. Cogn. 11, 481–487. doi: 10.1016/s1053-8100(02)00022-3

Fu, Y., Michelson, R., Lin, Y., Nguyen, L. K., Tayebi, T. J., and Hiniker, A. (2022). Social emotional learning with conversational agents. Proc. ACM Interact. Mobile Wearable Ubiquit. Technol. 6, 1–23. doi: 10.1145/3534622

Garg, R., and Sengupta, S. (2020). He is just like me. Proc. ACM Interact. Mobile Wearable Ubiquit. Technol. 4, 1–24. doi: 10.1145/3381002

Gong, L., and Nass, C. (2007). When a talking-face computer agent is half-human and half-humanoid: human identity and consistency preference. Hum. Commun. Res. 33, 163–193. doi: 10.1111/j.1468-2958.2007.00295.x

Gray, H. M., Gray, K., and Wegner, D. M. (2007). Dimensions of mind perception. Science 315:619. doi: 10.1126/science.1134475

Gray, K., and Wegner, D. M. (2012). Feeling robots and human zombies: mind perception and the uncanny valley. Cognition 125, 125–130. doi: 10.1016/j.cognition.2012.06.007

Graziano, M. S. A. (2013). Consciousness and the Social Brain. New York, NY: Oxford University Press.

Guingrich, R., and Graziano, M. S. A. (2023). Chatbots as social companions: how people perceive consciousness, human likeness, and social health benefits in machines (arXiv:2311.10599). arXiv [Preprint]. doi: 10.48550/arXiv.2311.10599

Han, M. C. (2021). The impact of anthropomorphism on consumers' purchase decision in chatbot commerce. J. Internet Commer. 20, 46–65. doi: 10.1080/15332861.2020.1863022

Harley, T. A. (2021). The Science of Consciousness. Cambridge, UK: Cambridge University Press.

Harris, P. L. (1992). From simulation to folk psychology: the case for development. Mind Lang. 7, 120–144. doi: 10.1111/j.1468-0017.1992.tb00201.x

Haslam, N., and Loughnan, S. (2014). Dehumanization and infrahumanization. Annu. Rev. Psychol. 65, 399–423. doi: 10.1146/annurev-psych-010213-115045

Hayashi, Y., and Miwa, K. (2009). "Cognitive and emotional characteristics of communication in human-human/human-agent interaction" in Proceedings of the 13th International Conference on Human-Computer Interaction. Part III: Ubiquitous and Intelligent Interaction. Springer Science & Business Media, 267–274.

Heyselaar, E., and Bosse, T. (2020). "Using theory of mind to assess users' sense of agency in social chatbots" in Chatbot Research and Design. Eds. A. Følstad, T. Araujo, S. Papadopoulos, E. L.-C. Law, O.-C. Granmo, E. Luger, and P. B. Brandtzaeg. Vol. 11970 (Springer International Publishing), 158–169.

Hiniker, A., Wang, A., Tran, J., Zhang, M. R., Radesky, J., Sobel, K., et al. (2021). "Can conversational agents change the way children talk to people?" in IDC '21: Proceedings of the 20th Annual ACM Interaction Design and Children Conference, 338–349.

Ho, A. S., Hancock, J., and Miner, A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. J. Commun. 68, 712–733. doi: 10.1093/joc/jqy026

Hoyt, C. L., Blascovich, J., and Swinth, K. R. (2003). Social inhibition in immersive virtual environments. Presence Teleoperat. Virtual Environ. 12, 183–195. doi: 10.1162/105474603321640932

Jacobs, O., Gazzaz, K., and Kingstone, A. (2021). Mind the robot! Variation in attributions of mind to a wide set of real and fictional robots. Int. J. Soc. Robot. 14, 529–537. doi: 10.1007/s12369-021-00807-4

Kahn, P. H., Ishiguro, H., Friedman, B., Kanda, T., Freier, N. G., Severson, R. L., et al. (2007). What is a human? Interact. Stud. 8, 363–390. doi: 10.1075/is.8.3.04kah

Kahn, P. H. Jr., Reichert, A. L., Gary, H. E., Kanda, T., Ishiguro, H., Shen, S., et al. (2011). "The new ontological category hypothesis in human-robot interaction" in HRI '11. Association for Computing Machinery, New York, NY, USA. 159–160.

Kahn, P. H., Kanda, T., Ishiguro, H., Freier, N. G., Severson, R. L., Gill, B. T., et al. (2012). "Robovie, you'll have to go into the closet now": children's social and moral relationships with a humanoid robot. Dev. Psychol. 48, 303–314. doi: 10.1037/a0027033

Kätsyri, J., Förger, K., Mäkäräinen, M., and Takala, T. (2015). A review of empirical evidence on different uncanny valley hypotheses: support for perceptual mismatch as one road to the valley of eeriness. Front. Psychol. 6:390. doi: 10.3389/fpsyg.2015.00390

Kawabe, T., Sasaki, K., Ihaya, K., and Yamada, Y. (2017). When categorization-based stranger avoidance explains the uncanny valley: a comment on MacDorman and Chattopadhyay (2016). Cognition 161, 129–131. doi: 10.1016/j.cognition.2016.09.001

Kim, D., Frank, M. G., and Kim, S. T. (2014). Emotional display behavior in different forms of computer mediated communication. Comput. Hum. Behav. 30, 222–229. doi: 10.1016/j.chb.2013.09.001

Kim, W., and Ryoo, Y. (2022). Hypocrisy induction: using chatbots to promote COVID-19 social distancing. Cyberpsychol. Behav. Soc. Netw. 25, 27–36. doi: 10.1089/cyber.2021.0057

Kim, Y., and Sundar, S. S. (2012). Anthropomorphism of computers: is it mindful or mindless? Comput. Hum. Behav. 28, 241–250. doi: 10.1016/j.chb.2011.09.006

Knobe, J., and Prinz, J. (2007). Intuitions about consciousness: experimental studies. Phenomenol. Cogn. Sci. 7, 67–83. doi: 10.1007/s11097-007-9066-y

Koch, C. (2019). The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed. Available at: https://openlibrary.org/books/OL29832851M/Feeling_of_Life_Itself

Konečni, V. (2016). The anger-aggression bidirectional-causation (AABC) model's relevance for dyadic violence, revenge and catharsis. Soc. Behav. Res. Pract. Open J. 1, 1–9. doi: 10.17140/SBRPOJ-1-101

Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., and Kircher, T. (2008). Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS One 3:e2597. doi: 10.1371/journal.pone.0002597

Krämer, N. C., Bente, G., Eschenburg, F., and Troitzsch, H. (2009). Embodied conversational agents: research prospects for social psychology and an exemplary study. Soc. Psychol. 40, 26–36. doi: 10.1027/1864-9335.40.1.26

Krämer, N., Bente, G., and Piesk, J. (2003a). The ghost in the machine. The influence of Embodied Conversational Agents on user expectations and user behavior in a TV/VCR application. ResearchGate. Available at: https://www.researchgate.net/publication/242273054_The_ghost_in_the_machine_The_influence_of_Embodied_Conversational_Agents_on_user_expectations_and_user_behaviour_in_a_TVVCR_application1

Krämer, N. C., Rosenthal-von der Pütten, A. M., and Hoffmann, L. (2015). "Social effects of virtual and robot companions" in The Handbook of the Psychology of Communication Technology, Ch. 6 (John Wiley & Sons, Ltd.), 137–159.

Krämer, N. C., Tietz, B., and Bente, G. (2003b). "Effects of embodied interface agents and their gestural activity" in 4th International Working Conference on Intelligent Virtual Agents. Hamburg: Springer. 292–300.

Kupferberg, A., Glasauer, S., Huber, M., Rickert, M., Knoll, A., and Brandt, T. (2011). Biological movement increases acceptance of humanoid robots as human partners in motor interaction. AI & Soc. 26, 339–345. doi: 10.1007/s00146-010-0314-2

Küster, D., and Świderska, A. (2020). Seeing the mind of robots: harm augments mind perception but benevolent intentions reduce dehumanisation of artificial entities in visual vignettes. Int. J. Psychol. 56, 454–465. doi: 10.1002/ijop.12715

Küster, D., Świderska, A., and Gunkel, D. J. (2020). I saw it on YouTube! How online videos shape perceptions of mind, morality, and fears about robots. New Media Soc. 23, 3312–3331. doi: 10.1177/1461444820954199

Lee, E. (2010). The more humanlike, the better? How speech type and users' cognitive style affect social responses to computers. Comput. Hum. Behav. 26, 665–672. doi: 10.1016/j.chb.2010.01.003

Lee, K. M., Jung, Y., Kim, J., and Kim, S. R. (2006). Are physically embodied social agents better than disembodied social agents?: the effects of physical embodiment, tactile interaction, and people's loneliness in human–robot interaction. Int. J. Hum. Comput. Stud. 64, 962–973. doi: 10.1016/j.ijhcs.2006.05.002

Lee, S. A., and Liang, Y. (2016). The role of reciprocity in verbally persuasive robots. Cyberpsychol. Behav. Soc. Netw. 19, 524–527. doi: 10.1089/cyber.2016.0124

Lee, S. A., and Liang, Y. (2019). Robotic foot-in-the-door: using sequential-request persuasive strategies in human-robot interaction. Comput. Hum. Behav. 90, 351–356. doi: 10.1016/j.chb.2018.08.026

Lee, S., Ratan, R., and Park, T. (2019). The voice makes the car: enhancing autonomous vehicle perceptions and adoption intention through voice agent gender and style. Multimod. Technol. Interact. 3:20. doi: 10.3390/mti3010020

Lew, Z., and Walther, J. B. (2022). Social scripts and expectancy violations: evaluating communication with human or AI chatbot interactants. Media Psychol. 26, 1–16. doi: 10.1080/15213269.2022.2084111

Loh, J., and Loh, W. (2023). Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds With Robots. Bielefeld, Germany: transcript Verlag.

Lucas, G. M., Gratch, J., King, A., and Morency, L. (2014). It's only a computer: virtual humans increase willingness to disclose. Comput. Hum. Behav. 37, 94–100. doi: 10.1016/j.chb.2014.04.043
Luria, M., Sheriff, O., Boo, M., Forlizzi, J., and Zoran, A. (2020). Destruction, catharsis, and emotional release in human-robot interaction. ACM Trans. Hum. Robot Interaction 9, 1–19. doi: 10.1145/3385007

MacDorman, K. F., and Chattopadhyay, D. (2016). Reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not. Cognition 146, 190–205. doi: 10.1016/j.cognition.2015.09.019

MacDorman, K. F., and Entezari, S. O. (2015). Individual differences predict sensitivity to the uncanny valley. Interact. Stud. 16, 141–172. doi: 10.1075/is.16.2.01mac

Marchesi, S., De Tommaso, D., Pérez-Osorio, J., and Wykowska, A. (2022). Belief in sharing the same phenomenological experience increases the likelihood of adopting the intentional stance toward a humanoid robot. Technol. Mind Behav. 3:11. doi: 10.1037/tmb0000072

Martini, M. C., Gonzalez, C., and Wiese, E. (2016). Seeing minds in others—can agents with robotic appearance have human-like preferences? PLoS One 11:e0146310. doi: 10.1371/journal.pone.0146310

McVee, M. B., Dunsmore, K., and Gavelek, J. R. (2005). Schema theory revisited. Rev. Educ. Res. 75, 531–566. doi: 10.3102/00346543075004531

Merritt, T. R. (2012). A failure of imagination: how and why people respond differently to human and computer team-mates. ResearchGate. Available at: https://www.researchgate.net/publication/292539389_A_failure_of_imagination_How_and_why_people_respond_differently_to_human_and_computer_team-mates

Metzinger, T. (2021). Artificial suffering: an argument for a global moratorium on synthetic phenomenology. J. Artific. Intellig. Consciousness 8, 43–66. doi: 10.1142/s270507852150003x

Mikropoulos, T. A., Misailidi, P., and Bonoti, F. (2003). Attributing human properties to computer artifacts: developmental changes in children's understanding of the animate-inanimate distinction. Psychology 10, 53–64. doi: 10.12681/psy_hps.23951

Mori, M. (1970). Bukimi no tani [the uncanny valley]. Energy 7, 33–35.

Nagel, T. (1974). What is it like to be a bat? Philos. Rev. 83:435. doi: 10.2307/2183914

Nass, C., and Brave, S. (2005). Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship. Boston Review: Boston, Massachusetts.

Nass, C., and Moon, Y. (2000). Machines and mindlessness: social responses to computers. J. Soc. Issues 56, 81–103. doi: 10.1111/0022-4537.00153

Nass, C., Steuer, J., and Tauber, E. R. (1994). "Computers are social actors" in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 72–78.

Nigam, M. K., and Klahr, D. (2000). "If robots make choices, are they alive?: children's judgments of the animacy of intelligent artifacts" in Proceedings of the Annual Meeting of the Cognitive Science Society, 22. Available at: https://escholarship.org/uc/item/6bw2h51d

O'Regan, J. K. (2012). How to build a robot that is conscious and feels. Mind. Mach. 22, 117–136. doi: 10.1007/s11023-012-9279-x

Opfer, J. E. (2002). Identifying living and sentient kinds from dynamic information: the case of goal-directed versus aimless autonomous movement in conceptual change. Cognition 86, 97–122. doi: 10.1016/s0010-0277(02)00171-3

Opfer, J. E., and Siegler, R. S. (2004). Revisiting preschoolers' living things concept: a microgenetic analysis of conceptual change in basic biology. Cogn. Psychol. 49, 301–332. doi: 10.1016/j.cogpsych.2004.01.002

Ortony, A., and Anderson, R. C. (1977). Definite descriptions and semantic memory. Cogn. Sci. 1, 74–83. doi: 10.1016/s0364-0213(77)80005-0

Pałka, P. (2023). AI, consumers & psychological harm (SSRN scholarly paper 4564997). Available at: https://papers.ssrn.com/abstract=4564997

Pankin, J. (2013). Schema theory and concept formation. Presentation at MIT, Fall. Available at: https://web.mit.edu/pankin/www/Schema_Theory_and_Concept_Formation.pdf

Pennebaker, J. W. (1997). Writing about emotional experiences as a therapeutic process. Psychol. Sci. 8, 162–166. doi: 10.1111/j.1467-9280.1997.tb00403.x

Pennebaker, J. W. (2004). Writing to Heal: A Guided Journal for Recovering from Trauma and Emotional Upheaval. Oakland, CA: New Harbinger Publications.

Pentina, I., Hancock, T., and Xie, T. (2023). Exploring relationship development with social chatbots: a mixed-method study of Replika. Comput. Hum. Behav. 140:107600. doi: 10.1016/j.chb.2022.107600

Poushneh, A. (2021). Humanizing voice assistant: the impact of voice assistant personality on consumers' attitudes and behaviors. J. Retail. Consum. Serv. 58:102283. doi: 10.1016/j.jretconser.2020.102283

Powers, A., and Kiesler, S. (2006). "The advisor robot: tracing people's mental model from a robot's physical attributes" in Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, USA. 218–225.

Premack, D., and Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behav. Brain Sci. 1, 515–526. doi: 10.1017/s0140525x00076512

Prinz, W. (2017). Modeling self on others: an import theory of subjectivity and selfhood. Conscious. Cogn. 49, 347–362. doi: 10.1016/j.concog.2017.01.020

Pütten, A. M. R. D., and Krämer, N. C. (2014). How design characteristics of robots determine evaluation and uncanny valley related responses. Comput. Hum. Behav. 36, 422–439. doi: 10.1016/j.chb.2014.03.066

Quadflieg, S., Ul-Haq, I., and Mavridis, N. (2016). Now you feel it, now you don't. Interact. Stud. 17, 211–247. doi: 10.1075/is.17.2.03qua

Rabinowitz, N. C., Perbet, F., Song, H. F., Zhang, C., Eslami, S. M. A., and Botvinick, M. (2018). Machine theory of mind. arXiv [Preprint]. doi: 10.48550/ARXIV.1802.07740

Rhim, J., Kwak, M., Gong, Y., and Gweon, G. (2022). Application of humanization to survey chatbots: change in chatbot perception, interaction experience, and survey data quality. Comput. Hum. Behav. 126:107034. doi: 10.1016/j.chb.2021.107034

Rickenberg, R., and Reeves, B. (2000). The effects of animated characters on anxiety, task performance, and evaluations of user interfaces. Lett. CHI 2000, 49–56. doi: 10.1145/332040.332406

Roselli, C., Navare, U. P., Ciardo, F., and Wykowska, A. (2023). Type of education affects individuals' adoption of intentional stance towards robots: an EEG study. Int. J. Soc. Robot. 16, 185–196. doi: 10.1007/s12369-023-01073-2

Röska-Hardy, L. (2008). "Theory (Simulation Theory, Theory of Mind)" in Encyclopedia of Neuroscience. Eds. M. Binder, N. Hirokawa, U. Windhorst and H. Hirsch. Berlin/Heidelberg, Germany: Springer eBooks, 4064–4067.

Ryland, H. (2021a). It's friendship, Jim, but not as we know it: a degrees-of-friendship view of human–robot friendships. Mind. Mach. 31, 377–393. doi: 10.1007/s11023-021-09560-z

Ryland, H. (2021b). Could you hate a robot? And does it matter if you could? AI & Soc. 36, 637–649. doi: 10.1007/s00146-021-01173-5

Scaife, M., and Van Duuren, M. V. (1995). Do computers have brains? What children believe about intelligent artifacts. Br. J. Dev. Psychol. 13, 367–377. doi: 10.1111/j.2044-835x.1995.tb00686.x

Seeger, A., and Heinzl, A. (2018). "Human versus machine: contingency factors of anthropomorphism as a trust-inducing design strategy for conversational agents" in Lecture Notes in Information Systems and Organisation. Eds. F. D. Davis, R. Riedl, J. vom Brocke, P.-M. Léger, and A. B. Randolph. Springer International Publishing. 129–139.

Severson, R. L., and Carlson, S. M. (2010). Behaving as or behaving as if? Children's conceptions of personified robots and the emergence of a new ontological category. Neural Netw. 23, 1099–1103. doi: 10.1016/j.neunet.2010.08.014

Severson, R. L., and Woodard, S. R. (2018). Imagining others' minds: the positive relation between children's role play and anthropomorphism. Front. Psychol. 9:2140. doi: 10.3389/fpsyg.2018.02140

Shank, D. B., North, M., Arnold, C., and Gamez, P. (2021). Can mind perception explain virtuous character judgments of artificial intelligence? Technol. Mind Behav. 2. doi: 10.1037/tmb0000047

Spence, P. R., Westerman, D., Edwards, C., and Edwards, A. (2014). Welcoming our robot overlords: initial expectations about interaction with a robot. Commun. Res. Rep. 31, 272–280. doi: 10.1080/08824096.2014.924337

Sproull, L., Subramani, M. R., Kiesler, S., Walker, J., and Waters, K. (1996). When the interface is a face. Hum. Comput. Interact. 11, 97–124. doi: 10.1207/s15327051hci1102_1

Srinivasan, V., and Takayama, L. (2016). "Help me please: robot politeness strategies for soliciting help from humans" in CHI '16. Association for Computing Machinery, New York, NY, USA. 4945–4955.

Stein, J., Appel, M., Jost, A., and Ohler, P. (2020). Matter over mind? How the acceptance of digital entities depends on their appearance, mental prowess, and the interaction between both. Int. J. Hum. Comput. Stud. 142:102463. doi: 10.1016/j.ijhcs.2020.102463

Stein, J., and Ohler, P. (2017). Venturing into the uncanny valley of mind—the influence of mind attribution on the acceptance of human-like characters in a virtual reality setting. Cognition 160, 43–50. doi: 10.1016/j.cognition.2016.12.010

Sundar, S. S., and Nass, C. (2000). Source orientation in human-computer interaction. Commun. Res. 27, 683–703. doi: 10.1177/009365000027006001

Świderska, A., and Küster, D. (2018). Avatars in pain: visible harm enhances mind perception in humans and robots. Perception 47, 1139–1152. doi: 10.1177/0301006618809919

Ta, V. P., Griffith, C., Boatfield, C., Wang, X., Civitello, M., Bader, H., et al. (2020). User experiences of social support from companion chatbots in everyday contexts: thematic analysis. J. Med. Internet Res. 22:e16235. doi: 10.2196/16235

Tanibe, T., Hashimoto, T., and Karasawa, K. (2017). We perceive a mind in a robot when we help it. PLoS One 12:e0180952. doi: 10.1371/journal.pone.0180952

Taylor, J., Weiss, S. M., and Marshall, P. (2020). Alexa, how are you feeling today? Interact. Stud. 21, 329–352. doi: 10.1075/is.19015.tay

Teubner, T., Adam, M. T. P., and Riordan, R. (2015). The impact of computerized agents on immediate emotions, overall arousal and bidding behavior in electronic auctions. J. Assoc. Inf. Syst. 16, 838–879. doi: 10.17705/1jais.00412

Tharp, M., Holtzman, N. S., and Eadeh, F. R. (2016). Mind perception and individual differences: a replication and extension. Basic Appl. Soc. Psychol. 39, 68–73. doi: 10.1080/01973533.2016.1256287
Tononi, G. (2007). "The information integration theory of consciousness," in The Blackwell Companion to Consciousness. Eds. M. Velmans and S. Schneider (Oxford: Blackwell), 287–299.

UNESCO (2020). Artificial intelligence and gender equality: key findings of UNESCO's Global Dialogue. UNESCO Digital Library. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000374174 (Accessed October 13, 2023).

UNESCO & EQUALS Skills Coalition, Mark, W., Rebecca, K., and Chew, H. E. (2019). I'd blush if I could: closing gender divides in digital skills through education. UNESCO Digital Library.

Velez, J. A., Loof, T., Smith, C. A., Jordan, J. M., Villarreal, J. A., and Ewoldsen, D. R. (2019). Switching schemas: do effects of mindless interactions with agents carry over to humans and vice versa? J. Comput.-Mediat. Commun. 24, 335–352. doi: 10.1093/jcmc/zmz016

Vogeley, K., and Bente, G. (2010). "Artificial humans": psychology and neuroscience perspectives on embodiment and nonverbal communication. Neural Netw. 23, 1077–1090. doi: 10.1016/j.neunet.2010.06.003

Von Der Pütten, A. M., Reipen, C., Wiedmann, A., Kopp, S., and Krämer, N. C. (2009). "The impact of different embodied agent-feedback on users' behavior" in Lecture Notes in Computer Science. Eds. Z. Ruttkay, M. Kipp, A. Nijholt, and H. H. Vilhjálmsson, 549–551.

Wang, Q., Saha, K., Gregori, E., Joyner, D., and Goel, A. (2021). "Towards mutual theory of mind in human-AI interaction: how language reflects what students perceive about a virtual teaching assistant" in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, 384, 1–14.

Ward, A. F., Olsen, A. S., and Wegner, D. M. (2013). The harm-made mind: observing victimization augments attribution of minds to vegetative patients, robots, and the dead. Psychol. Sci. 24, 1437–1445. doi: 10.1177/0956797612472343

Waytz, A., Cacioppo, J., and Epley, N. (2010). Who sees human?: the stability and importance of individual differences in anthropomorphism. Perspect. Psychol. Sci. 5, 219–232. doi: 10.1177/1745691610369336

Wimmer, H., and Perner, J. (1983). Beliefs about beliefs: representation and constraining function of wrong beliefs in young children's understanding of deception. Cognition 13, 103–128.

Wilkenfeld, J. N., Yan, B., Huang, J., Luo, G., and Algas, K. (2022). "AI love you": linguistic convergence in human-chatbot relationship development. Academy of Management Proceedings, 17063. doi: 10.5465/AMBPP.2022.17063abstract

Worchel, P. (1957). Catharsis and the relief of hostility. J. Abnorm. Soc. Psychol. 55, 238–243. doi: 10.1037/h0042557

Xie, T., and Pentina, I. (2022). "Attachment theory as a framework to understand relationships with social chatbots: a case study of Replika" in Proceedings of the 55th Annual Hawaii International Conference on System Sciences.

Yampolskiy, R. V. (2018). Artificial consciousness: an illusionary solution to the hard problem. Reti Saperi Linguag. 2, 287–318. doi: 10.12832/92302

Young, A. D., and Monroe, A. E. (2019). Autonomous morals: inferences of mind predict acceptance of AI behavior in sacrificial moral dilemmas. J. Exp. Soc. Psychol. 85:103870. doi: 10.1016/j.jesp.2019.103870

Zhan, J., Yu, S., Cai, R., Xu, H., Yang, Y., Ren, J., et al. (2021). The effects of written catharsis on anger relief. PsyCh J. 10, 868–877. doi: 10.1002/pchj.490

Zhou, Y., Fei, Z., He, Y., and Yang, Z. (2022). How human–chatbot interaction impairs charitable giving: the role of moral judgment. Journal of Business Ethics 178, 849–865. doi: 10.1007/s10551-022-05045-w