The Relationship Between AI and Humans
If you ask something of ChatGPT, an artificial-intelligence (AI) tool that is all the rage,
the responses you get back are almost instantaneous, utterly certain and often wrong.
It is a bit like talking to an economist. The questions raised by technologies like
ChatGPT yield much more tentative answers. But they are ones that managers ought to
start asking.
One issue is how to deal with employees’ concerns about job security. Worries are
natural. An AI that makes it easier to process your expenses is one thing; an AI that
people would prefer to sit next to at a dinner party quite another. Being clear about
how workers would redirect time and energy that is freed up by an AI helps foster
acceptance. So does creating a sense of agency: research conducted by MIT Sloan
Management Review and the Boston Consulting Group found that an ability to
override an AI makes employees more likely to use it.
Whether people really need to understand what is going on inside an AI is less clear.
Intuitively, being able to follow an algorithm’s reasoning should trump being unable
to. But a piece of research by academics at Harvard University, the Massachusetts
Institute of Technology and the Polytechnic University of Milan suggests that too
much explanation can be a problem.
The different ways in which people respond to humans and to algorithms are a burgeoning
area of research. In a recent paper Gizem Yalcin of the University of Texas at Austin
and her co-authors looked at whether consumers responded differently to decisions—
to approve someone for a loan, for example, or a country-club membership—when
they were made by a machine or a person. They found that people reacted the same
when they were being rejected. But they felt less positively about an organisation
when they were approved by an algorithm rather than a human. The reason? People
are good at explaining away unfavourable decisions, whoever makes them. It is harder
for them to attribute a successful application to their own charming, delightful selves
when assessed by a machine. People want to feel special, not reduced to a data point.
The picture that emerges from such research is messy. It is also dynamic: just as
technologies evolve, so will attitudes. But it is crystal-clear on one thing. The impact
of ChatGPT and other AIs will depend not just on what they can do, but also on how
they make people feel.