
AI UTOPIA OR DYSTOPIA BY VINOD KHOSLA

Summary form
I've seen technology reshape our world repeatedly. Previous
technology platforms amplified human capabilities but didn't
fundamentally alter the essence of human intellect. They
extended our reach but didn't multiply our minds.

Artificial intelligence is different. It's past the point where a difference in degree becomes a difference in kind. AI amplifies
and multiplies the human brain, much like steam engines once
amplified muscle power. Before engines, we consumed food for
energy and that energy we put to work. Engines allowed us to tap
into external energy sources like coal and oil, revolutionizing
productivity and transforming society. AI stands poised to be the
intellectual parallel, offering a near-infinite expansion of
brainpower to serve humanity.

AI promises a future of unparalleled abundance. However, as we transition to a post-scarcity society, the journey may be complex,
and the short term may be painful for those displaced. Mitigating
these challenges requires well-reasoned policy. The next 0–10
years, 10–25 years, and 25–50 years will each be radically
different. The pace of change will be hard to predict or anticipate,
especially as technology capabilities far exceed human
intelligence and penetrate society at varying rates.
Pessimists paint a dystopian future in two parts—economic and
social. They fear widespread job loss, economic inequality, social
manipulation, erosion of human agency, loss of creativity, and
even existential threats from AI. I believe these fears are largely
unfounded, myopic, and harmful. They are addressable through
societal choices. Moreover, the real risk isn't “sentient AI” but
losing the AI race to nefarious "nation states" or other bad actors, making AI dangerous for the West. Ironically, those who fear AI
and its capacity to erode democracy and manipulate societies
should be most fearful of this risk!

In an economic dystopia, wealth concentrates at the top while intellectual and physical work are devalued. Widespread job loss and deflation destroy the economy and purchasing power, exacerbating inequalities. AI could create a world where a small elite thrives while the rest face instability.

But with smart interventions—like income redistribution or universal basic income (UBI), and strategic legislation—we can
prevent this. Capitalism operates by the permission of democracy,
and we have the collective power to shape economic outcomes if
we handle this transition wisely.

Factor in an aging global population and a shrinking pool of young workers, and AI becomes essential. With the right policies, we
could smooth the transition and even usher in a three-day
workweek. If GDP growth jumps from 2% to 5% or more, we'll
have the abundance to create "transition funds," much like the oil
funds that have fueled prosperity in countries like Norway.

Naysayers envision AI undermining humanity through pervasive surveillance and manipulation. They fear AI being used to control
information, influence elections, and erode democracy via
targeted propaganda or deepfakes, making truth difficult to
discern.

But these outcomes aren't inevitable. Legislation will shape how AI integrates into our lives. In democratic societies, these are
collective choices. With AI’s abundance, the reasons for crime
might even diminish. A balance can be achieved where we benefit
from AI's advancements without succumbing to dystopian visions.

Fears of manipulation rely on the assumption of a single, despotic AI overlord, which is far-fetched. More likely, we'll see diverse AIs
serving different interests, preventing the consolidation of power.

Concerns about AI making critical decisions in healthcare, justice, and governance are valid, given hidden biases in current systems. But these biases originate from humans, and AI offers a chance to recognize and correct them. For example, human physicians perform more surgeries if they're paid per surgery—hardly unbiased. AI can surface and correct such
biases, providing more equitable outcomes.
Humans will retain the power to revoke AI's decision-making
privileges, ensuring AI remains guided by human consensus. The
specter of a sentient, malevolent AI is a risk, but one we can
mitigate through vigilance and proper safeguards.

Critics fear over-reliance on AI could diminish human creativity and critical thinking, as people depend on machines for decisions.
They worry about cultural homogenization due to AI algorithms
creating echo chambers.

But I see AI expanding our creativity. Someone like me—endowed with zero musical talent—can create a personalized song. AI enables new forms of expression,
expanding our abilities rather than replacing them.

Doomers warn that AI could become uncontrollable and render humans extinct. While we must invest heavily in AI safety
research, it's important to balance this concern against AI's
immense benefits.

The larger and more immediate risk is losing the AI race to nations like China, making AI dangerous for the West. China's
five-year plan explicitly aims to win in AI. If authoritarian regimes
develop advanced AI before democratic societies, they could
manipulate societies, erode democracy, and consolidate power.
Ironically, those who fear AI eroding democracy should be most
concerned about this risk. We must step up and use AI for
humanity's benefit, ensuring democratic values prevail.

Further, it is likely that we'll have multiple AIs, making it unlikely that all would turn against humanity simultaneously, even in a
worst-case scenario. Most likely, the growing emphasis on AI
explainability will enhance safety by aligning AI's goals with
human values. Within the next decade, I believe we'll move
beyond the scare-mongering around "black box systems" with no
controllability.

However, solving this problem requires a laser focus on AI safety and ethics. Investing heavily in AI safety is crucial, and a substantial portion of university research should focus on this area. The federal government should invest more in safety
research and detection of AI. Features like “off switches” should
be required after appropriate research and testing. It's also
important to remember that humanity faces many existential
risks—pandemics, asteroid impacts, nuclear war, to name a few.
AI is just one risk in a broader context, and we need to consider
the trade-offs between these risks and the potential benefits AI
can bring.

Concerns about tech CEOs wielding unprecedented sway over global structures are valid. But we must consider whether we're
more comfortable with the global influence of unelected leaders like Xi Jinping or that of tech CEOs. While both wield power without
direct democratic accountability, tech CEOs rely on market forces
and public opinion.

Moreover, the democratization of AI development and the presence of multiple AIs make power concentration unlikely.

Part of my motivation to pen this piece is to dispel the dystopian vision of an AI-first world. First and foremost, it is a cognitively
lazy vision – easy to fall into and lacking all imagination:
large-scale job losses, the rich getting richer, the devaluation of
intellectual expertise as well as physical work, and the loss of
human creativity all in service of our AI overlords. On the contrary,
AI can provide near-free AI tutors to every child on the planet and near-free AI physician expertise to everyone on the planet. Virtually every kind of expertise will be near-free: oncologists, structural engineers, software engineers, product designers, chip designers, and scientists all fall into this camp. AI will also help control plasma in fusion reactors and power self-flying aircraft, self-driving cars, and public transit, making all of these substantially more affordable and accessible. AI promises to democratize even
how we build enterprises. But more than anything it will be an
equalizing force as all humans will be able to harness the same
expertise.

I estimate that over the next 25 years, AI can perform 80% of the
work in 80% of all jobs—whether doctors, salespeople, engineers,
or farm workers. Mostly, AI will do the job better and more
consistently. Anywhere that expertise is tied to human outcomes,
AI can and will outperform humans, and at near-free prices. AI will
transform how we discover and utilize natural resources such as
lithium, cobalt, steel and copper, such that our resource discovery
capabilities outpace consumption. The current challenge is not a
lack of resources, but a limitation in our capacity to find them – a
barrier AI is poised to help break. Further, AI could help optimize
the use of resources and it will help discover new materials.

For the next 5-10 years, humans will oversee AI "interns," doubling or tripling productivity. Eventually, we'll decide which jobs
to assign to AI and which to keep. AI will make expertise nearly
free, making goods and services more accessible to everyone.

Our physical lives will transform. Bipedal robots could revolutionize sectors from housekeeping to manufacturing, freeing
people from undesirable jobs. In 25 years, there could be a billion
bipedal robots performing the wide range of tasks that humans
do. We could free humans from the slavery of the bottom 50% of really undesirable jobs, like assembly-line and farm work.

It is not just our physical lives that will be transformed. Soon, most consumer access to the internet could be via agents acting on behalf of consumers and empowering them to efficiently manage daily
tasks and fend off marketers and bots. This could be a great
equalizer for consumers against the well-oiled marketing
machines that attempt to co-opt the human psyche to increase
consumerism and sell them stuff or bias their thinking.
AI could revolutionize healthcare with personalized medicine,
tailoring treatments to individual genetics, lifestyle, and
environment. AI could be used to detect diseases at an early
stage, often before symptoms appear, allowing for more effective
and less invasive treatments. AI will augment biotechnology to
create effective, scalable precision medicines. An AI oncologist
could access terabytes of research, more than any human could,
making better-informed decisions.

Near-free AI physicians could offer high-quality healthcare globally. Expanding basic primary care, chronic care, and specialized care (e.g., cardiology, oncology, musculoskeletal)
is essential to improving the health of those living in emerging
markets and preventing disease. Near-free 24x7 doctors, accessible by every child in the world, would be impossible if we were to continue relying on humans for healthcare. Indeed, the
current debate has painfully failed to focus on the most salient
consequence of AI: those who stand to be most impacted by this
AI revolution are the bottom half of the planet – 4 billion people –
who struggle everyday to survive.

AI could create personalized learning experiences adapting to each student's needs and interests. AI tutors, available 24/7, could make high-quality education accessible worldwide, unlocking opportunity and fostering self-efficacy. AI researchers could expand human knowledge and the rate of discovery.
AI could address climate change by optimizing energy use and reducing emissions, but more than anything, by helping develop low-carbon technologies. It could aid in environmental monitoring and conservation, leading to a sustainable economy.

Powering this AI-utopia will require complementary technologies such as fusion for limitless, clean and cheap power generation.
My bet is on fusion boilers to retrofit and replace coal and natural
gas boilers rather than building whole new fusion or nuclear
plants. There are additionally promising efforts using geothermal,
solar and advanced battery systems for clean, dispatchable
electric power. Multiple vectors are driving down the
environmental cost of compute.

AI could augment human capabilities, allowing us to tackle complex problems. It could be a creative partner, assisting in art,
design, and innovation, pushing boundaries in various fields.

New “jobs” will emerge, and creativity will flourish.

AI could help create just societies by ensuring fair decision-making, reducing biases, and promoting transparency in
governance, well beyond what humans have been able to do. It
could assist in developing evidence-based policies through vast
data analysis.

We could have 24/7 lawyers for every citizen, amplifying professional capacity and expanding access to justice. Education,
legal, and financial advice would no longer be reserved for
society's upper crust.

In a utopian vision, AI could shift societal focus from economic growth to well-being and fulfillment. Imagine a world where
passions emerge naturally, as people pursue what excites them
without the pressure to secure a job or develop a career.

Professions not typically associated with financial security—like arts, competitions and sports—could become achievable for
anyone, unconstrained by the need to make a living. Life would
become more meaningful as the 40-hour workweek disappears.

Obstacles stand in the way—incumbent resistance, political exploitation of fears, technical failures, financial risks, anti-tech
sentiment, and negative public perception. But I believe an
AI-driven utopia is achievable with the right societal choices and
technological advancements.

In the next five years, life may not feel dramatically different. But
between 10 and 20 years from now, we'll witness dramatic
transformations reshaping society. While still on the horizon, this
era of unprecedented prosperity is visible today.

Capitalism may need to evolve. The diminishing need for traditional economic efficiency allows us to prioritize empathetic
capitalism and economic equality. Disparity beyond a point leads
to unrest, so policy must address this.
Human labor may be devalued, putting downward pressure on
wages. Labor will be devalued relative to capital and even more
so relative to ideas and AI technology.

AI's leveling of skill differences could compress wages. Value creation may shift to creativity, innovation, or AI ownership,
potentially leading to new inequalities. We can't simply extrapolate
past economic history; AI may surpass human capabilities
altogether, making education and upskilling less effective.

The AI cycle will be faster than previous technological shifts, making adjustment harder. Changes could hit some more
seriously than others, especially in the next decade or two, even if
society as a whole improves.

Let’s continue this thought experiment around wage compression and job disruption using the aggregate cost of physician salaries in the U.S. healthcare system as a starting point. It is north of $300 billion, likely closer to $400 billion (take roughly one million doctors each making $300–400k). Predicting the fate of the $300–$400
billion spent annually on U.S. physician salaries hinges on supply
and demand elasticities in healthcare. Consider demand elasticity.
If medical costs drop by 90% due to AI automation, will
consumption increase tenfold to keep the ~$350 billion spent on
U.S. physician salaries constant? Unlikely. People won't break
more bones because orthopedic care is cheaper. But they might
increase preventive care, mental healthcare, and elective
procedures as access barriers fall. AI will hyper-personalize and possibly commodify high-quality entertainment and media, and
any art form will vie for the same 24 hours of user attention each
day. Diversity and quality of media will likely expand dramatically;
will consumer spending also increase? In other areas like
accounting even if services become cheaper through automation,
a company won't require ten times more audits. The demand is
bounded by regulatory needs, not cost.
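The back-of-the-envelope figures in this thought experiment can be checked in a few lines. A minimal sketch, using the essay's rough estimates of one million doctors at a $350k midpoint salary (assumptions, not precise data):

```python
# Rough check of the essay's physician-salary estimates (not precise data).
doctors = 1_000_000          # approximate number of U.S. physicians
avg_salary = 350_000         # midpoint of the $300-400k range
total_spend = doctors * avg_salary
print(f"Aggregate physician salaries: ${total_spend / 1e9:.0f}B")  # → $350B

# If AI cuts per-service cost by 90%, total spending stays constant only
# if consumption rises by the reciprocal of the remaining cost fraction --
# the tenfold increase the essay argues is unlikely.
cost_drop = 0.90
required_volume_multiple = 1 / (1 - cost_drop)
print(f"Volume increase needed to hold spend flat: {required_volume_multiple:.0f}x")
```

The 10x multiple is why demand elasticity matters: bounded demand (broken bones, mandated audits) cannot expand that much, so lower prices translate into lower total spend.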

Even if per-service costs decline, total spending may stay the same if increased volumes balance lower prices. Each sector will
find its equilibrium between supply, demand, and elasticity,
making precise predictions difficult without a nuanced,
sector-specific analysis for which, today, we have insufficient data.
In the fullness of time, the new AI economy will find an equilibrium
once demand hits the asymptote of total consumption and time in
each sector.

AI's surge in productivity could lead to deflation—a decrease in general price levels. Increased efficiency with fewer inputs (like
lower labor costs due to AI and robotics) and heightened
competition can trigger deflation and job loss.

But this deflationary economy challenges traditional measures like GDP. If we consume more but spend less due to lower prices,
GDP may not reflect well-being. GDP won't mean much if it
doesn't capture increased living standards and abundance.
We need new economic measures accounting for these changes.
Deflation here isn't negative; it's increased efficiency, production
of goods and services and abundance. Our current lexicon
equates GDP growth with prosperity—a flaw. Monetary policy
may not be as effective in this new age.

We face choices: accelerate, slow down, or moderate disruptive technologies, and decide whether to compensate those displaced.
Change can be painful for the disrupted, and embracing AI's
positives requires keeping those affected at the center of policy.
These changes pose significant challenges, but they also offer an opportunity to create, over the 25+ year window, a more empathetic society and a post-resource-constrained world. This is a luxury
that has been unaffordable in the past but may now be ours to
use.

Given the massive productivity gains on the horizon, and the potential for annual GDP growth to increase from 2% to potentially 5+% over the next 50 years, per capita GDP could hit ~$1M
(assuming 5% annual growth for 50 years if GDP is still a good
measure). A deflationary enough economy makes current nominal
dollars go much further and I suspect current measures of GDP
will be poor measures of economic well being. Of course, this
vision is only possible with a UBI-like mechanism that provides a
minimum standard – that on the whole – far exceeds today’s,
given accessibility of goods and services that enrich our lives.
I can imagine a consumer utopia in 25+ years, where we’re not
supply constrained in most areas and deflation is actually a
positive tailwind for access and more equal consumption. Imagine
a world in which housing, energy, healthcare, food, and transportation are all delivered, even to your door, for near-free, by machines; few jobs in those fields remain. What would be the key
characteristics of that world, and what would it be like to live in it?
Humans will finally be “free”.
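The ~$1M per-capita figure is a simple compounding exercise. A quick sketch, assuming a round-number starting point of about $85k for current U.S. per-capita GDP (an assumption for illustration, not official data):

```python
# Compound per-capita GDP under the essay's two growth scenarios.
# The $85k starting value is a rough assumption for current US per-capita GDP.
start = 85_000
years = 50

for rate in (0.02, 0.05):
    final = start * (1 + rate) ** years
    print(f"{rate:.0%} growth for {years} years: ~${final:,.0f}")
```

At 5% sustained growth, the result lands just under $1M, matching the essay's ballpark; at 2%, it stays under a quarter of that, which is the gap the "transition funds" argument rests on.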

An interesting parallel is China, whose entry into the World Trade Organization (WTO) in 2001 indeed created deflationary pressures on the United States in the years that followed. This was largely due to several factors related to trade liberalization and increased competition from Chinese exports. The movement
of labor overseas has resulted in a loss of tens of millions of
stateside manufacturing jobs, yet little policy was centered around
upskilling or taking care of those whose livelihoods were
upended. With AI, we have the opportunity to free ourselves from
this low-cost labor.

Ultimately, the future will be what we decide to guide this powerful tool toward. It will be a series of policy choices, not technological
inevitability. Choices will vary by country. We must harness AI
responsibly, ensuring its benefits are distributed equitably.

I'm a technology possibilist, a techno-optimist—for technology used with care. Reflecting on my words from 2000, we'll need to
redefine what it means to be human. This new definition should
focus not on the need for work or productivity but on passions,
imagination, and relationships, allowing individual interpretations
of humanity.

1. Introduction
For four decades, I've devoted myself to and studied disruptive
innovation. I started with the microprocessor, a seismic shift that
gave rise to two key developments: the distributed computing
pioneered by Sun Microsystems, of which I was a co-founder, and
the personal computer. Then, in 1996, the browser emerged,
marking another epochal shift. I was part of the team that invested
in Netscape, the first important browser, and incubated Juniper,
laying the groundwork for the fundamental TCP/IP backbone of
the internet—a technology that many major telecommunications
companies had cavalierly dismissed. This was the dawn of the
internet revolution, during which we made strategic investments in
nascent giants like Amazon and Google. The profound impact of
the internet revolution speaks for itself. Then, in 2007, came the
iPhone, and with it, the mobile platform era. Each new platform enabled large-scale application innovation and an explosion of new ideas.

There is a point at which a difference in degree becomes a difference in kind. It is likely that AI is different in kind from the
previous technological phase changes. The microprocessor, the
internet, and the mobile phone were tools for the human brain to
leverage and made much of our lives mostly better. But they did
not multiply the human brain itself. AI, by contrast, amplifies and
multiplies the human brain much as the advent of steam engines
and motors amplified muscle power.

Prior to these engines, we relied on passive devices such as levers and pulleys, and on animals. We consumed food for energy and expended it in labor. The invention of engines allowed us to convert energy not from our bodies but from oil, steam, and coal, thereby allowing humans to use external non-biologic energy to augment output. This transition improved the human condition in transformative ways.

AI stands as the intellectual parallel to these engines, albeit likely more impactful. Its capacity to multiply expertise, thinking ability, and knowledge means that over the next decade we can significantly transcend human brain capacity. We’re on the cusp of a
near-infinite expansion of brain power that can serve humanity.

Artificial intelligence promises a future of unparalleled abundance. The belief in the boundless potential of what could be animates
not just techno-optimists like myself but entrepreneurs who know
that the "possible" doesn't simply manifest—it must be brought
into existence. Imagine a post-scarcity economy, similar to the
vision in Star Trek, where technology eliminates material
limitations and goods are produced so efficiently that scarcity
becomes obsolete. Most jobs would go away, be they in healthcare, law, engineering, warehousing, retail, restaurants, etc. And yet, we’d still have enough abundance to pay citizens via some redistribution effort so they can cover a minimum standard of living materially higher than today’s minimum. Everything would be
substantially cheaper and yet everyone would have enough
income for their needs. If people work, it will be because they
choose to, not because they need to, and “work” will be a pursuit of passions, not a requirement to provide for one’s family. In this post-scarcity world, the economic impact of this future would
eclipse that of previous technological revolutions, such as the
microprocessor or the internet.

The dream of AI's potential is vast, but the journey toward it is complex, as our transition to a post-scarcity society may be
painful in the short term for those displaced. Mitigation of those
sequelae is possible and must come from well-reasoned policy.
The next 0-10 years, 10-25 years and 25-50 year time frames will
each be radically different from each other. The pace of change
will be hard to predict or anticipate with respect to technology
capability far exceeding human intelligence, and the rate of
societal penetration by area.

Adding to some current confusion on whether the future is dystopic or utopic is the current AI hype cycle, which, with its concomitant failures, distorts views. Most AI ventures will end in
financial loss. In aggregate more money will be made than lost,
but by a small concentration of world-changing companies. What
excites me most isn't the magnitude of AI’s profits but its potential
to indelibly rechart the course of the world and reinvent societal
infrastructure for the better.

2. Dystopian view of AI
Pessimists and doomers paint a dystopian future in two parts:
economic and social. For each of their concerns, I address why
these fears are mostly unfounded, likely myopic, alarmist, and
actually harmful. They are also addressable through societal
choice. The doomers’ dystopia also represents errant risk/reward
arithmetic in my view. I understand acutely that AI is not without
its risks. AI risks are real but manageable. In the present debate,
the doomers are focusing on the small “bad sentient AI” risk, and
not the most obvious one – losing the AI race to nefarious “nation
states,” or other bad actors, making AI dangerous for the west.
Ironically, those who fear AI and its capacity to erode democracy
and manipulate societies should be most fearful of this risk! It is
why we can’t lose to China, and why we must step up and use AI
for the benefit of all humanity. China is the fastest way the
doomers’ nightmares come true. Are you ready to trust Xi and his
Putin-like appendages for the equitable distribution of one of the
world’s most powerful technologies? That would be dystopian.

A. Job loss and economic inequality


In an economic dystopia, wealth gets increasingly concentrated at the top while both intellectual and physical work get devalued; widespread job loss and deflation destroy the economy and purchasing power, and inequalities are exacerbated. AI could
create a world where a small elite thrives while the rest face
economic instability, especially in a democracy that drifts without
strong policy. But smart interventions—like income redistribution,
minimum living standards (perhaps UBI?), and strategic
legislation over the next 25 years, driven by democracy—can
prevent this. I believe these interventions are achievable because
Western capitalism is by permission of democracy and its voters.
If we correctly handle this phase shift, AI will generate more than
enough wealth to go around, and everyone will be better off than
in a world without it. Over the next 5–20 years, it is possible that AI will
create new jobs we cannot currently conceive of. But over the
long haul, AI will eliminate most “jobs” insofar as a job is defined
as a trade or profession one must pursue to support their needs
and lifestyle.

Factor in an aging global population and a shrinking pool of young workers, and AI becomes essential. With the right policies, we
could smooth the transition and even usher in a 3-day workweek.
If GDP growth jumps from 2% to 5%, we'll have the abundance to
create "transition funds," much like the oil funds that have fueled
prosperity in certain states and countries like Norway. I expand on
some of these economic possibilities in section 4.

B. Social control and manipulation


Socially, the naysayers see a world in which AI undermines
humanity, starting with pervasive surveillance. But these
outcomes aren't inevitable. Legislation, implemented country by
country, will shape how AI integrates into our lives. In democratic
societies, these will be collective choices. I, for one, am willing to
trade some freedoms for a society with less crime, but that
doesn't mean embracing totalitarianism. (And let's not forget, with
AI, the reasons for crime might even diminish.) A reasonable
balance can be achieved, where we benefit from AI's
advancements without succumbing to the dystopian visions
alarmists predict if we are willing to put constraints on AI’s legal
uses.

Additional fears include AI being used to manipulate public opinion, control information, and influence elections through targeted propaganda or deepfake technology. In fact, we are already seeing Russian interference in the 2024 US election, and it can get much worse with more powerful AI. This could
undermine democracy and create a society where truth is difficult
to discern. However, the fears around manipulation and control
rely on the assumption there would be a single, despotic AI
overlord, which is far-fetched. More likely, we'll see a diversity of
AIs, each serving different interests and thereby preventing the
consolidation of power and influence.

C. Loss of human agency and ethical considerations in AI systems


Concerns about AI making critical decisions in areas like
healthcare, justice, and governance are valid, especially given the
hidden biases in current systems. But these biases originate from
humans, and AI offers a chance to recognize and correct them.
For example, human physicians tend to do more surgeries if they are paid per surgery, and it is hard to argue they are unbiased. AI will be the only way unbiased care can be provided. AI will
surface the biases and then correct for them. This will create a
world of abundance, and more equitable access, as I further detail
below.
In my view, humans will retain the power to revoke AI's
decision-making privileges, ensuring that AI remains an "agency"
guided by human consensus, not an unchecked force. The
specter of a sentient, malevolent AI is a risk, but it's one we can
mitigate.

As AI reshapes work and ultimately makes decisions in healthcare, justice, and governance, potentially overriding human
intellect and judgment, we face an opportunity to redefine human
purpose, as well as improve current outcomes. Today, from age
six, we're programmed in school to get an education to secure a
job, which ends up orienting much of our sense of self. But in 25
years, without this pressing imperative, we might teach children to
explore, imagine, discover, and experiment. Liberating people
from survival jobs could redefine what it means to be human,
increasing our "humanness" and expanding the diversity of our goals. Broader education may be needed, not to train for a job, but to pursue intellectual pursuits for their own merits, instead of a “purpose” like a job.

Ultimately, "humanness" will be defined by our freedom to pursue these motivations, freed from the chains of survival servitude.
More than anything, I hope in a world with less competition for
resources, more humans will be driven by internal motivation and
less by external pressures. Society and individuals will have the
ability to choose which technology they personally want to
leverage, and where they want to spend time. If someone likes
making personal decisions without any AI leverage, then they will
of course be free to go on without any copilots. Nothing would be
forced upon us. AI will not be an overlord but rather a tool
available to fulfill our needs and requests. In a smaller way, the
Amish in the US forsake technology by choice. There may be
thousands of such communities.

Pursuant to the above, there are fears that reliance on AI might lead to the erosion of human ethical and moral standards. If AI
systems are programmed to prioritize efficiency over ethical
considerations, it could result in decisions that are harmful or
unjust. But these are societal choices, made by humans, not
machines. If we get it wrong, the blame will be ours to bear.

By that same token, when pessimists worry about ethical and moral degradation as machines lack the nuanced understanding
of human values, ethics, and emotions, I'd suggest this is a much
greater danger with humans in charge. Alignment matters, but the
same could be said for humans orchestrating a group and trying
to make a decision. First getting on the same page matters. Either
AI is powerful enough to understand and follow our direction or it
isn’t. We can't have it both ways. Fully independent AI may pose
other larger risks addressed below but “smart enough AI” not
understanding our directions is not one of them.

D. Loss of creativity and critical thinking


With regard to fears of an erosion of human creativity and critical
thinking, I think that is a narrow-minded view of an AI world.
Critics fear cultural homogenization due to AI algorithms feeding
users a narrow range of echo-chamber ideas. They worry that
over-reliance on AI could diminish human creativity,
problem-solving skills, and critical thinking, as people become
more dependent on machines to make decisions for them.

But I see a world in which someone like myself – endowed with
zero musical talent – can create a personalized song to deliver
the message from the speech I wrote to my daughter for her
wedding. True story. It meant a lot to me. We can expand our
creativity beyond our current confines and capabilities with AI.
And great artists, painters or performers will be able to leverage
these tools even more. I don’t see a loss of humanness but rather
an augmentation and expansion of it, given that AI systems may
be even better (or different) at creative tasks, may soon display
emotion and empathy, and in so doing can complement our own.

E. AI autonomy, existential risk, and supremacy & China


In the most extreme view, doomers warn AI could become
uncontrollable and render humans extinct. The risk of a "sentient,
independent, malevolent AI" is probably the most significant
threat AI poses, and it's one we must take seriously. While the
idea of a "hard take-off"—where AI rapidly surpasses human
control—is real and demands vigilance, it's important to weigh this
risk against the immense benefits AI offers humanity, or against
the risks AI creates in the hands of adversarial nation states.

Yoshua Bengio and Geoffrey Hinton, both widely recognized as
“godfathers of AI,” echo these concerns. Alongside Hinton, Bengio
has sounded the alarm on the catastrophic risks of AI misused by
ill-intentioned actors or organizations. The potential for AI to
self-replicate, protect its survival, build systems to be impervious
to human intervention, and/or exploit vulnerabilities in digital
infrastructure could not only destabilize democracies but also
subvert all of humankind. This concern is not only about the AI
itself but about the accessibility of such powerful tools becoming
widespread and falling into the hands of those with malicious
intent. Bengio advocates for international cooperation to regulate
AI development, prevent its misuse, and develop
countermeasures to safeguard humanity. I would argue that
international treaties are pointless here given that the use of AI
is not verifiable (unlike bio or nuclear weapons, whose use is
obvious). Max Tegmark similarly focuses heavily on the "control
problem," but even this is being addressed by advancements in
AI safety research. Efforts like OpenAI’s work on reinforcement
learning with human feedback (RLHF) and the increasing broad
research focus on AI interpretability are pushing the field toward
systems that are more transparent and controllable. Paul
Christiano, the head of AI Safety at the US AI Safety Institute,
notes that alignment problems, while real, are not unsolvable and
are gradually being tackled through both technical solutions and
more rigorous oversight frameworks. These include systems that
allow humans to supervise AI learning processes more closely,
ensuring that the goals AI optimizes for remain aligned with
human values.

Moreover, the analogy to nuclear weapons or pandemics as
existential risks is somewhat misplaced. Unlike AI, nuclear
weapons and pandemics have immediate and clear destructive
potential. AI, on the other hand, is a tool that can be designed and
guided to serve specific functions. Stuart Russell, author of
"Human Compatible," has emphasized that with careful planning,
AI can be controlled to ensure it does not pose a threat. He
proposes that AI should be built with uncertainty in its objectives,
ensuring that it constantly seeks human approval for its decisions.
This approach, often referred to as 'value alignment,' makes it
improbable for AI to go rogue in the ways Tegmark, Bengio, and
Hinton have suggested. But it is not automatic or guaranteed so
more research funding is warranted. But slowing down AI
development with regulation is too large a risk to take in our battle
with unfriendly nation states. Falling behind is by far the largest
danger that scares me.

Researchers like Yann LeCun, Chief AI Scientist at Meta, have
pointed out that current AI systems lack the foundational
mechanisms required for self-awareness or autonomy. To them,
fears of sentient AI taking over are vastly overstated. AI, as we
know it, remains entirely reliant on human-generated inputs and
goals, without the capacity for independent motivation or the
agency to set its own objectives. LeCun argues that while AI is
advancing rapidly, the idea of AI developing sentience is still far
beyond our current technological capabilities. (But guess what is
well within near-term possibility? China gaining ground in this AI
race and leveraging the brute force of its regime to dominate the
globe’s political and socioeconomic values. More on this below.)
My synthesis of this doomsday fear
mongering is that the cat is already out of the bag and we have a
forced choice between sitting on our hands and making ourselves
completely vulnerable to bad actors’ use of AI, or charging
forward developing a technology that can fight its malicious
counterparts.

Further, it is likely that we'll have multiple AIs, making it unlikely
that all would turn against humanity simultaneously, even in a
worst-case scenario. Most likely, the growing emphasis on AI
explainability will enhance safety by aligning AI's goals with
human values. Within the next decade, I believe we'll move
beyond the scare-mongering around "black box systems" with no
controllability. However, solving this problem requires a laser
focus on AI safety and ethics.

Investing heavily in AI safety is crucial, and a substantial portion
of university research should focus on this area. The federal
government should invest more in safety research and detection
of AI. Features like “off switches” should be required after
appropriate research and testing. It's also important to remember
that humanity faces many existential risks—pandemics, asteroid
impacts, nuclear war, to name a few. AI is just one risk in a
broader context, and we need to consider the trade-offs between
these risks and the potential benefits AI can bring. In my view, the
risk of falling behind in AI technology to China and other
adversaries is a far greater risk than sentient AI risk. Slowing
down the development of AI could be disastrous for democracies
and the greatest risk we could possibly take.
The nation that emerges as the leader in technology, particularly
in AI, in the coming two decades will be in the commanding
position for the global distribution of technology, economic
benefits and influence, and thus — values. By far, AI will be the
most valuable technology not only in cyberwarfare or national
defense (killer-robot types of applications), but in things like free
doctors and free tutors for the planet. The country that wins this
AI race, and related races like fusion, will project so much
economic power that it will also project political power and likely
anoint the world's political system. What influences Southeast
Asia, Africa, Latin America, etc. is at stake. Democratic values are
at stake in this technology battle, and so we should do whatever
we can to win this battle and beat China. Their view of utopia is
likely different.

I suspect China may take Tiananmen Square tactics to enforce
what the CCP thinks is right for its society over a 25-year
period. By contrast, we will have a political process. If democratic
values are to win globally, we must approach AI cautiously but not
at the expense of losing the AI race. This is why I point to China’s
14th five-year plan, overseen by President Xi, which specifically
declares its intent to win in AI and 5G wireless. The former will
allow for economic power while the latter allows China to surveil
all citizens in 100+ countries by controlling their
telecommunications networks and TikTok. Technological
leadership is an existential priority worthy of wartime mobilization.
Imagine Xi’s bots surreptitiously and individually influencing Western
voters with private conversations, free of “alignment constraints”
that worry cohorts of American academics and philosophers. To
address these risks, inadvertent and advertent, we must
substantially increase our research and investment in safety
technologies, but not aggressively regulate AI.

If one is to believe that China will peak in the next decade
because of demographics, slowing growth, and a large debt
burden, we must believe it will get more desperate to win and be
more dangerous in its waning years — the opposite of a
Thucydides Trap. This is why we must not be at their mercy while
we debate hypothetical scenarios and slow down progress with
misprioritized regulation.

We may have to worry about sentient AI destroying humanity,
just as the risks of an asteroid hitting Earth or a pandemic also
exist, but the risk of China destroying our system is significantly
larger in my opinion. In the present debate, the doomers are
focusing on the small risks and not the most obvious one –
losing the AI race to bad actors, which makes AI dangerous for
the West. Ironically, those who
fear AI and its capacity to erode democracy and manipulate
societies should be most fearful of this risk!

F. Corporations vs countries
In an AI world, tech CEOs controlling these technologies could
hold unprecedented sway over global employment, economic
structures, and even the distribution of wealth. Their platforms
might become the primary mediators of work, education, and
social interaction, potentially surpassing the role of traditional
governments in many aspects of daily life. Critics argue that these
executives wield influence that rivals or surpasses that of many
nation-states. They point to the ability of tech platforms to shape
public discourse, influence elections, and even impact geopolitics
as evidence of this outsized power. However, this concern raises
an interesting question, and I go back to my previous framework
of a forced choice between an ascendant and maximalist China
vs our freer society and economy: why should we be more
comfortable with the global influence of unelected leaders like Xi
Jinping than with that of tech CEOs? No tech CEO is likely to own
anywhere near a controlling interest, or even a material one,
and they will have shareholders and boards to report to. While
both wield immense power without direct democratic
accountability, there's a crucial difference in their incentive
structures. Tech CEOs, for all their flaws, ultimately rely on the
continued support and engagement of their users, customers,
boards and shareholders. They must, to some degree, respond to
market forces and public opinion to maintain their positions –
even if they, themselves, are ill-intentioned characters. In
contrast, authoritarian leaders like Xi disregard public sentiment,
using the state apparatus to suppress dissent and maintain
control. This dynamic suggests that while the power of tech CEOs
is certainly concerning and worthy of scrutiny, it may be preferable
to unchecked authoritarian power in terms of responsiveness to
global stakeholders and having economic vs political/personal
incentives.

3. Utopian view of AI
Part of my motivation to pen this piece is to dispel the dystopian
vision of an AI-first world. First and foremost, it is a cognitively
lazy vision – easy to fall into and lacking all imagination:
large-scale job losses, the rich getting richer, the devaluation of
intellectual expertise as well as physical work, and the loss of
human creativity all in service of our AI overlords. We in the West
have a very distorted view of what dystopia is. The majority of the
authors of this dystopia have the luxury of pontificating from their
ivory towers, already insulated from the drudgery and existential
pressures facing the majority of Americans today, let alone the
billions of similar people worldwide who face risks of death
every day and make many very short-term trade-offs at huge
personal cost. I’m referring to the 40% of Americans who can’t
cover an unexpected $400 expense, the 100 million Americans
who lack proper primary healthcare, or the half a million citizens
who file for bankruptcy every year due to exorbitant medical
expenses.

AI can provide near-free AI tutors to every child on the planet and
near-free AI physician expertise to everyone on the planet.
Virtually every kind of expertise will be near-free: oncologists,
structural engineers, software engineers, product designers, chip
designers, and scientists all fall into this camp.
Microprocessors made most electronics and computing near-free,
if judged by the computational power in a cell phone. AI will make
similar cost reductions apply to many more areas than
microprocessors ever could: making all expertise near-free, most
labor very cheap through bipedal and other robots, and materials,
from metals to drugs, much cheaper through better scientific and
resource discovery, and much more. It will also help control
plasma in fusion reactors and enable self-flying aircraft,
self-driving cars, and public transit, making all substantially more
affordable and accessible by all. AI will provide personal,
intelligent assistants to every individual, offering help with daily
tasks, personalized fitness and nutrition, and even executive
support. AI-powered tools will generate illustrations, icons, logos,
and art, transforming how creatives work. We will see AI copilot
physicians, AI automating radiology workflows and diagnostics,
while AI financial analysts automate tasks like accounts
receivable management and financial modeling. AI will assist with
drafting contracts, creating video games, and powering fully
autonomous fleets. AI copilots will assist engineers in everything
from formal verification to thermal management in chips, civil
engineering, and interior design. From self-driving MRIs to
personalized audiobooks, we are limited only by what
entrepreneurs can imagine. AI promises to democratize even how
we build enterprises. Programming, for example, will no longer be
siloed to the halls of computer science, because we will soon be
able to program in natural language vs complex programming
languages, creating nearly a billion programmers.

A. Increased efficiency and productivity


I estimate that over the next 25 years, possibly 80% of 80% of all
jobs can be done by an AI, be it primary care doctors,
psychiatrists, sales people, oncologists, farm workers or assembly
line workers, structural engineers, chip designers, you name it.
And mostly, AI will do the job better and more consistently! The
collaboration between human and AI is likely to start in the form of
the human knowledge worker overseeing 4-5 AI “interns,”
enabling the human to double their productivity. For the next
decade we should expect this to be the dominant mode of
increasing the productivity of human knowledge workers by 2-5X
with full human oversight in most critical work. We already see the
early days of AI taking over mundane, repetitive workflows,
allowing humans to focus on more creative, strategic, and fulfilling
work. Eventually we will decide as humans, what jobs we assign
to humans and what we choose to do ourselves. We also see
copilots synthesizing terabytes of data better than a human
possibly can. Anywhere that expertise is tied to human outcomes,
AI can and will outperform humans, and at near-free prices. Take
an oncologist treating a patient for breast cancer. It’s very unlikely
that they remember the last 5,000 papers on a certain breast
cancer. Note that even if a task or job can be done by an AI, that
does not mean all societies will allow it to be done by an AI.
These politico-economic-societal decisions will be made country
by country and likely not by technologists. Further, it is worth
musing: will all labor – farm work to oncologists & engineers – be
valued equally by society where the expertise sits in an AI? Will AI
be the great equalizer?

And what about the natural resources, physical inputs like steel,
copper, lithium and cement, required to underpin much of this
software and hardware? As we witness China's strategic moves
to dominate resource-rich regions like Africa and South America,
particularly in controlling critical mineral supply lines, the
imperative to innovate becomes clear. AI will transform how we
discover and utilize natural resources such as lithium, cobalt, and
copper, such that our resource discovery capabilities outpace
consumption. The current challenge is not a lack of resources, but
a limitation in our capacity to find them – a barrier AI is poised to
help break. Further, AI could help optimize the use of resources
(natural, raw, and otherwise), reducing waste and improving the
efficiency of industries such as agriculture, manufacturing, and
energy. This could lead to a more sustainable economy and better
stewardship of the planet.

B. Improved quality of life


Our physical lives will also be upended for the better. Bipedal
robots have the capacity to transform every vertical from
housekeeping to eldercare to factory assembly lines & farms.
Few are preparing for how this will radically change GDP,
productivity, and human happiness, and free people from the
servitude of these assignments we call jobs. These robots will
create enough value to support the people they replace. In 25
years there could be a billion bipedal robots (in 10 years, a
million) doing a wide range of tasks including fine manipulation.
We could free humans from the slavery of the bottom 50% of
really undesirable jobs, like assembly-line & farm work. This
could be a larger industry than the auto industry. But it will be the
humans’ responsibility to not take a lazy and indulgent approach
to life.

AI could also make the physical distance between us smaller. We
could replace the majority of cars in most cities with AI-driven,
autonomous, personal rapid transit systems and last mile
self-driving cars, increasing the passenger capacity of existing
streets by 10X. This will dramatically shrink the auto industry and
reduce nominal GDP while making local, personal transit far more
convenient, faster and cheaper.

It is not just our physical lives that will be transformed. Soon, most
consumer access to the internet could be through agents acting on behalf
of consumers and empowering them to efficiently manage daily
tasks and fend off marketers and bots. Tens of billions of agents
representing consumers running 24x7 wouldn’t surprise me and is
on my current wish list. Assuming the vectors of social benefit
prevail, this could be a great equalizer for consumers against the
well-oiled marketing machines that attempt to co-opt the human
psyche to increase consumerism and sell them stuff or bias their
thinking. They might have the smartest AIs protecting their
interests.

I envision a personal AI agent for every individual, designed to act
in their best interest—shielding them from manipulative marketing
and the brain hacks of today where marketers can make
consumers buy or click on things they wouldn’t otherwise. Likely,
we will have powerful, privacy preserving, personalized AI that
represents and protects the consumer like a consumer protection
agency. Think of it as "Spy vs. Spy" in the digital age, where AI
empowers us as consumers and citizens against corporate AIs
with incentives to manipulate us.

C. Enhanced healthcare and longevity


AI could revolutionize healthcare by enabling personalized
medicine, where treatments are tailored to an individual’s genetic
makeup, lifestyle, and environment. This could lead to better
health outcomes and longer, healthier lives. AI could be used to
detect diseases at an early stage, often before symptoms appear,
allowing for more effective and less invasive treatments. This
could significantly reduce the burden of chronic diseases and
improve overall public health.

Quality, consistency, and accessibility in services like healthcare
will improve as they become nearly free. Not only will very broad
primary care, including mental health care and chronic disease
care, become table stakes worldwide, but AI will augment current
biotechnology to create precision medicines that are actually
effective, minimize off-target effects, and are scalable and cheap
enough for the globe. More specialized AI physicians - such as AI
oncologists - will have access to terabytes’ worth of the latest
research and data, making them more effective and up-to-date
than a human counterpart. While there will likely be a need for human
involvement – and AI will know when to call in the human doctor
based on patient preference – a 24x7 AI oncologist will provide far
greater touchpoints, and be able to make decisions on diagnosis
and clinical course with far more information synthesized and
outcomes modeled, leaving human doctors free to engage in
more fulfilling activities. The same rings true for other specialties
and all manner of chronic care and diagnostic testing.

The AI dystopia fear-mongering is not coming from the patients
who struggle to be seen by a grossly inefficient
healthcare system. 150 million Americans live in federally
designated mental health professional shortage areas and over
half of all adults with mental illness cannot receive treatment. We
should be asking those 28 million individuals – not a few
fear-mongering elitist academics – whether they’d welcome the
following news: the first large language model AIs approved in
the UK are now doing intake for behavioral health in 40% of NHS
mental health clinics, and they are showing superior outcomes. In
time, this trend will lead to near-free mental health care. This is the
utopian side of AI – the long awaited technological revolution that
can solve so much of the pain caused by our present-day
systems.

My speculation about Reinventing Societal Infrastructure with
technology, allowing all 7.9 billion people on the planet to live like
the top 10% richest humans, seems much more achievable now
with the unveiling of AI’s ever expanding capabilities. Expanding
basic primary care, chronic care, and specialized care (i.e.,
cardiology, oncology, musculoskeletal, etc) is essential to
improving the health of those living in emerging markets and
preventing disease. Near-free 24x7 doctors, accessible by every
child in the world would be impossible if we were to continue
relying on humans for healthcare. Indeed, the current debate has
painfully failed to focus on the most salient consequence of AI:
those who stand to be most impacted by this AI revolution are the
bottom half of the planet – 4 billion people – who struggle
everyday to survive.

Twenty years ago, The Lancet found that in the 42 nations that
account for 90% of global child mortality, 63% of child deaths
could be prevented through more effective primary care, which
amounts to 6 million lives per year. AI could make this near-free.
In western nations we take for granted the preventability of
diseases like diarrhea, pneumonia, measles, malaria, and
perinatal HIV/AIDS transmission. There is no realistic path for
enough human PCPs to reach and have high touch-point
encounters with every child in the less privileged parts of the
world. Should we charge forward and embrace AI as a society, I
imagine that if I were to visit a village in my birth country of India,
the quality of my care would be greater than if I saw a local PCP
at Stanford, since the village in India will have adopted AI faster,
given potential incumbent friction stateside. Should we worry
about the remote possibility of sentient AI killing 6 million people
or the certainty of six million children’s deaths every year, year
after year?

D. Education and knowledge expansion


AI could create personalized learning experiences that adapt to
each student’s needs, pace, gaps in knowledge, and interests,
leading to more effective education and higher levels of
achievement for all learners. AI-powered platforms could make
high-quality education accessible to people around the world,
regardless of geographic location or economic status. This could
democratize knowledge and empower individuals globally. Public
school district zoning and the zip code into which you were born
will be far less influential in one’s outcomes in life. Coupled with
my 25+ years of vision for a society freed from the “need to work,”
AI tutors and human mentors will allow kids the freedom to
explore and be themselves. That is closer to freedom than the
servitude of today’s jobs kids aspire to.
Globally, AI is our only chance at near-free tutors, available 24x7
across innumerable subjects, for every child on the planet. It
would be hard to overstate the impact this could have in unlocking
opportunity and conferring agency, self-efficacy, passion, hope,
motivation, and gender equity to those who otherwise lack the
resources and live in parts of the world that lack the infrastructure
for such broad, consistent, and accessible education.

E. Environmental sustainability
AI could play a crucial role in addressing climate change by
optimizing energy usage, reducing emissions, and developing
new technologies for renewable energy. AI could also aid in
environmental monitoring and conservation efforts. AI could lead
to smarter, more efficient agricultural practices that increase food
production while reducing environmental impact, helping to feed
the growing global population sustainably. But this is linear
thinking. AI scientists could enable us to have much more
innovative approaches to this defining problem we as humans
have created.

Powering this AI-utopia will require complementary technologies
such as fusion for limitless, cheap power generation. And with the
right political climate, we could replace all coal & natural gas
plants by 2050. My bet is on fusion boilers to retrofit and replace
coal and natural gas boilers rather than building whole new fusion
or nuclear plants. There are additionally promising efforts using
geothermal, solar and advanced battery systems for clean,
dispatchable electric power. Multiple vectors are driving down the
environmental cost of compute. Significant improvements are also
being made in areas like algorithmic efficiency and hardware,
allowing AI systems to achieve far more while consuming much
less power. New techniques and the integration of web search
functions are helping AI scale more effectively without drastically
increasing energy consumption. This push for optimized compute
not only supports the growing energy demands of AI but also
ensures that this technology can expand sustainably without
straining global infrastructure.

F. Enhanced human capabilities (and creativity); new experiences


AI could augment human capabilities, allowing people to tackle
complex problems that are beyond the reach of current human
intelligence alone. This could lead to breakthroughs in science
(i.e., climate breakthroughs), technology, and other fields.

AI could serve as a creative partner, assisting artists, designers,
and innovators in exploring new ideas and pushing the
boundaries of what is possible in art, science, and technology.
Consumer services will become hyper-personalized, enabling
individuals to be the artist, composer, producer, writer, and
consumer all at once. Music, for example, may become
interactive like games, with new formats likely to be discovered
and enabled. Such media is already beginning to pour in, in some
cases generating a greater proportion of hits than human-made
counterparts.

Whereas many artistic aspirations were previously closed off to
those who either lacked the talent, were concerned about a
secure financial future, or simply didn’t have the resources to
make a movie or compose a song, those impediments will
gradually cease to exist. This doesn’t mean that celebrity
entertainers will disappear, but that AI-generated art will offer
complexity and deep textures that belie its artificial origin.
Some will hate this and others will love it. The same is true
of musical genres today from classical to heavy metal.

New kinds of jobs will emerge and new creativity will be
unleashed. Before the film camera, there was no job for a film
producer. Entire industries have erupted. Entertainment has
become more popular, and extreme sports have turned into
income-generating professions for many, like the X Games!
Snowboarding, for example, which was not a profession before,
now is. Platforms like Etsy and eBay have facilitated global
artisans and entrepreneurs, while new technology will likely
enable an entirely new world of professions. Inkitt is enabling a
bevy of new creative writers, and platforms like Splash provide
people with an outlet for their creativity, allowing them to be more
expressive about their music tastes and individuality.

G. Ethical decision-making and governance


AI could help create more just and equitable societies by ensuring
fair decision-making processes, reducing biases, and promoting
transparency in governance. AI could assist in the development of
evidence-based policies by analyzing vast amounts of data,
leading to more effective and informed governance.
We could have AI lawyers for every citizen, 24x7, which will amplify
our professionals 10-fold, expanding accessibility & affordability.
There will be plentiful AI judges to resolve conflicts expeditiously,
providing justice without the deeply entrenched human biases.
Education, legal, and financial advice will no longer be reserved
for the upper crust of society. In fact, these will be essential and
near-free government services, just as much as roads and
national defense are today.

H. Human flourishing and well-being


In a utopian vision, AI could help shift societal focus from
economic growth to human well-being and fulfillment. Imagine a
world where passions emerge naturally, as people are given the
opportunity early in life to pursue what truly excites them.

I discussed a child’s freedom before but let me elaborate. If we
start teaching children at age six that they don’t need to excel in
school merely to secure a job, but rather to explore what ignites
their passion, it would create a vastly different formative
experience for their developing brains compared to initiating this
conversation at age 40. Professions that are not typically
associated with financial security—except for the top 1 or
0.1%—such as visual arts, music, sports, writing, etc., could soon
be satisfying and achievable for anybody who wants to pursue
them, unconstrained by today’s needs to make a living and
provide for one’s family.
This shift could redefine what it means to be human—no longer
confined by the drudgery of an assembly line job that defines
one's entire existence. As I suggested back in 2000, we might
need to rethink our very definition of being human. After all, is it
truly fulfilling to spend 30 years mounting a single type of wheel
onto cars on an assembly line? Such jobs, like farm work in 100°F
heat, represent a form of servitude, not human flourishing.

But this isn't just about blue-collar work; white-collar jobs might be
the first to go. Take investment banking, for instance—is it
gratifying to spend 16 hours a day hacking away at an Excel
spreadsheet or PowerPoint deck, repeating the same rote tasks?

Life won’t become less meaningful once we eradicate
undesirable, toil-intensive jobs. Quite the opposite—life will
become more meaningful as the need to work 40 hours per week
could disappear within a few decades for those countries that
adapt to these technologies. Keynes, in 1930, postulated a
15-hour workweek! Imagine what could be possible—I imagine
we could be at a 3-day workweek in 10-20 years, providing the
20% of work we may need or want, with massively increased
productivity. I, for one, would be happy to work on my garden one
day a week and spend the rest of my time learning, even at age
69. Finally, I could have enough time for skiing, hiking, or
indulging my many interests.

It is precisely this opportunity to redefine the human experience
that subverts the pessimists’ argument that the ‘humanness’ in
our lives will disappear. We can build a world that bestows much
more agency, self-efficacy, and hope on every human being by
first removing the financial constraints and considerations that
saddle so many with securing the basics for themselves and their
families. AI, by eliminating the burdens of basic survival, offers us
the chance to build a world where people are free to pursue what
truly matters to them. The main fields of human endeavor may
become culture, arts, sciences, creativity, philosophy,
experimentation, exploration, competition of every sort and
adventure. The real question is whether or not everyone will be
able to participate.

I. Potential obstacles to our utopia can be overcome


Of course, much can stand in the way of turning these predictions
into a utopian reality. Incumbent resistance from established
organizations could impede progress (e.g., Screen Actors Guild).
Politicians might exploit public fears for personal or populist gains,
further stoking resistance. Additionally, technical failures or
delays, potentially exacerbated by supply chain issues or global
conflicts, could set back development. The financial markets also
pose a risk; economic downturns or poor conditions could lead to
promising ideas falling into a funding gap, a ‘chasm’ that proves a
bridge too far.

Anti-tech sentiment, including opposition from naysayers and
those who distrust technology, could hinder widespread adoption
of beneficial developments. This sentiment may align with the
concerns of modern-day Luddites, who might co-opt the
discourse, alongside DEI advocates, potentially diverting the
focus from technology's potential benefits. The situation may be
further complicated by a few AI-related negative outcomes
receiving disproportionate media attention, leading to a tainted
public perception of AI. 'Left field' events, those unpredictable and
out of the ordinary, are common and could unexpectedly disrupt
progress. Lastly, the movement may suffer if key instigators and
advocates fail to emerge or effectively champion the cause.

But, I stand by my conviction that an AI-driven utopia is not just an
optimistic possibility, but a very achievable probability with the
right societal choices and technological advancements. The key is
to harness AI's potential responsibly and ensure its benefits are
distributed equitably across society. As the contours of the AI
landscape continue to evolve, it appears likely that there won’t be
one dominant corporation purveying AI and gatekeeping its
benefits.

The fear of AI power consolidating into the hands of a few is
unlikely given how accessible and user-friendly AI tools have
become. Unlike industries where expertise and capital create high
barriers to entry, AI development is becoming more democratized,
allowing individuals and small teams to build, train, and deploy AI
systems with minimal resources. Today, many cloud services
provide the infrastructure needed to train AI models at scale
without requiring specialized hardware or huge financial
investment. And new research from small players is focusing on
radically different approaches to developing AI than today's LLMs.
The path of optimal development is not yet clear to me. Likely,
many of these will be complementary to each other.

In addition, low-code, no-code, and natural-language platforms
are making it easier than ever for people without deep technical
expertise to create and deploy AI solutions. From chatbots to
machine learning models, these platforms abstract much of the
complexity, making it possible for an average person to develop
AI applications in a fraction of the time it would have taken just a
few years ago. AI-based APIs allow anyone with a basic
understanding of programming to integrate powerful AI into their
apps, tools, and workflows with minimal effort.

As the tools and resources for AI development continue to
become more accessible, the idea of a single company or entity
monopolizing AI becomes less feasible. Instead, we are heading
towards a future where AI development is open to
everyone—from individual entrepreneurs to local
businesses—allowing innovation to flourish from the ground up.
This decentralized model of innovation will help ensure that AI
remains a tool for the many, not the few.

4. New economics in an AI world


In the next five years, life may not feel dramatically
different—changes will be incremental and familiar. However,
between 10 and 20 years from now, we will witness an
acceleration of dramatic transformations reshaping society. I
define utopia as an abundance of goods, services, freedoms, and
a creative explosion, and it lies perhaps 25 years ahead. While it's
still on the horizon, this era of unprecedented prosperity is visible
today, though the rate of transformation is not. And the
intermediate disruptions to societal structure are hard to predict.

A. Capitalism and democracy in the AI era


Western capitalism exists by permission of democracy and, in
many cases, optimizes for economic efficiency and distribution of
outputs based on individual contributions. Capitalism has
delivered economic growth, but in the age of AI we should not
focus on efficiency alone; reduced income disparity should
become an equally important objective, given the role of equality
in human happiness.

Capitalism may need to evolve in the face of AI-driven changes.
The diminishing need for economic efficiency gives us room to
prioritize empathetic capitalism and economic equality alongside
efficiency. Disparity beyond a certain point will lead to social
unrest, so we must enact policy with this in mind and share the
abundance AI creates more equally, despite the job losses. I grew
up a fan of some inequality (read “incentives to work harder”)
provided there were great opportunities for social mobility.
Further, capitalism today has strayed to some new form whereby
demand generation efforts in certain sectors (read advertising and
equivalents) exceed economic efficiency benefits, making us want
things we did not know we wanted. This is not additive to societal
well-being. We’re at a point where an improvement to our current
capitalistic system would be net positive. Unironically, those
societies which choose to embrace this technology (and not all
will equally) to the fullest will have a much greater capacity to
practice empathetic capitalism, by dint of the abundance AI will
unlock for them.

Let’s not slow down the hand of the market or technological
progress, but rather recognize that human labor may be devalued
in many instances, putting downward pressure on wages of both
lower-skilled and higher-skilled workers. With less need
for human labor, expertise, and judgment, labor will be devalued
relative to capital and even more so relative to ideas and machine
learning technology. History has shown us, from the advent of
agriculture to the Industrial Revolution and the introduction of
electricity, that technological innovation causes massive
disruptions. For example, agriculture led to the rise of cities and
civilizations, and the Industrial Revolution, while displacing jobs,
ultimately improved living standards and created new forms of
employment. The Luddites, resisting the mechanization of their
craft, were the technophobes of the 19th century—much like
today’s critics who resist AI. But just as history proved the
Luddites wrong, today’s technophobes cannot stop the forward
march of progress.

In an era of abundance and increasing income disparity (as I
speculated in 2014 in my essay on AI), we may need this new
version of capitalism, one that prioritizes ameliorating the less
desirable side effects of capitalism. The
genie isn’t going back into the bottle. Technology is coming. The
best we can do is be clear-eyed about that and manage the
transition with the least collateral (short-term) damage, just as
past societies learned to navigate through disruptions caused by
technological shocks.

B. Wage compression and job disruptions alongside increased productivity


AI's leveling of skill differences could compress wages both
individually and across various job functions, and value creation
may shift to creativity, innovation, capital or AI ownership,
potentially leading to different economic inequalities. While
productivity gains have historically led to higher wages and
increased consumer spending by increasing demand for the
higher productivity resource, be they engineers, salespeople, or
farm workers, such may not be the case with AI technologies
given their capacity to disintermediate humans from what I predict
to be 80% of the work in 80% of jobs in the next 25 years. There
may be a discontinuity in productivity.

We cannot simply extrapolate past economic history, which
preaches that in each technology revolution, new job
opportunities have outpaced the losses. As someone said, “when
the train of history hits a curve, the intellectuals fall off.” I contend
that this time could be different given that now, the basic drivers of
job creation will change with a technology that does not only
augment human capabilities, but also may surpass them
altogether, leaving education and upskilling – a historical avenue
for retraining and relevance – somewhat impotent. We have seen
large transitions before but never this quickly, making adjustment
a much harder problem. In 1900, 40+% of the U.S. labor force
was employed in agriculture, making it the largest employment
sector. By 1970, it was 4%. But that took three generations. This
AI cycle will be much faster. And hence more disruptive and
uncomfortable.

Artificial intelligence will likely lead to seismic changes to the
workforce as the tsunami of increasing AI capabilities comes at
us, eliminating many professions and requiring a societal rethink
of how people spend their time. Professions have been eliminated
before but never this fast. Changes could hit some people in the
economy more seriously than others, especially in the next
decade or two, even if society as a whole improves. This will likely
be a hard sell for the most affected people. The 10-25 year
transition could be very messy. But that is certainly no reason to
act from a place of fear and ultimately fail to reap the benefits of a
world freed from the constraints of work and with greater access
to the resources currently enjoyed by so few. It will be time to
seriously consider ways of taking care of those affected.

AI forms of expertise (i.e., AI doctors, lawyers, accountants,
financiers, administrators, teachers) can serve orders of
magnitude more expert capacity where needed, increasing not
only access but also quality, while leading to human job loss for
those previously in such positions. AI-powered robotics can and
likely will do the same for manual labor-intensive jobs, maybe
5-10 years behind AI software. And AI tools will increase output
and productivity, often as interns to humans in critical functions,
such that fewer humans are necessary until, likely in the next
decade, the tools autonomously take over those job functions.

So let’s continue this thought experiment around wage
compression and job disruption, using the aggregate cost of
physician salaries in the U.S. healthcare system as a starting
point. It is north of $300 billion, likely closer to $400 billion
(take 1mn doctors each making $300-400k). Predicting the fate of
the $300–$400 billion spent annually on U.S. physician salaries
hinges on supply and demand elasticities in healthcare. Suppose
medical expertise costs drop by 90% due to AI automation of
medical expertise and administrative workflows. Will consumption
increase tenfold to keep the ~$350B spend constant? Hard to
know. People won't start breaking more legs just because
orthopedic care is cheaper, but Jevons Paradox could apply:
consumption of preventive care, chronic care, mental healthcare,
elective procedures, and any other medical vertical where
demand currently outstrips supply could increase once barriers to
access fall.
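The arithmetic behind this thought experiment is simple enough to sketch. The sketch below uses the essay's rough figures; the "demand merely doubles" case is a hypothetical elasticity assumption of mine, not a forecast.

```python
# Back-of-the-envelope sketch of the physician-spend thought experiment.
# All figures are the essay's rough estimates; the demand multiple is a
# hypothetical assumption, not a prediction.

doctors = 1_000_000          # ~1mn U.S. physicians
avg_salary = 350_000         # midpoint of the $300-400k range
current_spend = doctors * avg_salary
print(f"Current spend: ${current_spend / 1e9:.0f}B")

cost_drop = 0.90             # assumed 90% cost reduction from AI automation
new_unit_cost = 1 - cost_drop

# Consumption growth needed to hold total spend constant:
required_multiple = 1 / new_unit_cost
print(f"Consumption must rise ~{required_multiple:.0f}x to keep spend flat")

# If demand merely doubles (a hypothetical elasticity), spend collapses:
demand_multiple = 2.0
new_spend = current_spend * new_unit_cost * demand_multiple
print(f"Spend if demand doubles: ${new_spend / 1e9:.0f}B")
```

Whether real demand lands nearer the 2x case or the 10x case is exactly the sector-specific elasticity question at issue here.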

AI will hyper-personalize and possibly commodify high-quality
entertainment and media, and any art form will vie for the same
24 hours of user attention each day. Diversity and quality of media
will likely expand dramatically; will consumer spending also
increase? In other areas like accounting even if services become
cheaper through automation, a company won't require ten times
more audits. The demand is bounded by regulatory needs, not
cost.

Each sector will find its own equilibrium between supply, demand,
and elasticity, making precise predictions difficult without a
nuanced, sector-specific analysis for which, today, we have
insufficient data. In the fullness of time, the new AI economy will
find an equilibrium once demand hits the asymptote of total
consumption and time in each sector. This applies across all
verticals. Keep in mind that our humanness may prefer
“human-centered” or “human-made” work product over technically
superior AI-produced work. We already often prefer “handmade.”

Productivity will favor increasing average incomes but job
displacement will do the reverse, even as the goods and services
produced increase dramatically. Net-net, will income disparity
increase? Much of this will depend upon the policy approach of
our elected officials and their willingness to tackle the severe
redistribution problem pure capitalism may create. Universal basic
income may be the best equalizer. It should be our goal to ensure
that median income rises along with average income.

C. Deflation and the need for new economic measures


Increased productivity with fewer inputs (i.e., lower or near zero
labor costs – think near zero computation costs and much
cheaper bipedal and other robots) and increased competition
(technical expertise more equally available to many) can trigger
deflation, along with the described job loss. These new dynamics
can increase hiring to leverage the lower effective manpower cost
in spending-limited companies for some time but eventually
supply will exceed demand in most sectors. Beyond labor and
expertise, as we use AI for resource discovery and material
supplies become abundant, costs for physical inputs may also
decrease, adding to this deflationary pressure. Of course, there are
additional nuances such as consumer behavior, business
investment decisions, and central bank responses. However,
given AI’s likeliness to touch every vertical of GDP, albeit in
different time frames, it would be hard to overstate the impact it
could have on our economy as a whole, and I doubt monetary
policy will be as strong a lever in this new age as it historically
has been. Monetary policy has been built for, and refined to
effect, incremental changes to the economy. Marginal economic
changes driving marginal behavior changes may no longer apply:
the response to a wind is different than to a typhoon, to a wave
different than to a tsunami.

Historically, ‘deflation’ has a negative valence because chronically
falling prices typically lead to decreased profitability for companies
and stagnant or even shrinking economic growth. AI-led
deflationary growth by contrast will likely be concomitant with
increased consumption of goods and services (i.e., effectively
increased consumer spending power) for all the reasons outlined
above if mechanisms like universal basic income (UBI) are
adopted. In the extreme case, AI replaces most jobs, UBI supplies
income (spending quota?), and most goods and services decline
in price. Can universal basic income become a source of
increasing equality in society? Physical inputs like steel, cement
and copper may be the only real constraints but AI led discovery
of new plentiful resources will likely significantly increase supply
of these physical inputs. Is it necessarily bad if the number of
goods and services consumed by citizens increases dramatically
yet spending decreases? Our vocabulary today equates GDP
growth and corporate profits to prosperity; this is a bug of our
current lexicon. GDP measures over the next decades will be a
distortion of prosperity in a deflationary, AI world whereby GDP
could conceivably decrease but overall wellbeing and
consumption of goods and services increases.
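The claim that measured GDP could fall while well-being rises can be made concrete with a toy calculation; the 50% price drop and 80% consumption increase below are illustrative assumptions of mine, not predictions.

```python
# Toy illustration: nominal GDP can shrink while real consumption grows.
# Price and quantity ratios are hypothetical, chosen only to illustrate
# the deflationary-abundance point.

price_ratio = 0.5      # prices halve under AI-driven deflation
quantity_ratio = 1.8   # people consume 80% more goods and services

nominal_gdp_ratio = price_ratio * quantity_ratio    # what GDP accounting sees
real_consumption_ratio = quantity_ratio             # what people actually get

print(f"Nominal GDP falls to {nominal_gdp_ratio:.0%} of today's level")
print(f"Real consumption rises to {real_consumption_ratio:.0%} of today's level")
```

In this toy case measured GDP shrinks 10% even as consumption rises 80%, which is precisely the distortion in our current lexicon.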

From experience we know that the cost of labor or the cost of
capital can be effectively altered by simple changes in rules,
regulations, laws, and tax strategies like the capital gains tax or
MLPs (master limited partnerships); many of these biases have
been engineered into today’s seemingly neutral capitalist
economy. More, and more significant, manipulation will be
needed to achieve reasonable income
disparity goals. Income or social mobility is an even harder goal to
engineer into society’s “rules” though AI could be the great
equalizer of knowledge and expertise. I suspect the situation will
become even more complex as traditional economic arguments of
labor versus capital are upended by a new factor many
economists don't adequately credit—the economy of ideas driven
by entrepreneurial energy and knowledge. This last factor may
become a more important driver of the economy than either labor
or capital. Some factors of production, like physical resources
(lithium or copper or steel) may take much longer to adjust to the
changes than others.

D. Policy choices

This new quantum jump in technology capabilities, left to its own
natural adjustment mechanisms we use currently in our capitalist
system, will likely lead to increasing income disparity and
abundance at the same time. It is possible that this time the
technology evolution really is different, because for the first time, it
is not about productivity enhancement but rather exceeding
human intelligence and capability. There is a discontinuity in the
switch from productivity enhancement of humans to substantial
replacement. If this scenario comes to fruition, we will need to
make structural changes in our social and political systems to
optimize for fairness or whatever we determine are our society’s
goals. Democratic processes are ideal for this decision making,
especially since not everyone will be needed to pursue the same
goals.

We face choices: accelerate, slow down, or moderate the
adoption of disruptive technologies, and decide whether to
compensate those displaced, for instance, through economic
support. The dynamics of change can be painful for those who
are disrupted, and to effectively embrace AI and all its positives,
keeping those who are displaced at the center of national policy’s
efforts will be key. Economic policy will need to include not just
economic growth tuning, but also bear in mind the levers and
mitigators of disparity and social mobility. As an unapologetic
capitalist and technology optimist, I advocate for the continued
rapid support and deployment of AI systems. We should not slow
down technological progress but rather adapt to the changes it
brings, including the potential devaluation of human labor and
expertise. These changes pose significant challenges, but they
also offer an opportunity to create, in the 25+ year window, a
more empathetic society and a post-resource-constrained world.
We must be thoughtful about the society we live in and the future
we create, and craft policy much more empathetically. This is a
luxury that has been unaffordable in the past but may now be
ours to use.
Structural changes at the national (and international) level will
probably be necessary over the long term to address the larger
side effects of technology exceeding human capability. In a global
context where countries take different approaches to adopting AI,
dramatic shifts in relative economic wellbeing and power are
likely.

As AI reduces the need for human labor, UBI could become
crucial, with governments playing a key role in regulating AI’s
impact and ensuring equitable wealth distribution.

Given the massive productivity gains on the horizon, and the
potential for annual GDP growth to increase from 2% to 4-6% (or
much higher) over the next 50 years, per capita GDP could hit
~$1M (assuming 5% annual growth for 50 years). A
deflationary enough economy makes current nominal dollars go
much further and I suspect current measures of GDP will be poor
measures of economic well being.
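The ~$1M figure is straightforward compounding; the ~$80,000 starting point below is my assumption for current U.S. per-capita GDP, not a number from the text.

```python
# Compounding check behind the ~$1M per-capita GDP figure.
# Starting value is an assumed current U.S. per-capita GDP (~$80k).

start = 80_000     # assumed per-capita GDP today, USD
growth = 1.05      # 5% annual growth
years = 50

future = start * growth ** years
print(f"Per-capita GDP after {years} years: ${future:,.0f}")  # on the order of $1M
```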

In this world, I believe there will be sufficient resources and
abundance to afford UBI. Today, UBI might seem impractical due
to economic constraints, and indeed, ignoring spending
constraints has led to disasters in countries like Argentina and
Venezuela. But those constraints will gradually become less
binding.
The abundance and wealth in this future society will create a
generosity and greater willingness to redistribute, as this would
create a better/safer society for all.

A word of caution is necessary in recommending any specific
solutions or premature action at a national scale that may be
drastic or irreversible, because the dynamics of this change and
the timing of technology breakthroughs are highly unpredictable.
Debate and discussion are definitely called for.
Point solutions for those hurt by the increasing income disparity
need to be found. We must watch changes closely and make
continued small policy changes this next decade. Even with
strong AI technology advancement, the actual impact and
adoption may be substantially slower given deliberate adoption
lags and natural human resistance to change.

E. Imagining a consumer utopia


An interesting parallel is China whose entry into the World Trade
Organization (WTO) in 2001 indeed created deflationary
pressures on the United States in the years that followed. This
was largely due to several factors related to trade liberalization
and increased competition from Chinese exports. The movement
of labor overseas has resulted in a loss of tens of millions of
stateside manufacturing jobs, yet little policy was centered around
upskilling or taking care of those whose livelihoods were
upended. With AI, we have the opportunity to free ourselves from
dependence on low-cost labor in countries like China by
repatriating manufacturing stateside without increasing the cost
of goods, using AI-assisted manufacturing insourcing to counter
the decline in AI-displaced jobs. China’s deflationary
influence came with reduced consumer spending power in the US
as jobs moved overseas. AI-led deflationary growth by contrast
will likely be concomitant with increased consumption of goods
and services (i.e., effectively increased consumer spending
power) for all the reasons outlined above. The dynamics of this
change will be hard to predict.

I can imagine a consumer utopia in 25+ years, where we’re not
supply constrained in most areas and deflation is actually a
positive tailwind for access and more equal consumption. Imagine
a world in which housing, energy, healthcare, food, and
transportation are all delivered to your door, for near-free, by
machines; few jobs in those fields remain. What would be the key
characteristics of that world, and what would it be like to live in it?
For starters, it’s a consumer utopia. Everyone enjoys a standard
of living that kings and popes could have only dreamed of. I
suspect the cost of living at a given standard in our future
utopian society will further decline, so that an individual who
earns $40,000 annually could buy substantially more than
someone making $400,000 (as a guesstimate) annually can buy
today.
Happily, technology will be even more deflationary for goods and
services than outsourcing to China has been over the last decade
or two. But my real hope would be with the abundance of goods
and services our citizens start to focus on what gives them more
happiness instead of more consumption and consumption
becomes less of a status symbol.

5. We can build the future we want


The future that happens will be the one toward which we, as a
society, decide to guide this powerful tool. That will be a series of
technology-enabled policy choices, not technology choices, and
will vary by country. Some will take advantage of it and some will
not. What should be an individual choice versus a societal one?
Once our basic needs are taken care of, all human time, labor,
energy, ambition, and goals can reorient to the intangibles: the
big questions, the deep needs. Human nature expresses itself
fully, for the first time in history. Without the constraints of physical
needs, we will be whoever we want to be. The increase in GDP
will usher us into a ‘post-scarcity’ society where our basic
relationship with work must be redefined. Traditional GDP
measures will become an increasingly poor measure of human
progress. And there will be great path dependence based on the
policy and societal choices we make.

Most importantly though, the grand ambition of imparting the rich
lifestyle enjoyed by only 700 million (~10%) people to all 7-8
billion global citizens is finally within arm’s reach. It would be
patently impossible to scale the energy, resources, healthcare,
transportation, enterprise, and professional services 10x without
AI. That is the necessary force multiplier and the only tool capable
of scaling what the most fortunate currently enjoy. AI is necessary,
but it is not sufficient. Policy that creates fertile conditions for the
concomitant social, political, and economic transitions is required,
along with energy and other innovations AI will likely enable.

AI is a powerful tool which, like any previous powerful technology
such as nuclear or biotechnology, can be used for good or bad. It
is imperative that we choose carefully and use it to construct that
“possible” world guided by societal choices, and that we not
forsake the benefits out of fear of the unknown.

I am a technology possibilist, a techno-optimist, but for technology
used with care and caring. As we say, “no wine before its time”:
there is a need for regulation, but no regulation before its time.
Reflecting on my words in a New York Times interview in 2000,
we will need to redefine what it means to be human. This new
definition should focus not on the need for work or productivity but
on passions, imagination, and relationships, allowing for individual
interpretations of humanity.
