
MoneyGPT: AI and the Threat to the Global Economy
Ebook · 273 pages · 3 hours


About this ebook

“Essential reading” – Forbes

From the New York Times bestselling author of The New Great Depression and Currency Wars, a telling prediction for how AI will endanger global economic markets and security


In November 2022, OpenAI released ChatGPT, a chatbot built on its GPT-3.5 model, to the public. In just two months it claimed 100 million users, the fastest any app had ever reached this benchmark. Since then, AI has become an all-consuming topic, popping up on the news, in ads, on your messenger apps, and in conversations with friends and family. But as AI becomes ubiquitous and grows at an ever-increasing pace, what does it mean for the financial markets?

In MoneyGPT, Wall Street veteran and former advisor to the Department of Defense James Rickards paints a comprehensive picture of the danger AI poses to the global financial order, and the insidious ways in which AI will threaten national security. Rickards shows how, while AI is touted to increase efficiency and lower costs, its global implementation in the financial world will actually cause chaos, as selling begets selling and bank runs happen at lightning speed. AI further benefits malicious actors, Rickards argues, because without human empathy or instinct to intervene, threats like total nuclear war that once felt extreme are now more likely. And throughout all this, we must remain vigilant on the question of whose values will be promoted in the age of AI. As Rickards predicts, these systems will fail when we rely on them the most.

MoneyGPT shows that the danger is not that AI will malfunction, but that it will function exactly as intended. The peril is not in the algorithms, but in ourselves. And it’s up to us to intervene with old-fashioned human logic and common sense before it’s too late.
LanguageEnglish
Release dateNov 12, 2024
ISBN9780593718643


    Book preview


    Also by James Rickards

    Currency Wars

    The Death of Money

    The New Case for Gold

    The Road to Ruin

    Aftermath

    The New Great Depression

    Sold Out


    Portfolio / Penguin

    An imprint of Penguin Random House LLC

    penguinrandomhouse.com


    Copyright © 2024 by James Rickards

    Penguin Random House values and supports copyright. Copyright fuels creativity, encourages diverse voices, promotes free speech, and creates a vibrant culture. Thank you for buying an authorized edition of this book and for complying with copyright laws by not reproducing, scanning, or distributing any part of it in any form without permission. You are supporting writers and allowing Penguin Random House to continue to publish books for every reader. Please note that no part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems.

    Library of Congress Cataloging-in-Publication Data

    Names: Rickards, James, author.

    Title: MoneyGPT : AI and the threat to the global economy / James Rickards.

    Description: [New York] : Portfolio/Penguin, [2024] | Includes bibliographical references.

    Identifiers: LCCN 2024018674 (print) | LCCN 2024018675 (ebook) | ISBN 9780593718636 (hardcover) | ISBN 9798217043767 (international edition) | ISBN 9780593718643 (ebook)

    Subjects: LCSH: Artificial intelligence—Economic aspects. | Finance—Technological innovations. | National security—Economic aspects.

    Classification: LCC HC79.I55 R53 2024 (print) | LCC HC79.I55 (ebook) | DDC 658/.0563—dc23/eng/20240903

    LC record available at https://lccn.loc.gov/2024018674

    LC ebook record available at https://lccn.loc.gov/2024018675

    Ebook ISBN 9780593718643

    Cover design: Jennifer Heuer

    Cover image: W6 / iStock / Getty Images Plus

    Book design by Tanya Maiboroda, adapted for ebook by Cora Wigen


    Contents

    Dedication

    Epigraph

    Introduction

    1. The End of Markets

    2. Banking Mythos

    3. Moneyness

    4. National Insecurity

    5. Future Failure

    Conclusion

    Acknowledgments

    Notes

    Selected Sources

    Index

    About the Author


    To the memory of my parents,

    Richard and Sarah,

    with love, and a debt I can never repay

    In the evening you say, Tomorrow will be fair, for the sky is red; and, in the morning, Today will be stormy, for the sky is red and threatening. You know how to judge the appearance of the sky, but you cannot judge the signs of the times.

    —Matthew 16:2–3

    Introduction

    Knowledge in the form of an informational commodity indispensable to productive power is already, and will continue to be, a major—perhaps the major—stake in the worldwide competition for power. It is conceivable that nation-states will one day fight for control of information, just as they battled in the past for control over territory…. A new field is opened for industrial and commercial strategies on the one hand, and political and military strategies on the other.

    —Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge (1979)[1]

    Artificial intelligence, AI, has been developing since the 1950s, albeit with ancient antecedents and fictional forerunners such as Mary Shelley’s Frankenstein. Yet GPT (generative pre-trained transformer) technology is genuinely new. It quietly emerged over the course of 2017–22 in versions such as GPT-2 and GPT-3 from OpenAI. Then, like a supernova, it burst upon the scene on November 30, 2022, when OpenAI released ChatGPT, a chatbot available to the public. The chatbot app, initially powered by the GPT-3.5 model, had 100 million users in two months. The next-fastest app to reach 100 million users was TikTok, which took nine months to hit the same goal. Instagram, which had the third-fastest user adoption, took thirty months. ChatGPT is not only a unique technology; it has had a breathtaking reception among users.

    The intellectual breakthrough that opened the door to GPT applications was the 2017 publication of Attention Is All You Need by Ashish Vaswani and collaborators. This paper proposed a new network architecture called the Transformer, which was able to process word associations needed to generate text in a parallel manner rather than using a prior process that relied on recurrent paths through neural networks. In plain English, this means the Transformer looks at many word associations at once instead of one at a time. The word attention in the title refers to the fact that the model can learn from its training materials and produce sensible word sequences on a self-directed basis without rigid rules. The model does more work in less time with the same processing power. The Transformer was then harnessed to preexisting technologies such as natural language processing (NLP), machine learning, and deep learning (using neural network layers that provide input to higher layers). Suddenly, grammatical text generation in reply to prompts was in reach.
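    The parallel processing just described can be illustrated with a minimal NumPy sketch of scaled dot-product attention, the core operation of the Transformer. The toy dimensions and random “token” vectors here are assumptions for demonstration only, not a faithful reproduction of any production model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal self-attention: every token attends to every other token at once."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # pairwise word-association scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                        # weighted mix of all positions

# Four "tokens" embedded in 8 dimensions, processed in parallel.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

    Each row of the output is computed from all input positions at once, which is the “many word associations at once” property the passage describes, in contrast to a recurrent network’s one-step-at-a-time pass.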

    Progress from GPT-1 (2018) to GPT-2 (2019) and GPT-3 (2020) was largely a function of increasing the parameters, the elements that define a system, that the model could use and expanding the training corpora of the large language models (LLMs): the volume of text that the model could train on. GPT-1 had 117 million parameters. GPT-2 had 1.5 billion parameters. GPT-3 had 175 billion parameters. GPT-4 is estimated to have 1.7 trillion parameters, making it roughly ten times larger than GPT-3. Alongside this exponential expansion of parameters came an equally large expansion in the volume of training materials. GPT-3 and GPT-4 had access to the entire internet (about 45 terabytes in a 2019 sample) through a dataset collected by an organization called Common Crawl. The volume was so large it had to be pared back to a more useful 570 gigabytes. This scaling of parameters and training materials was accompanied by leaps in the power of graphics processing units (GPUs) such as the Nvidia B200 Blackwell, which can be used for mathematical calculation as well as graphical generation.
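    The scaling arithmetic can be checked directly from the parameter counts cited above (the GPT-4 figure is an unconfirmed public estimate):

```python
# Parameter counts as cited in the text; the GPT-4 figure is an
# unconfirmed public estimate, not an official disclosure.
params = {
    "GPT-1": 117e6,
    "GPT-2": 1.5e9,
    "GPT-3": 175e9,
    "GPT-4": 1.7e12,
}
gens = list(params)
for prev, cur in zip(gens, gens[1:]):
    print(f"{prev} -> {cur}: ~{params[cur] / params[prev]:.0f}x larger")
# GPT-1 -> GPT-2: ~13x larger
# GPT-2 -> GPT-3: ~117x larger
# GPT-3 -> GPT-4: ~10x larger
```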

    Even by Silicon Valley standards the emergence of GPT was extraordinarily fast and powerful. The world of information processing and human-machine interaction has changed before our eyes.

    That said, it’s worth considering whether GPT produces content aligned with the real world. Beyond existing AI tools such as search, spellcheck, and suggested words in a text, does generative AI produce real value?

    In its Summer 2023 edition, Foreign Policy published a fascinating exercise in the capacity of a GPT-4 chatbot (using a premium version called ChatGPT Plus) to write a geopolitical essay on the conflict in Ukraine with specific reference to Russia’s annexation of Crimea.[2] Alongside the ChatGPT essay was another essay on the same topic written by an undergraduate. Both essays were published with the authors’ identities redacted. The exercise was to read both and see if you could spot the computer-generated essay versus the human essay.

    I was able to identify the GPT-4 version (Essay 1) after one sentence without even reading the human version (Essay 2). The reason was the use of an overworked cliché, “In the geopolitical chess game…,” as an introduction to Russia’s action. The sentence also referred to Russia’s move as a “significant shift in power dynamics.” Clichés have their place and I sometimes use them myself, yet two in the first sentence is a dead giveaway that a robot trained on millions of pages of geopolitical text had driven into a literary cul-de-sac. In contrast, the human essay began, “The Russian annexation of Crimea, a formerly Ukrainian peninsula, comprised the largest seizure of foreign land since the end of World War II.” Not exactly electrifying, still it was declarative, factual, and informative. No clichés.

    That said, the robot essay was clearly written, grammatically correct, and informative, although the clichés kept coming, including “paved the way,” “domino effect,” and “power vacuum.” Importantly, the robot essay was logical. It began with Russian aggression in Crimea, continued through the weak international response, suggested that this weak response emboldened Russia to expand the conflict, and concluded that this led eventually to the wider war we see today. The common thread was that each step in the sequence was part of a larger pattern of Russian aggression.

    The human essay had a similar thread, but with a broader perspective and more nuanced analysis. It framed the issue by stating that the Russian annexation of Crimea “defied a universal, international understanding held throughout the latter half of the twentieth century: Independent countries maintain their territorial integrity.” From there, the writer followed the robot version to the effect that the international response was weak, that response encouraged further Russian aggression, and the end result was the full-scale war in Ukraine now underway. The human showed panache by referring to the “salami tactics” of slowly taking the Donbas region in small pieces before launching a full-scale invasion. The human essay was written from a deeper worldview and with more analytical ability, but the robot essay was entirely passable. Grading according to long-gone standards, one might give the robot a C+ and the human a solid B.

    None of those comparative comments is what’s most interesting about the two essays. The most interesting feature is how badly flawed they both are. Neither essay mentions George W. Bush’s 2008 Bucharest declaration that Ukraine and Georgia will become members of NATO. Neither essay mentions Russia’s invasion of Georgia just four months after the Bucharest summit, demonstrating that Bush and NATO had crossed a red line. There was no recognition of the fact that part of Ukraine is east of Moscow, a city that has not been attacked from the east since Genghis Khan. The CIA-aided Maidan revolt in 2014 that deposed a duly elected Ukrainian president was ignored. Those matters aside, the human author does not reconcile her reference to territorial integrity with the U.S. invasion of Iraq in 2003.

    In short, the war in Ukraine has little to do with Russian expansion, geopolitical ambitions, or salami tactics. The war is a response to fifteen years of Western provocation. How could the robot and the student miss the backstory, ignore U.S. provocation, and misinterpret the sources of Russian conduct? For the student, we can fault the mainstream media. For the robot, we can fault what GPT engineers call the training set. These are the written materials the robot scans online and that then populate the deep learning neural networks its algorithms use to generate output. We can excuse the student because she needs more seasoning as an analyst. There’s no need to excuse the robot. It did exactly what it was programmed to do and offered a humanlike essay. The analytic failures in the essay were not due to the robot or the algorithms. They were due to a badly skewed training set based on reporting by The New York Times, The Washington Post, NBC News, the Financial Times, The Economist, and other leading media outlets.

    The Foreign Policy essay comparison shines a bright light on GPT’s real failure. The processing power is immense. The training set materials are voluminous beyond comprehension. The deep learning neural networks are well constructed. The transformer-style parallel processing needs improvement. Still, it will improve because GPT systems have self-learning features. As noted, the robot essay was grammatical and logical. The problem was that the robot trained on a long line of propaganda by Western media. When a robot trains on propaganda, it repeats the propaganda. One should not expect a different result. This is GPT’s true limitation.

    What Tomorrow May Bring

    AI is established and improving rapidly. GPT, a branch of AI, is new yet powerful and readily accessible by those scarcely acquainted with the science behind it. AI robots such as Siri, Alexa, and your car’s navigation system, all with voice recognition software and an ability to speak to you, are already like friends. Meta’s clunky-looking augmented reality and virtual reality headsets are gaining in popularity. Eyeglasses that stream Facebook straight to your retinas from inside the lens are now available. Your oven, dishwasher, and refrigerator all have AI inside to let you know how they feel. GPT sets itself apart because its output is not limited to temperature settings and YouTube streams. It can write grammatically and at great length and is already used for press releases and newsreader scripts.

    This book picks up the challenge of AI/GPT and considers how it impacts two areas of the utmost importance to everyday Americans—finance and national security. Of course, there are countless ways to apply AI/GPT to both capital markets and banking to increase efficiency, improve customer service, and lower costs to financial intermediaries. Hedge funds have already been launched that use GPT to pick stocks and predict exchange rates. We discuss that in chapter 1, and also look at the dangers in robot-versus-robot scenarios when recursive trading develops that crashes markets in ways participants themselves do not understand. We had a foretaste of this on October 19, 1987, when the Dow Jones Industrial Average fell over 20 percent in one day—equivalent to an 8,000-point drop at today’s index level. That was caused by portfolio insurance that required insurance providers to buy put options to hedge against an initial decline. The option sellers then shorted stocks to hedge their position, which caused more declines, which caused more put buying, and so on until markets were in a death spiral. That crash was not nearly as automated as would transpire today; it came before the implementation of AI. Yet, the dynamic of selling causing selling remains and will only be amplified by AI/GPT systems. Exchange circuit breakers offer a time-out but not more. Robots cannot be tamed as easily as humans.
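    The selling-begets-selling dynamic just described can be sketched as a toy feedback loop. All of the parameters here (the initial sell pressure, the feedback multiplier, the cap) are illustrative assumptions, not calibrated to 1987 or to any real market:

```python
# Toy sketch of a sell-begets-sell cascade: each round of forced selling
# pushes the price down, and the drop triggers still more selling.
# Parameters are illustrative assumptions, not real market data.
def cascade(price, sell_pressure=0.02, feedback=1.5, rounds=10):
    path = [price]
    for _ in range(rounds):
        price *= 1 - sell_pressure                          # selling pushes the price down
        sell_pressure = min(0.25, sell_pressure * feedback) # the drop triggers more selling
        path.append(price)
    return path

path = cascade(100.0)
print(f"start {path[0]:.1f}, after {len(path) - 1} rounds {path[-1]:.1f}")
```

    Even with a small initial shock, the compounding feedback produces an accelerating decline, which is the death-spiral shape the 1987 episode illustrates; circuit breakers interrupt the loop but do not remove the feedback.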

    Chapters 2 and 3 move beyond capital markets (stocks, bonds, commodities, foreign exchange) to examine banking (loans, deposits, Eurodollars, and derivatives). Both are prone to panics, but the dynamics are different. Capital market panics happen suddenly and are highly visible. Banking panics build slowly and are mostly invisible to depositors and regulators until a full-blown liquidity crisis emerges. At times the two types of panic converge, as when bank failures lead to stock market sell-offs or vice versa. We explain these differences and show how AI/GPT systems may amplify already precarious structures.

    AI dangers are not limited to unintended consequences. Financial markets are a magnet for criminals and malicious actors out to profit from panics they cause. This can be done by spoon-feeding text to social media, PR wires, and mainstream channels. GPT will devour the texts as it was trained to do. Parameterization will result in overweighting the most recent or most forcefully worded content. Robots will make recommendations based on imitation news, with predictable market results. The malicious actors will be pre-positioned to profit from the immediate market reaction. Still, the process will not stop there. In the robot-versus-robot world, no one has enough information or processing power to predict what happens next.

    The prospect of unintended and intentional chaos in capital markets and banking segues into the national security realm, where the stakes are higher. Both state and non-state actors have greater resources than market manipulators. Malicious intent among state adversaries is a foregone conclusion. Non-state actors have motivations ranging from money to ideology to nihilism. Some non-state actors are thinly veiled proxies for states. The view is inherently opaque. What intelligence professionals call the wilderness of mirrors becomes even more disorienting when seen through smartglasses.

    The evolution from kinetic weapons to financial sanctions as the primary arena of war is well underway. Missiles, mines, and mortars may be on the front lines, but export bans, asset seizures, and secondary boycotts are a critical part of the battlespace. If you can destroy an enemy’s economy without firing a shot, that’s a preferred path for policymakers. Even when weapons are fully deployed, the industrial and financial capacity behind the arsenals is critical in the long run. The linkages between finance on the one hand and national security on the other are dense and can be decisive.

    AI/GPT enters this battlespace on three separate vectors. The first is the application of smart systems to frontline situations involving intelligence, surveillance, targeting, telecommunications, jamming, logistics, weapons design, and other traditional tasks. The second is the use of AI/GPT to optimize financial sanctions by considering second- and third-order effects of oil embargoes, semiconductor export bans, reserve asset freezes, confiscations, insurance prohibitions, and other tools intended to weaken or destroy an adversary’s economic capacity. Finally, AI/GPT can be used offensively not to constrain markets but to annihilate them. The goal would be not to impose costs through sanctions but to destroy markets in ways that would cost citizens trillions of dollars in lost wealth. The impact goes beyond asset values as citizens start to blame their own governments for the financial carnage. We explore this world in chapter 4, with particular focus on the dangers of nuclear war resulting from AI.

    In chapter 5 we consider the most fraught aspects of AI/GPT: censorship, bias, and confabulation, the generation of content for user convenience that is invented from whole cloth. This flaw runs deeper than what observers call AI hallucinations. Real hallucinations are complex and creative. A better metaphor for what happens in AI/GPT is confabulation, a symptom seen in certain psychiatric and neurological disorders. In confabulation, the speaker utters an invented story with a narcissistic bent and limited relevance. It’s a kind of mental module that can be trotted out (and is, repeatedly) when the speaker is flustered, confronted, or otherwise at a loss for words. It’s not the same as a lie, because the speaker doesn’t know he’s lying due to a lack of self-awareness.

    GPT will do the same in the course of writing
