The arrival of DeepSeek blows up delusions of AI tech wealth.

Hands up! Who remembers the dot-com bust? I do. It was March 10, 2000. The NASDAQ Composite index peaked at 5,048.62 points. That doesn’t sound like much today, but it was up 800% from 1995. Then people started to get nervous about the true value of the dot-com businesses driving the market, interest rates started to rise, and by October 2002, the NASDAQ had fallen 78% from its peak.

We’re not even a month into 2025, and this week the new Chinese AI program DeepSeek sparked a $465 billion rout in Nvidia stock, the biggest single-day loss of market value by any company in US stock market history. This is just the beginning.

It’s not that DeepSeek is so much better than OpenAI’s ChatGPT or any of the other popular generative AI (genAI) tools. Maybe it is, maybe it isn’t. (Just don’t ask it about what happened at Tiananmen Square in 1989.) No, what matters is that DeepSeek requires an order of magnitude less computing power to achieve similar results.

By programmer Simon Willison’s count, “DeepSeek v3 trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000.” For comparison, Meta AI’s Llama 3.1 405B (a smaller model than DeepSeek v3, which has 685 billion parameters) trained on 11x that; it required more than 30.8 million GPU hours and was trained on 15 trillion tokens of data.

What does all that mean? As Larry Dignan, editor-in-chief of Constellation Insights, explained, “You can get API access to DeepSeek’s R1 model for 14 cents for a million tokens compared to OpenAI’s $7.50.” In other words, “large language model (LLM) pricing is going to collapse.” “The scary part is the LLM giants didn’t have profit margins to begin with,” he continued. “LLMs are going to become a commodity in a hurry.”

That’s bad news for companies such as OpenAI. Sure, the company’s market value stands at $157 billion, but it’s still losing billions of dollars. If OpenAI’s customers decide they don’t want to pay its rates when they can get much the same service for a fraction of the cost from DeepSeek, where does that leave OpenAI?

What happens then? Look at it this way: today’s overheated stock market is driven by the Magnificent Seven, companies that, with one exception (Tesla), have one thing in common: they’re all heavily invested in AI hype. Even Tesla is heavily invested in what Elon Musk calls real-world AI.

It’s not just the Magnificent Seven; pick-and-shovel AI companies such as Oracle, Super Micro, and Nebius (formerly Yandex) are also in a world of trouble. If anyone can build LLMs without a ton of money, why exactly is Nvidia worth trillions? Couldn’t someone disrupt Microsoft, Meta, or Google? Indeed, Perplexity’s AI chatbot is already better than Google at search. It’s not just people like me, for whom search is part of what we do for a living; businesses are also turning from Google to Perplexity.

Bratin Saha, DigitalOcean’s chief product and technology officer, sees the potential for a relatively small company to become a big player. “DeepSeek AI is the ‘Android moment for AI’ (given that ChatGPT was likened to the ‘iPhone moment for AI’), as it shows groundbreaking AI products from the open-source community,” Saha told me. “DeepSeek AI democratizes AI and cloud computing because it shows you do not need multi-billion dollar investments for compelling innovation,” he said. “It lowers the barrier for small and medium enterprises and individual developers to work with AI.”

Exactly.
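To put the numbers quoted above in perspective, here is a quick back-of-the-envelope sketch using only the figures Willison and Dignan cite. The 500-million-token monthly workload is a hypothetical assumption for illustration, not anything published by DeepSeek or OpenAI.

```python
# Back-of-the-envelope math using the figures quoted above.
# The monthly token volume is a hypothetical assumption, chosen only for illustration.

DEEPSEEK_R1_PRICE_PER_M_TOKENS = 0.14   # USD per million tokens, as quoted by Dignan
OPENAI_PRICE_PER_M_TOKENS = 7.50        # USD per million tokens, as quoted by Dignan

DEEPSEEK_V3_GPU_HOURS = 2_788_000       # H800 GPU hours, per Simon Willison's count
LLAMA_31_405B_GPU_HOURS = 30_840_000    # roughly 11x DeepSeek v3, per the same comparison

monthly_tokens_millions = 500           # hypothetical workload: 500 million tokens a month

deepseek_bill = monthly_tokens_millions * DEEPSEEK_R1_PRICE_PER_M_TOKENS
openai_bill = monthly_tokens_millions * OPENAI_PRICE_PER_M_TOKENS

print(f"DeepSeek R1 API bill: ${deepseek_bill:,.2f}/month")
print(f"OpenAI API bill:      ${openai_bill:,.2f}/month")
print(f"Price ratio:          {OPENAI_PRICE_PER_M_TOKENS / DEEPSEEK_R1_PRICE_PER_M_TOKENS:.1f}x")
print(f"Training GPU-hours:   {LLAMA_31_405B_GPU_HOURS / DEEPSEEK_V3_GPU_HOURS:.1f}x (Llama 3.1 405B vs. DeepSeek v3)")
```

At those quoted rates, the same workload costs more than 50 times as much on OpenAI’s API, which is exactly the gap Dignan expects to collapse LLM pricing.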
The way Stephen O’Grady, co-founder and industry analyst with RedMonk, sees it, enterprises have two major AI concerns: how trustworthy the technology is and how much it costs. As O’Grady points out, “Enterprises have been shocked, in many cases, at the unexpected costs – and unclear returns – from some scale investments in AI.” He’s got that right.

O’Grady continued, noting that DeepSeek challenges some core assumptions held by enterprises: What if they don’t have to rely on closed, private models for leading-edge capabilities? What if training costs could be reduced by an order of magnitude or more? What if they don’t require expensive, state-of-the-art hardware to run their models?

This changes everything for businesses that want to use AI and for the multi-billion-dollar companies that want to sell AI to you. It’s a sea change.

“What the markets reacted to was DeepSeek’s ability to build a model that rivaled OpenAI’s sophisticated o1 reasoning model and Anthropic’s Sonnet model for pennies on the dollar on a fraction of the compute,” Jim Zemlin, executive director of the Linux Foundation, wrote. “It also appears that DeepSeek did this using well-known techniques. There were no massive algorithmic breakthroughs, just very clever engineering.”

DeepSeek, he noted, uses open-source software. Yes, technically, part of it isn’t open source, but one of DeepSeek’s models is under the MIT License. As Stefano Maffulli, the Open Source Initiative (OSI) executive director, pointed out, DeepSeek’s models still fall short of being truly open.

Still, “a small team in China took a fresh look at a problem and came up with a novel approach that reduced the cost of chain-of-thought reasoning by 50x (if DeepSeek’s postings are accurate) and then published a paper fully describing their process, allowing the community to benefit from their learnings,” Zemlin said. “We need MORE of this progress, not less. … This is a struggle over open markets between the forces of open and the forces of closed.”

That may be good for open source and for AI, but for our stock market, which is largely driven by AI, it’s another matter. The Magnificent Seven accounted for over 50% of the S&P 500’s gains in 2024. If they collapse, a good chunk of the market will follow.

I’m not normally a bear, but I’ve seen this rodeo before. Oh, AI will survive, and eventually, it will prove useful and profitable. But in the meantime, I can see an AI crash coming, and it won’t be pretty.