January has been notable for the number of important announcements in AI. For me, two stand out: the US government's support for the Stargate Project, a massive data center that will cost $500 billion, with investments coming from Oracle, SoftBank, and OpenAI; and DeepSeek's release of its R1 reasoning model, trained at an estimated cost of roughly $5 million, a large sum but a fraction of what it cost OpenAI to train its o1 models.
US culture has long assumed that bigger is better, and that more expensive is better. That's certainly part of the thinking behind the most expensive data center ever conceived. But we have to ask a very different question: if DeepSeek was indeed trained for roughly a tenth of what it cost to train o1, and if inference (generating answers) on DeepSeek costs roughly one-thirtieth of what it costs on o1 ($2.19 per million output tokens versus $60 per million output tokens), is the US technology sector headed in the right direction?
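As a back-of-the-envelope check on those ratios, here's a minimal sketch in Python using the per-token prices quoted above. The o1 training figure is an assumption implied by the "roughly a tenth" claim, not a published number:

```python
# Rough cost comparison of DeepSeek R1 vs. OpenAI o1, using the
# estimates cited in this piece. The o1 training cost is an assumed
# figure (10x R1's), inferred from the "roughly a tenth" claim.

deepseek_inference = 2.19   # dollars per million output tokens
o1_inference = 60.00        # dollars per million output tokens

deepseek_training = 5e6     # estimated R1 training cost, in dollars
o1_training = 50e6          # assumed o1 training cost, in dollars

print(f"Inference: o1 costs {o1_inference / deepseek_inference:.1f}x more")  # ~27.4x
print(f"Training:  o1 costs {o1_training / deepseek_training:.1f}x more")    # ~10.0x
```

The inference ratio works out to about 27x, which is where the "roughly one-thirtieth" figure comes from.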
It clearly isn't. Our "bigger is better" mentality is failing us.
I've long believed that the key to AI's success would be minimizing the cost of training and inference. I don't believe there's really a race between the US and Chinese AI communities. But if we accept that metaphor, the US (and OpenAI in particular) is clearly behind. And a half-trillion-dollar data center is part of the problem, not the solution. Better engineering beats "supersize it." Technologists in the US need to learn that lesson.