If you want to understand global work in 2025, start where very few people actually look: the chips. Stephen Witt’s book, The Thinking Machine: Jensen Huang, Nvidia, and the World’s Most Coveted Microchip, which I just finished reading, is part biography, part business history, part supply-chain thriller. It traces how Nvidia—founded in 1993, reportedly brainstormed at a Denny’s—went from a gaming-chip upstart to the hardware backbone of AI, and how Jensen Huang’s strategic bets (notably CUDA and parallel processing) turned GPUs into essential infrastructure. The result is a vivid portrait of how a single firm’s choices can reshape global industries, labor markets, and even geopolitics.
Witt’s story lands in a moment when Nvidia has, at times, eclipsed Microsoft and Apple to become the world’s most valuable public company—a symbol of how central AI hardware has become to the modern economy. That ascent, turbocharged in June 2024, gives the book a live-wire relevance: this isn’t a post-mortem; it’s a field report from inside an ongoing transformation.
What the book does well
- It demystifies the hardware behind the magic. Witt shows how a choice most non-engineers have never heard of—Nvidia’s CUDA software stack—made GPUs programmable for everything from physics simulations to transformer models, and in doing so locked in a developer ecosystem that compounds with every model trained. In other words, the moat isn’t just silicon—it’s the tools, talent, and code built atop it. (For the curious, a minimal sketch of what that programmability looks like follows this list.)
- It puts a face (and temperament) on strategy. The book’s portrait of Huang—brilliant, demanding, sometimes volcanic—anchors an argument about leadership in frontier markets: conviction plus timing beats incrementalism. But it also hints at the costs and responsibilities of charismatic leadership when your product becomes essential infrastructure.
- It makes the invisible visible. Chips are everywhere and noticed nowhere. Witt’s narrative surfaces the quiet dependencies—foundry capacity, lithography bottlenecks, export controls—that determine which countries and companies can build “thinking machines” at scale.
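To make that programmability concrete, here is a minimal, self-contained sketch: the classic SAXPY kernel (y = a·x + y) that most CUDA tutorials open with, not code from the book. The point is how little ceremony it takes to put a million threads to work on a problem, which is exactly the accessibility that let CUDA’s ecosystem compound.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element: y[i] = a * x[i] + y[i].
// The same few lines run across thousands of threads in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory, visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("y[0] = %.1f (expect 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

A physicist can swap the arithmetic for a force calculation; an ML engineer, for a matrix multiply. That generality, plus nearly two decades of libraries built on top of it, is the moat Witt describes.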
The public’s fascination—without deep awareness
I was struck that we are often enthralled by AI demos while remaining hazy on the plumbing that makes them possible. Survey research consistently finds broad public support for science and technology paired with shallow mental models of how complex systems work. That gap is especially wide with chips and data centers: most people encounter the spectacle of generative AI but not the racks, cooling loops, grid connections, and specialized software that convert electricity into inference. Witt’s behind-the-scenes account helps close that gap.
And the stakes of that awareness gap are growing. The International Energy Agency estimates that data centers consumed about 1.5% of global electricity in 2024, with demand rising quickly as AI workloads scale. Nvidia itself frames the future not as “data centers” but “AI factories,” explicitly describing facilities that manufacture intelligence by turning power into models and decisions. Leaders who sell the sizzle of AI must also own the infrastructure story—costs, constraints, and trade-offs included.
Lessons for global work—and a new burden on leaders like Jensen Huang
1) Platform bets reshape global labor markets. CUDA wasn’t only a technical choice; it was a labor-market policy. By creating a programming model and toolchain, Nvidia effectively “standardized” a global pool of skills—from Barcelona to Bangalore—around its hardware. Companies that ride these platforms gain access to talent and libraries; those that don’t face recruitment, retraining, and ecosystem penalties. For HR and mobility leaders, that means skills strategies must track platform gravity, not just generic AI skills.
2) Geopolitics is now a first-order business variable. Export controls, market access, and national security are no longer background noise. Nvidia’s China exposure and the evolving U.S. rules on advanced chips show how quickly revenues, partnerships, and supply plans can be rewritten by policy. Global leaders need real scenario planning (and country-level talent hedging) baked into product and go-to-market strategy, not just legal compliance after the fact.
3) The energy constraint is a management constraint. If ever more companies become AI factories, then energy strategy becomes core strategy—site selection, power purchase agreements, grid interconnects, thermal management, and sustainability claims all move from facilities to the C-suite. The IEA and others project sharp growth in data-center load tied to AI; boards should be asking for energy budgets alongside model roadmaps.
4) Communicate the plumbing. Public legitimacy for AI will rest on leaders’ willingness to explain the less glamorous parts: why chips matter, why supply chains are fragile, why energy use is rising, and what concrete steps are being taken (efficiency, siting near low-carbon generation, model optimization).
5) Build cosmopolitan teams—and governance that travels. Nvidia’s story is global: U.S. design culture, Asian manufacturing, European tooling, worldwide customers. The work of making frontier tech safe and useful is likewise global—standards, export compliance, privacy regimes, and labor norms vary by jurisdiction. Leaders should invest in portable governance frameworks that localize responsibly without fracturing execution.
Practical takeaways for executives and global mobility leaders
- Map your dependency stack. Inventory which parts of your AI strategy rely on Nvidia’s ecosystem (or others) across hardware, software, and talent. Where are your single points of failure—in suppliers, skills, or sites?
- Tie AI roadmaps to energy roadmaps. Treat power as a first-class input: model demand, secure long-term contracts (ideally low-carbon), and budget for efficiency work such as quantization, sparsity, and scheduling (see the quantization sketch after this list). Report progress publicly.
- Run geopolitics like a product risk. Assign ownership, build red-team scenarios for export rules or market access changes, and prepare “plan B” configurations that keep your products shippable across jurisdictions.
- Upskill for the platform you bet on. If CUDA is central, make it explicit in hiring, L&D, and partner selection. If you’re betting on alternatives, invest enough to avoid being stranded in a CUDA-centric supply of tools and talent.
- Explain, don’t hype. Borrow a page from Witt’s narrative clarity: tell your workforce, customers, and the public what it really takes to build and run AI systems—chips, code, cooling, and choices—and where your responsibilities begin and end.
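On the quantization point above, a toy sketch of what the technique actually does at the lowest level. This is my own simplification (symmetric, per-tensor INT8 quantization), not anything a production stack such as TensorRT would ship; real systems add calibration data and per-channel scales. The energy logic is visible even here: the same weights in a quarter of the bits means a quarter of the memory traffic per inference.

```cuda
#include <cstdio>
#include <cstdint>
#include <cmath>
#include <cuda_runtime.h>

// Symmetric per-tensor quantization: map FP32 weights to INT8 using one
// shared scale factor. Storing 8 bits instead of 32 cuts memory and
// bandwidth by 4x, which is where much of the energy saving comes from.
__global__ void quantize_int8(int n, float scale, const float *w, int8_t *q) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = roundf(w[i] / scale);
        v = fminf(fmaxf(v, -127.0f), 127.0f);  // clamp to the int8 range
        q[i] = static_cast<int8_t>(v);
    }
}

int main() {
    const int n = 1024;
    float *w;
    int8_t *q;
    cudaMallocManaged(&w, n * sizeof(float));
    cudaMallocManaged(&q, n * sizeof(int8_t));

    // Fill with synthetic "weights" and find the largest magnitude.
    float max_abs = 0.0f;
    for (int i = 0; i < n; ++i) {
        w[i] = sinf(i * 0.01f);
        max_abs = fmaxf(max_abs, fabsf(w[i]));
    }

    float scale = max_abs / 127.0f;  // one scale for the whole tensor
    quantize_int8<<<(n + 255) / 256, 256>>>(n, scale, w, q);
    cudaDeviceSynchronize();

    printf("w[100] = %f -> q[100] = %d (scale = %f)\n", w[100], (int)q[100], scale);
    cudaFree(w);
    cudaFree(q);
    return 0;
}
```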
Verdict
The Thinking Machine is a brisk, well-reported introduction to the hardware realities behind AI’s soft-focus hype. It gives us a protagonist in Jensen Huang, but it also gives us the less telegenic truths of platform moats, energy budgets, export rules, and globalized work. Read it as a leadership case: how a long-odds bet on parallel computing, married to an ecosystem play, created enormous value—and how that value now carries societal obligations that can’t be delegated to comms teams or regulators.
