Google: The AI Company

🎙️ PODCAST INFORMATION

  • Title: 🎙️ Podcast Review: Ben Gilbert & David Rosenthal (Acquired)
  • Series: Acquired
  • Episode: Google: The AI Company
  • Hosts: Ben Gilbert (Co-founder, Pioneer Square Labs) & David Rosenthal (Angel Investor)
  • Guest(s): Primary research interviews with Jeff Dean, Demis Hassabis, Sundar Pichai, Sebastian Thrun, and 15+ Google/DeepMind executives
  • Duration: Approximately 4 hours and 6 minutes

📓 Full Episode Info here

https://www.acquired.fm/episodes/google-the-ai-company

🎣 HOOK

The world’s greatest business faces its greatest test. Google invented the Transformer—the breakthrough technology powering every modern AI system—employed nearly all the top AI talent, built the best dedicated AI infrastructure, and deployed AI at massive scale years before anyone else. Yet on November 30, 2022, ChatGPT’s launch caught them completely flat-footed. How does a company that literally wrote the blueprint for the AI revolution find itself playing catch-up to a nonprofit-turned-startup? The answer reveals a masterclass in innovation, hubris, and the most expensive dilemma in business history.

💡 ONE-SENTENCE TAKEAWAY

Despite having invented the core technology, talent, and infrastructure that powers today’s AI revolution, Google faces the classic innovator’s dilemma of whether to fully embrace AI at the risk of cannibalizing their massively profitable search business.

📝 SUMMARY

Who Are the Storytellers?

Ben Gilbert and David Rosenthal, hosts of Acquired, deliver their most ambitious episode yet—a 4+ hour deep dive into Google’s 20+ year AI journey. Unlike typical podcast reporting, they conducted primary research with Google royalty: Jeff Dean (the “engineering equivalent of Chuck Norris”), DeepMind CEO Demis Hassabis, CEO Sundar Pichai, and Sebastian Thrun (founder of Waymo). Their access and meticulous sourcing—drawing from Steven Levy’s In the Plex, Parmy Olson’s Supremacy, and Cade Metz’s Genius Makers—creates an unprecedented oral history.

The Core Paradox

The episode opens with a stark contradiction: while Google defined modern AI through the 2017 Transformer paper, its authors (eight Google Brain researchers, including Noam Shazeer) left to create the very competitors now threatening Google’s dominance. The hosts frame this as “perhaps the most classic textbook case of the innovator’s dilemma ever,” where the technology that should secure Google’s future instead arms its rivals.

The Hidden Origins (2001-2011)

The story begins not in 2017, but in a Google micro-kitchen in 2001, where early engineers Noam Shazeer and Georges Harik theorized that “compressing data is technically equivalent to understanding it.” This insight birthed “Phil,” Google’s first production language model, launched in 2003 to power AdSense and spelling correction. The 2011 “cat paper” (in which a nine-layer neural network taught itself to recognize cats from 10 million unlabeled YouTube frames on 16,000 CPU cores) proved unsupervised learning could work at Google scale, quietly generating hundreds of billions in revenue through YouTube recommendations over the next decade.

The DeepMind Catalyst

The 2014 DeepMind acquisition emerges as the butterfly effect that created OpenAI. When Google acquired DeepMind for roughly $550 million, Elon Musk, who had invested in DeepMind and wanted to merge it with Tesla, went “ballistic.” That fury led to the fateful 2015 Rosewood Hotel dinner where Musk and Sam Altman asked AI researchers, “What would it take to get you out of Google?” Ilya Sutskever’s initial answer of “nothing” turned into “I’m in” after Google rejected his pleas for independence, triggering the talent exodus that formed OpenAI.

The Transformer & The Great Mistake

In 2017, eight Google researchers published “Attention Is All You Need,” the Transformer paper now cited over 173,000 times (the seventh most-cited paper of the 21st century). Google published it openly even as Shazeer built an internal chatbot (Meena, later LaMDA) that could have launched years before ChatGPT. The episode reveals Google’s internal resistance: fears of brand damage, legal liability, and (most critically) threats to the $140B search ad model. The hosts call this “the greatest corporate decision for humanity and perhaps the worst for Google.”

The Response & Waymo’s Parallel

Post-ChatGPT, Sundar Pichai issued a “code red,” merging Brain and DeepMind and launching Gemini. Yet the episode’s coda reveals Google is winning elsewhere: Waymo, born from the 2004 DARPA Grand Challenge, now operates in five cities with 10+ million paid rides, handling more volume than Lyft in San Francisco and achieving 91% fewer serious crashes than human drivers. This 20-year moonshot, costing an estimated $10-15B, may become another Google-sized business, proving Google can still deliver world-changing innovation when it has no legacy to protect.

🧠 INSIGHTS

Core Insights

  • The Cat Paper Was the Real AI Big Bang: While AlexNet gets credit for the “big bang of AI,” Google’s 2011 unsupervised learning paper (where a neural network learned “cat neurons” from raw YouTube frames) proved deep learning could scale on Google’s distributed infrastructure, quietly powering YouTube’s recommendation engine and generating hundreds of billions in revenue without public fanfare.

  • Elon Musk as Accidental Google Disruptor: Elon’s fury over losing DeepMind to Google directly catalyzed OpenAI’s creation. His 2015 Rosewood Hotel dinner pitch “let’s build a nonprofit AI lab free from Google and Facebook” only worked because Google had already acquired DeepMind and refused to give it independence, making OpenAI’s mission credible to researchers.

  • Parallelization Is the Hidden Moat: Google’s true advantage wasn’t just data but its mastery of distributed systems. Jeff Dean’s DistBelief framework (2011) and the TPU architecture (2015) enabled asynchronous training across thousands of machines, something competitors still struggle to replicate at Google scale.

  • The 5-Turn Safety Limit Reveals Cultural Paralysis: When Google finally launched a public chatbot in 2022, they limited conversations to five turns to prevent dangerous outputs, a stark contrast to OpenAI’s “ship it and iterate” ethos. This reveals how Google’s risk aversion, born of monopoly protection, became its biggest liability.

  • Waymo Proves Google Can Still Win from Scratch: While Google fumbled the AI chat race, Waymo, a 20-year moonshot with no legacy constraints, now performs 10 million paid rides annually with a 91% reduction in serious accidents. It demonstrates that when Google operates like a startup, it can still create category-defining, self-sustaining businesses.
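
The “parallelization is the hidden moat” point above can be sketched in miniature. The toy below imitates the parameter-server style of asynchronous data-parallel training that DistBelief popularized: workers compute gradients on their own data shards and push updates to shared parameters without waiting for one another. Everything here is illustrative, with threads standing in for thousands of machines and a single-parameter “model” standing in for a neural network.

```python
import threading
import random

class ParameterServer:
    """Shared parameter store; workers push gradient updates asynchronously."""
    def __init__(self, w0=0.0, lr=0.05):
        self.w = w0
        self.lr = lr
        self.lock = threading.Lock()  # serialize writes to the shared parameter

    def apply_gradient(self, grad):
        with self.lock:
            self.w -= self.lr * grad

def worker(ps, shard):
    # Each worker minimizes (w - target)^2 on its own shard,
    # reading a possibly stale w and pushing its gradient anyway.
    for target in shard:
        grad = 2.0 * (ps.w - target)
        ps.apply_gradient(grad)

random.seed(0)
data = [1.0 + random.gauss(0, 0.1) for _ in range(400)]  # targets near 1.0
ps = ParameterServer()
shards = [data[i::4] for i in range(4)]  # 4 workers, 4 disjoint data shards
threads = [threading.Thread(target=worker, args=(ps, s)) for s in shards]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(round(ps.w, 2))  # converges near 1.0 despite stale, unsynchronized reads
```

The key property (and the bet DistBelief made) is that SGD tolerates the staleness: workers never coordinate, yet the shared parameter still converges.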

How This Connects to Broader Trends

  • The Innovator’s Dilemma in Real-Time: This is the first time in history we’ve been able to watch a monopoly live-tweet its own disruption. Google’s public earnings calls, product launches, and the Microsoft partnership drama provide an unprecedented case study in how incumbents respond when their core product becomes obsolete.

  • AI Economics Are Fundamentally Different: Unlike software’s 80% gross margins, AI models operate at ~50% margins due to compute costs. This shifts competitive advantage to the lowest-cost infrastructure provider, a position Google uniquely occupies through TPUs and proprietary data centers, potentially making it the winner in a low-margin token economy.

  • The End of the “Don’t Be Evil” Era: Google’s original motto presumed technology was neutral. But AI’s power to generate dangerous content, displace jobs, and concentrate power forced Google into a defensive crouch. The episode suggests “move fast and break things” has been replaced by “move fast and get sued,” creating openings for less scrupulous competitors.

🏗️ FRAMEWORKS & MODELS

Clayton Christensen’s Innovator’s Dilemma

A theory explaining why great companies fail when faced with disruptive technologies. The framework shows how:

  1. Sustaining vs. Disruptive Innovation: Google treated AI as sustaining (improving search) until ChatGPT made it disruptive (replacing search)
  2. The Profit Paradox: New AI products can’t match search’s $140B annual profit, creating internal resistance to cannibalization
  3. Resource Allocation Dilemma: Google’s best engineers worked on AI, but its best business people protected search ads

Rich Sutton’s “Bitter Lesson”

The principle that in AI history, general methods leveraging computation always win over human knowledge:

  1. Scale Beats Hand-Coding: Google’s early spell-checking language models (Phil) vs. rule-based systems
  2. The Transformer Proves It: Simple attention mechanisms outperformed complex LSTMs by scaling data and compute
  3. Implication for Google: Their infrastructure advantage matters more than their search algorithms

Hamilton Helmer’s Seven Powers (Applied to Google AI)

The strategic forces that create persistent differential returns:

  1. Scale Economies: Amortizing $130M training costs across quadrillions of inference tokens
  2. Cornered Resource: Google.com as the internet’s default front door
  3. Process Power: Jeff Dean’s culture of rewriting entire systems in a weekend
  4. Branding: Public trust in Google vs. skepticism of AI startups
  5. Switching Costs: Potential future lock-in via personalized AI integrated with Gmail/Calendar
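
As a quick check on the scale-economies point, the episode’s own figures imply a strikingly low amortized training cost per token. The quadrillion-token denominator is the hosts’ order-of-magnitude claim, not an official Google number, so treat this as back-of-the-envelope arithmetic only.

```python
# Amortizing a one-time training run across served inference tokens,
# using the episode's illustrative figures.
training_cost_usd = 130_000_000              # ~$130M training run (episode figure)
tokens_served = 1_000_000_000_000_000        # one quadrillion tokens (hosts' claim)

cost_per_million_tokens = training_cost_usd / tokens_served * 1_000_000
print(f"${cost_per_million_tokens:.2f} per million tokens")  # prints "$0.13 per million tokens"
```

The fixed cost per token shrinks toward zero as volume grows, which is exactly why scale economies favor whoever serves the most tokens.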

💬 QUOTES

  1. “If we had the ultimate search engine, it would understand everything on the web… That’s obviously artificial intelligence.” — Larry Page, 2000, revealing Google’s AI-first vision from day one

  2. “Sanjay thinks it’s a good idea and no one in the world is as smart as Sanjay. So why should Noam and I accept your view that it’s a bad idea?” — Georges Harik, on defying managers to build Google’s first language model, illustrating Google’s early 20% time culture

  3. “The speed of light in a vacuum used to be about 35 mph. Then Jeff Dean spent a weekend optimizing physics.” — Google internal “Jeff Dean fact”, capturing the mythology around their most prolific engineer

  4. “This is the story of how the world’s greatest business faces its greatest test: can they disrupt themselves without losing their $140B annual profit-generating machine in Search?” — Ben Gilbert, framing the episode’s central tension

  5. “I want people to know that we made Google dance.” — Satya Nadella, Microsoft’s CEO, after launching AI-powered Bing and successfully provoking Google into a reactive posture

  6. “We’re nowhere near doing that now. However, we can get incrementally closer and that is basically what we work on here.” — Larry Page, 2000, showing patience for incremental AI progress that vanished when disruption arrived

  7. “It had never been told what a cat was, but it had seen enough examples… that neuron would then turn on for cats and not much else.” — Jeff Dean, on the 2011 breakthrough proving unsupervised learning works

  8. “Why? What is the technical reason that this is impossible?” — Larry Page, to Sebastian Thrun about self-driving cars, forcing confrontation with fear rather than engineering barriers

  9. “We needed someone crazy enough to fund an AGI company… that liked super ambitious stuff.” — Shane Legg, DeepMind co-founder, describing why they targeted Peter Thiel

  10. “If you don’t have a foundational frontier model or you don’t have an AI chip, you might just be a commodity in the AI market. And Google is the only company that has both.” — Ben Gilbert, summarizing Google’s unique position

🎯 HABITS

Product Development Habits

  • Parallelize Everything: Jeff Dean’s core principle: whether translating sentences, training neural networks, or cooling data centers, break problems into independent chunks that can run simultaneously across Google’s distributed infrastructure
  • Research-First, Product-Second: DeepMind’s “solve intelligence” mission meant refusing product pressure for nearly a decade, allowing breakthroughs like AlphaGo that had no immediate business application
  • 20% Time as Pressure Valve: Shazeer and Harik’s 2001 language model work happened because “everybody was just doing whatever they wanted to do” after Page fired all engineering managers

Leadership Habits

  • Acquire Talent Pre-Revenue: Google’s DNN Research acquisition (2012) for $44M and DeepMind acquisition (2014) for $550M were pure talent/IP plays with zero products or revenue
  • Charter Planes for Key People: When Geoff Hinton couldn’t sit during flights due to back issues, Google chartered a private jet with a custom harness rather than miss the DeepMind diligence trip, illustrating “whatever it takes” talent acquisition
  • Issue Company-Wide “Code Reds”: Sundar Pichai’s December 2022 edict instantly reoriented all of Google from treating AI as sustaining to disruptive innovation

Personal Habits

  • Build Fictional Mythologies: Google maintained internal “Jeff Dean facts” (Chuck Norris-style legends) to reinforce cultural values around technical excellence
  • The “Larry 1000” Benchmark: Page created a personal driving test (10 stretches, 1,000 miles) to give Waymo a concrete goal, showing how leaders externalize internal metrics
  • Cross-Domain Obsession: Demis Hassabis’s path from chess prodigy → video game developer → neuroscience PhD → AGI founder demonstrates how breakthrough innovators connect unrelated domains

📚 REFERENCES

  • In the Plex by Steven Levy (2011): Primary source for Google’s early AI work, including the 2001 micro-kitchen conversation that birthed Phil
  • Supremacy by Parmy Olson (2022): DeepMind’s origin story, the 2014 acquisition drama, and the Facebook bidding war
  • Genius Makers by Cade Metz (2021): The DNN Research auction details and OpenAI founding
  • “Attention Is All You Need” (2017): The Transformer paper, now the 7th most-cited paper of the 21st century
  • The Innovator’s Dilemma by Clayton Christensen: The theoretical framework that explains Google’s current strategic paralysis
  • DARPA Grand Challenge (2004-2005): Sebastian Thrun’s Stanford team victory that seeded Waymo
  • AlexNet (2012): The GPU breakthrough that made deep learning practical
  • “The Bitter Lesson” by Rich Sutton (2019): The principle that scale and compute beat human knowledge in AI

✅ QUALITY & TRUSTWORTHINESS NOTES

  • Direct Access to Primary Sources: Hosts interviewed Jeff Dean, Demis Hassabis, Sundar Pichai, Sebastian Thrun, and 15+ other Google/DeepMind executives—rare access for independent journalists
  • Specific Financial Metrics: Cite exact figures: $44M DNN Research acquisition, $550M DeepMind price, $130M for 40,000 GPUs, $10-15B total Waymo investment, $140B Google search profits, 173,000+ Transformer citations
  • Cross-Referenced Sources: Triangulate stories across multiple books (Levy, Olson, Metz) and interviewees, noting where timelines differ
  • Empirical Scale: Document specific compute figures—16,000 CPU cores for cat paper, 2-3M TPUs deployed, quadrillions of inference tokens processed
  • Transparency on Uncertainty: Acknowledge unknowns, such as whether GPT-3.5 had RLHF at launch, the exact Microsoft-OpenAI ownership structure, and future AGI timelines
  • Institutional Context: Place events within broader industry shifts—mobile transition, cloud wars, antitrust cases—showing how external pressures shaped decisions

Crepi il lupo! 🐺