Build to Last - Chris Lattner talks with Jeremy Howard


📝 VIDEO INFORMATION

  • Content Type: Interview / Discussion
  • Title: “Build to Last - Chris Lattner talks with Jeremy Howard”
  • Creator(s): Jeremy Howard (Interviewer), Chris Lattner (Interviewee)
  • Platform: YouTube
  • Duration: ~1h 15m
  • Publication Date: October 30, 2025
  • Link: https://www.youtube.com/watch?v=WJS2YDZO-vc&t=3s

E-E-A-T Assessment

  • Experience: 5/5 - Chris Lattner created LLVM (used by most modern programming languages), Swift (Apple’s language), and is building Mojo. Jeremy Howard founded fast.ai and has been at the forefront of practical AI for over a decade. Both have decades of hands-on experience building foundational systems.
  • Expertise: 5/5 - Deep technical expertise across compiler design, programming languages, AI systems, and software engineering practices. Both operate at the frontier of their respective fields.
  • Authoritativeness: 5/5 - Chris is the creator of some of the most important infrastructure in modern computing. Jeremy is a leading AI educator and researcher. Their combined perspective represents authoritative voices on both systems programming and AI.
  • Trust: 5/5 - Both speakers are transparent about their failures, concerns, and uncertainties. They share specific examples from their companies and don’t shy away from criticizing hype. No commercial agenda beyond genuine knowledge sharing.

Verdict: Proceed with review - This is a conversation between two of the most respected practitioners in computing, sharing hard-won insights from decades of building foundational systems. Their credibility is exceptional.

🎯 HOOK

When CEOs brag about 10,000 lines of AI-generated code per day, are we racing toward a future where no one understands how anything works? Chris Lattner, creator of LLVM, Swift, and now Mojo, sits down with Jeremy Howard to discuss why rushing to AI-generated code may be destroying the very craftsmanship needed to build software that lasts. This isn't a conversation about whether to use AI; it's about how to use AI without losing the mastery that separates durable systems from technical debt.

💡 ONE-SENTENCE TAKEAWAY

The future belongs to developers who use AI to enhance their understanding and craftsmanship, not those who delegate thinking to AI agents, because software that lasts requires humans who understand the systems they build.

⚖️ VERDICT

Overall Rating: 10/10

This conversation is essential viewing for anyone writing code in the AI age. Two of computing's most influential builders discuss a topic most are afraid to touch: whether the current trajectory of AI-assisted coding is sustainable. They don't reject AI (both use it daily), but they make a compelling case that how you use it matters more than whether you use it. The discussion spans technical philosophy, career advice, company culture, and concrete examples of AI coding gone wrong. What's rare is the combination of deep technical credibility with a willingness to question the prevailing narrative. This isn't contrarianism for its own sake; it's wisdom from people who have built systems that have lasted decades.

📊 EVALUATION CRITERIA

| Criterion | Score (/10) | Key Observation |
| --- | --- | --- |
| Content Depth | 10 | Exceptionally deep discussion spanning philosophy of software engineering, practical AI usage patterns, career development, and concrete examples. Covers technical, ethical, and professional dimensions. |
| Narrative Structure | 9 | Well-organized progression from shared background through current concerns to practical advice. Strong hook and logical development of themes. |
| Visual Quality | 7 | Standard video-call interview setup; adequate but not cinematic. Focus is appropriately on conversation content. |
| Audio Quality | 9 | Clear dialogue throughout. Both speakers audible and well balanced. Natural conversational flow. |
| Evidence & Sources | 10 | Specific examples from their companies (Modular, Answer AI), concrete stories of AI failures, historical parallels (2017 self-driving predictions). Primary-source operational data. |
| Originality | 10 | Rare counter-narrative to the AI coding hype. Contrarian but grounded perspective from practitioners with unmatched credibility. Novel frameworks for AI-assisted development. |

📖 SUMMARY

This conversation brings together two giants of modern computing, Chris Lattner (creator of LLVM, Swift, and Mojo) and Jeremy Howard (founder of fast.ai), to discuss a topic most in tech avoid: whether the rush to AI-generated code is destroying software craftsmanship.

The discussion begins with their shared history, going back to the first TensorFlow Dev Summit in 2017. Both arrived with backgrounds that were unusual for the AI world at the time: Chris as a systems and compiler expert, Jeremy as someone who had built products and cared deeply about developer experience. They bonded over their "mutual distaste for TensorFlow" and collaborated on Swift for TensorFlow, an attempt to build something better from first principles.

The core of the conversation explores what makes software last. Chris points out that LLVM, which he started over Christmas break in 2000, is still at the heart of most programming languages 25 years later. This wasn't luck; it was architectural excellence, engineering culture, and craftsmanship. The question they pose: how do you build systems that last when the current trend is toward generating as much code as possible as quickly as possible?

Both speakers share specific examples of AI coding failures at their companies. Jeremy describes months where his team at Answer AI tried aggressive agentic workflows, resulting in productivity and morale "falling off a cliff." Chris recounts a senior engineer who used an AI agent to "fix" a bug: the agent made the symptom go away but introduced new bugs and messy code that would have made the product worse had it been merged.

The conversation draws a crucial distinction between using AI as a tool for learning and exploration versus using it as a replacement for understanding. Both use AI coding tools daily (Chris estimates a 10-20% productivity improvement; Jeremy sees similar gains), but they emphasize that the gains come from AI as an assistant, not a replacement. "Vibe coding" (hoping AI will solve your problems without your understanding the solution) is identified as a career killer.

They also discuss the parallel between current AGI hype and the 2017 self-driving car predictions. In 2017, there was near-universal confidence that self-driving cars would be solved by 2020. The same pattern is repeating with AGI: assumptions that more data and compute will inevitably lead to human-level AI, even though, they argue, current approaches show no clear path there.

The conversation concludes with practical advice for developers at all levels: focus on mastery, build tight iteration loops, invest in your tools and understanding, and don’t chase hype at the expense of craft. They emphasize that the developers who will thrive are those who use AI to learn faster and build better, not those who use AI to avoid learning altogether.

What the Video Covers

The conversation follows a clear thematic structure:

(00:00) Introduction and Shared History - Jeremy introduces Chris and recounts their first meeting at the 2017 TensorFlow Dev Summit, their collaboration on Swift for TensorFlow, and their shared frustration with existing AI infrastructure.

(03:00) Building from First Principles - Both discuss their approach to building: understanding fundamentals, identifying what’s broken, and having the conviction to rebuild. Chris’s journey from LLVM to Swift to Mojo exemplifies this pattern.

(07:00) Software That Lasts - Chris explains how LLVM, built 25 years ago, remains foundational for modern languages. The discussion turns to what makes systems durable: architecture, culture, and craftsmanship.

(11:00) The Self-Driving Car Parallel - Chris recounts his time at Tesla in 2017 when everyone believed self-driving would be solved by 2020. The same hype patterns are repeating with AGI today.

(16:00) The AGI Question - Jeremy, whose ULMFiT work helped pioneer the language-model fine-tuning behind modern LLMs, explains that LLMs were never designed to create AGI: they're pattern predictors, not paths to general intelligence. Both express uncertainty about AGI timelines.

(21:00) AI Coding Reality Check - Jeremy describes months where Answer AI tried aggressive agentic workflows, resulting in plummeting productivity and morale. The promised 10x gains didn’t materialize.

(25:00) Concrete Examples of AI Failures - Chris shares a story of a senior engineer using AI to "fix" a bug: the AI made the symptom disappear but introduced worse problems. The code would have created technical debt if merged.

(30:00) Vibe Coding vs. Mastery - The distinction between using AI as a learning tool and delegating understanding to it. "Vibe coding" (hoping AI solves problems you don't understand) is identified as dangerous.

(35:00) Unit Tests as Technical Debt - Chris explains why AI-generated unit tests can be worse than no tests: they may pin implementation details rather than concepts, creating brittle, tightly coupled code that's hard to refactor.

(40:00) Tight Iteration Loops - Both emphasize the importance of fast feedback cycles: incremental builds in seconds, tests in under 30 seconds. This requires architectural investment and craftsmanship.

(45:00) Career Advice for Junior Engineers - Practical guidance for developers entering the field: don’t chase hype, invest in mastery, find companies that value craft, differentiate yourself by understanding deeply.

(52:00) Tools and Environment - Discussion of development environments that enable tight iteration: Smalltalk, Lisp, Jupyter notebooks, nbdev, and the importance of tools that let you see the state of your work.

(60:00) AI as Collaborative Partner - Jeremy describes their approach at Answer AI: the AI sees everything the human sees, and vice versa. He cites Shell Sage, roughly 100 lines of code that let an LLM see your terminal history.
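The Shell Sage idea is simple enough to sketch. What follows is a hedged illustration only, not the actual Shell Sage source; the `recent_history` and `build_prompt` helpers are invented for this example. The point is that the model is handed the same context the human is looking at:

```python
# Hypothetical sketch of the "AI sees what you see" idea (NOT the real
# Shell Sage code): bundle recent terminal history into an LLM prompt.
import os

def recent_history(path=os.path.expanduser("~/.bash_history"), n=20):
    """Return up to the last n shell commands, or [] if no history file."""
    try:
        with open(path) as f:
            lines = [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        return []
    return lines[-n:]

def build_prompt(question, history):
    """Compose a prompt in which the model shares the user's context."""
    context = "\n".join(f"$ {cmd}" for cmd in history)
    return (
        "You are a shell assistant. The user's recent commands:\n"
        f"{context}\n\n"
        f"Question: {question}\n"
    )

# The resulting string would then be sent to whatever LLM API you use.
prompt = build_prompt("why did the last command fail?", recent_history())
```

The plumbing is trivial by design; the substance is the collaboration model, where neither party works from context the other can't see.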

(67:00) The Future of Programming - Both believe AI tools will improve dramatically, but emphasize that how you use them matters more than whether you use them. The bifurcation of skills: those who use AI to learn vs. those who use AI to avoid learning.

(72:00) Mojo and Modular - Chris discusses Mojo’s progress: over 500,000 lines of Mojo code, open-source development, and the lessons learned from building Swift applied to building Mojo properly from the start.

Who Created It & Why It Matters

This conversation features two of the most influential practitioners in modern computing:

Chris Lattner is the primary interviewee: a legendary systems programmer whose creations underpin much of modern computing. LLVM, which he started as a PhD project in 2000, is now used by Rust, Julia, Swift, and countless other languages. He built the Clang compiler, then created Swift at Apple, and now leads development of Mojo at Modular. His perspective matters because he has spent 25 years building foundational infrastructure that has remained relevant across multiple technology waves. He isn't anti-AI (he uses AI coding tools daily), but he brings deep historical perspective on what makes systems last.

Jeremy Howard is the interviewer and co-discussant: founder of fast.ai, the AI education platform that has taught millions, and of Answer AI, a research lab focused on practical AI applications. He has been at the forefront of AI research and application for over a decade, having created ULMFiT, the language-model fine-tuning method that prefigured modern LLMs. His perspective matters because he is both an AI researcher and a builder of practical tools. He isn't defending human coding out of nostalgia: he genuinely tried aggressive AI coding workflows and found them wanting.

The conversation matters because it provides a rare counter-narrative to the prevailing AI coding hype. While most voices are either uncritically enthusiastic about AI coding or dismissively skeptical, these two practitioners occupy a nuanced middle ground: AI coding tools are valuable but must be used thoughtfully. They share specific failures, quantify their actual productivity gains (10-20%, not 10x), and provide concrete advice for developers navigating this transition.

Core Argument & Evidence

The central thesis is that software craftsmanship (the care, understanding, and architectural thinking that goes into building systems) is under threat from the rush to AI-generated code, but that developers who use AI thoughtfully to enhance rather than replace their understanding will thrive.

Evidence supporting this argument:

  1. Historical parallel: The 2017 prediction that self-driving would be solved by 2020 proved wrong; current AGI hype follows the same pattern.
  2. Concrete failures: Specific examples from their companies where AI coding made things worse, not better.
  3. Productivity reality: Both report 10-20% gains from AI tools, not the 10x promised by hype.
  4. System longevity: LLVM (25 years old) and the Linux kernel (decades old) required craftsmanship to last.
  5. Technical debt examples: AI-generated unit tests that pin implementation details, creating brittle code.
  6. Career pattern observation: Chris has watched career arcs over 20 years; the successful ones invest in mastery.

Logical structure:

  • Premise 1: Software that lasts requires architectural thinking and craftsmanship
  • Premise 2: Current AI coding trends encourage rapid code generation without understanding
  • Premise 3: This creates technical debt and erodes the skills needed for durable systems
  • Premise 4: AI tools can be used thoughtfully to enhance understanding and productivity
  • Conclusion: The future belongs to developers who use AI to build mastery, not those who use AI to avoid it

The argument is compelling because it comes from practitioners who have actually tried the approaches they're critiquing. Jeremy's team at Answer AI spent months trying aggressive AI workflows. Chris uses AI coding tools daily. They're not rejecting AI; they're critiquing how it's being used.

Practical Applications

For junior developers:

  • Don't "vibe code": using AI without understanding is a career killer
  • Invest in mastery: learn fundamentals, understand architecture, build deep skills
  • Find companies that value craftsmanship, not just lines of code shipped
  • Differentiate yourself by understanding things others don’t
  • Use AI to learn faster, not to avoid learning

For senior developers:

  • Maintain tight iteration loops: incremental builds in seconds, tests in under 30 seconds
  • Review AI-generated code carefully; don't assume it understands architecture
  • Invest in tooling that enables rapid feedback without sacrificing understanding
  • Be the senior engineer who actually understands the codebase deeply

For engineering managers:

  • Don't measure productivity by lines of code; measure by product progress
  • Watch for “vibe coding” culture where people stop understanding what they’re building
  • Code review becomes even more important with AI-generated code
  • Consider whether your metrics encourage technical debt

For companies:

  • Be wary of the culture that says “just let AI handle it”
  • Building products requires the team to understand the architecture
  • AI coding is transformative for prototypes, less so for production systems
  • Technical debt from AI-generated code may not show up immediately

🔍 INSIGHTS

Core Insights

  • The Self-Driving Parallel is Exact: Chris's description of Tesla in 2017, when everyone believed self-driving would be solved by 2020, mirrors today's AGI hype with uncanny precision. The same certainty, the same "just need more data" reasoning, the same dismissal of skeptics. History doesn't repeat, but it rhymes, and this rhyme should give everyone pause.

  • Vibe Coding is Learned Helplessness: Jeremy's description of developers using AI like a "gambling machine" (waiting for the agent to maybe produce something useful, coaching it, trying again) describes learned helplessness, not productivity. The developers who succeed will be those who use AI as a senior advisor, not a replacement for thinking.

  • 10-20% Gains, Not 10x: Both practitioners report modest productivity improvements from AI coding tools: Chris estimates 10-20%, and Jeremy sees similar gains. This contrasts sharply with the 10x claims from CEOs and VCs. The reality: AI is a useful tool, not a transformation of the development process.

  • Unit Tests Can Be Technical Debt: Chris's insight that AI-generated unit tests often test implementation details rather than concepts is crucial. These tests become anchors that make refactoring harder, not easier. More code, whether written by a human or an AI, isn't always better.

  • The Bifurcation of Skills: Both predict a split between developers who use AI to learn faster (getting better and better) and those who use AI to avoid learning (getting worse relative to AI capabilities). The gap between these groups will widen over time.

  • Tight Iteration Loops Enable Mastery: The emphasis on fast feedback (incremental builds in seconds, tests in under 30 seconds) isn't about speed for speed's sake. It's about enabling the exploration and learning that builds mastery. Slow tools force you to batch work, which reduces learning per unit of effort.

  • Architecture Enables Longevity: LLVM survived 25 years because of architectural excellence, not luck. The systems being built today with AI-generated code may not last because they lack the architectural thinking that makes evolution possible.

  • Craftsmanship is Under Threat: The current culture that brags about lines of AI-generated code and treats understanding as optional is actively hostile to craftsmanship. This matters because craftsmanship is what makes systems maintainable and evolvable.
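The unit-test insight above is easiest to see in code. A minimal sketch (the `Stack` class and the test names are invented for illustration, not taken from the talk):

```python
# Two ways to test the same tiny class. The first pins an internal
# detail; the second tests the concept (LIFO order).
class Stack:
    def __init__(self):
        self._items = []  # private representation: a Python list

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

def test_pins_implementation():
    # Brittle: asserts on the private list, so swapping in a deque (or
    # renaming _items) fails this test even though behavior is unchanged.
    s = Stack()
    s.push(1)
    assert s._items == [1]

def test_lifo_behavior():
    # Durable: asserts only the observable contract, so it survives
    # refactoring. This is the kind of test worth keeping.
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2
    assert s.pop() == 1
```

A suite full of tests like the first one anchors the code to its current shape, which is exactly the technical debt Chris describes: the tests pass today and block refactoring tomorrow.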

How This Connects to Broader Trends/Topics

  • The Decade of Agents: This conversation provides a crucial counterpoint to the agentic AI enthusiasm. While others celebrate “vibe coding” and 10,000-line days, these practitioners warn that delegation without understanding creates technical debt.

  • Software Engineering Skill Evolution: The skills that matter are shifting, but not in the way hype suggests. Understanding architecture, thinking about abstractions, and building mastery are becoming more valuable, not less.

  • The AGI Timeline Debate: Jeremy, whose ULMFiT work was an early forerunner of modern LLMs, states clearly that LLMs were never designed for AGI and that there's no particular reason to think the current path leads there. This matters coming from someone who was right about LLM capabilities before most recognized them.

  • Technical Debt in the AI Age: The conversation identifies a new form of technical debt: code that works but nobody understands. This debt doesn’t show up immediately but makes future evolution harder.

  • Developer Career Development: The advice to junior developers (invest in mastery while others chase hype) is timeless wisdom applied to a new context. The developers who thrive will be those who differentiate through depth.

  • Productivity Metrics in Engineering: The critique of measuring lines of code written by AI connects to broader conversations about what engineering productivity actually means. Product progress, not code volume, is what matters.

🛠️ FRAMEWORKS & MODELS

The AI-Assisted Craftsmanship Framework

A model for using AI coding tools to enhance rather than replace understanding.

  • Components:

    • AI as senior advisor: Treat AI as a knowledgeable colleague who can suggest approaches, not as a replacement for your thinking
    • Verification discipline: Always understand AI-generated code before committing it
    • Learning orientation: Use AI to accelerate learning, not to avoid it
    • Architectural thinking: Maintain human ownership of architecture and design decisions
  • How it Works: Rather than delegating coding to AI, you engage in a dialogue where AI suggests, you evaluate, and you decide. The AI helps you explore options faster and learn new APIs, but you maintain understanding of what the code does and why.

  • Significance: This framework explains how to get value from AI tools without creating technical debt or eroding skills. It positions AI as an accelerator of mastery rather than a replacement for it.

  • Evidence: Both speakers describe using AI this way daily. Chris estimates 10-20% productivity gains. Jeremy describes using AI for exploration while maintaining understanding of the result.

The Vibe Coding Anti-Pattern

A description of how not to use AI coding tools, based on observed failures.

  • Components:

    • Delegation without understanding: Letting AI write code you don’t comprehend
    • Hope-based development: Waiting for AI to “maybe” produce something useful
    • Surface-level fixes: Accepting code that makes symptoms disappear without addressing root causes
    • Accumulation of technical debt: Creating code that works today but blocks future evolution
  • How it Works: Developers treat AI as a magic solution that will solve problems without their involvement. They describe what they want, wait for AI to generate code, and accept it if it appears to work. This creates code that nobody understands, making debugging and evolution harder over time.

  • Significance: This anti-pattern explains why some teams see productivity decreases with AI coding. It identifies the specific behaviors that lead to AI-generated technical debt.

  • Evidence: Jeremy describes months at Answer AI where productivity “fell off a cliff” with aggressive agentic workflows. Chris shares the bug-fix example where AI made symptoms disappear while introducing worse problems.

The Tight Iteration Loop Model

A framework for development environments that maximize learning and craftsmanship.

  • Components:

    • Incremental builds: Compile only what changed, in seconds
    • Fast test execution: Run relevant tests in under 30 seconds
    • Observable state: See the current state of the system clearly
    • Immediate feedback: Know immediately if changes work or break things
  • How it Works: Fast feedback loops enable exploration and experimentation. When you can try something and see results immediately, you learn faster and build better mental models. Slow feedback forces batching, which reduces learning opportunities and encourages guessing over understanding.

  • Significance: This model explains why tooling matters for craftsmanship. It connects to historical examples (Smalltalk, Lisp, Jupyter) and shows how modern tools can support or hinder mastery.

  • Evidence: Both speakers emphasize tight iteration loops. Chris describes building this into LLVM and Mojo. Jeremy describes the notebook-based workflow that enables fast exploration.
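One way to approximate such a loop in any ecosystem is a small file-watcher that reruns a fast test command whenever a source file changes. The sketch below is illustrative only, under assumed conventions (Python sources, a pytest suite); polling like this is a stand-in for purpose-built watchers, not a recommendation over them:

```python
# Illustrative watch-and-retest loop: poll source mtimes, rerun tests on change.
import os
import subprocess
import time

def snapshot(root="."):
    """Map every .py file under root to its last-modified time."""
    mtimes = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                mtimes[path] = os.stat(path).st_mtime
    return mtimes

def changed(before, after):
    """Files added, removed, or modified between two snapshots."""
    return {p for p in before.keys() | after.keys()
            if before.get(p) != after.get(p)}

def watch(root=".", interval=0.5):
    """Rerun the (fast!) suite whenever a source file changes."""
    before = snapshot(root)
    while True:
        time.sleep(interval)
        after = snapshot(root)
        if changed(before, after):
            subprocess.run(["python", "-m", "pytest", "-q"])
            before = after
```

The loop is only useful if the suite it triggers finishes in seconds; that is the real point of the model. The architectural work of keeping builds incremental and tests fast is what makes this feedback cheap enough to run on every change.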

The Mastery Differentiation Strategy

A career development approach for thriving in an AI-saturated market.

  • Components:

    • Deep understanding: Know systems at a level others don’t
    • Architectural thinking: Design systems that can evolve
    • Tool mastery: Invest in understanding your tools deeply
    • Differentiation through depth: Be the person who can solve problems others can’t
  • How it Works: As AI tools become ubiquitous, surface-level coding skills become commoditized. The value shifts to understanding architecture, designing evolvable systems, and solving complex problems. Developers who invest in deep skills differentiate themselves from those who rely entirely on AI delegation.

  • Significance: This strategy provides a path forward for developers worried about AI replacing their jobs. It suggests that AI makes deep skills more valuable, not less.

  • Evidence: Chris describes watching career arcs over 20 years: those who push and develop mastery succeed. Both speakers emphasize that understanding architecture is becoming more valuable, not less.

💬 NOTABLE QUOTES

  1. “Architecture and craftsmanship that goes into building things is at risk. Like it’s under threat today really.” Context: Chris on the current culture around AI-generated code Significance: A direct statement of the central concern, that speed is being prioritized over sustainability.

  2. “I’m feeling this pressure to say screw craftsmanship, screw caring. You know, we hear VCs say, ‘Oh, my founders are telling me they’re getting out 10,000 lines of code today.’ Are we crazy? Are we old men yelling at the clouds?” Context: Jeremy questioning whether their concerns are outdated Significance: Captures the social pressure to abandon craft for speed, and the self-doubt that comes with resisting it.

  3. “I was convinced that in 2020 [self-driving cars] would be everywhere and would be solved… Here we are, I don’t know, eight years later. Nobody else solved that problem either.” Context: Chris reflecting on his time at Tesla in 2017 Significance: The historical parallel that should give everyone pause about current AGI predictions. Firsthand admission of being wrong.

  4. “Our productivity fell off a cliff. Our morale fell off a cliff. I was unhappy.” Context: Jeremy describing months of trying aggressive AI agentic workflows Significance: Concrete evidence from a team of expert AI practitioners that aggressive AI coding doesn’t deliver on its promises.

  5. “It made the symptom go away. So it air quote fixed the bug. But it just was so wrong that if it had been merged… it would have just made the product way worse.” Context: Chris describing an AI ‘bug fix’ at his company Significance: Perfect example of how AI can appear to work while creating technical debt.

  6. “AI is really great at writing unit tests… But there’s a problem. Because unit tests are their own potential tech debt.” Context: Chris on the dangers of AI-generated tests Significance: Counterintuitive insight that more tests (especially AI-generated ones) can make code harder to maintain.

  7. “Vibe coding… I’m waiting like a gambling machine right? Coaching and then it’s like oh it didn’t-try again, just try again.” Context: Jeremy describing passive AI usage Significance: Vivid metaphor for learned helplessness masquerading as productivity.

  8. “If everybody’s using the same tools, you need to figure out how to break past that a little bit instead of just doing the unfulfilling thing that everybody’s doing.” Context: Chris’s advice for career differentiation Significance: Practical guidance for standing out when AI commoditizes surface-level skills.

  9. “Software craftsmanship, I think is the thing that AI code threatens. Not because it’s impossible to use properly… but because it encourages folks to not take the craftsmanship and the design and the architecture seriously.” Context: Chris on the cultural impact of AI coding Significance: Clarifies that the threat isn’t AI itself, but the culture it can enable.

  10. “There’s so much of the noise of the world that’s going on that I’m completely unaware of. And guess what? That makes me happier.” Context: Chris on avoiding hype cycles Significance: Practical advice for maintaining sanity and focus in a hype-driven industry.

⚡ APPLICATIONS & HABITS

Practical Guidance

For Developers:

  • Use AI as a senior advisor, not a replacement. Ask AI to explain approaches, suggest alternatives, teach you new APIs, but maintain understanding of the final code.
  • Build tight iteration loops. Your build should take seconds, your tests under 30 seconds. If they’re slower, invest in fixing that first.
  • Review AI-generated code carefully. Ask: Does this test the right thing? Is this in the right place? Do I understand why this works?
  • Invest in mastery. Pick an area and go deep. Understand your tools, your language, your domain at a level others don’t.
  • Avoid “vibe coding.” If you find yourself waiting for AI to hopefully produce something useful, stop and understand what you’re trying to accomplish.

For Teams:

  • Maintain code review standards. AI-generated code needs the same (or more) scrutiny as human-written code.
  • Watch for technical debt. Are you accumulating code nobody understands? Are tests making refactoring harder?
  • Measure the right things. Product progress matters more than lines of code. Code that ships but nobody understands is debt, not progress.
  • Protect craftsmanship. Create space for architectural thinking and quality, even when there’s pressure to ship fast.

For Learning:

  • Use AI to accelerate learning. Ask AI to explain concepts, compare approaches, trace through code, but verify your understanding.
  • Build things from scratch sometimes. Don’t always let AI generate the boilerplate. Write it yourself to understand it.
  • Read code deeply. Whether AI-generated or human-written, take time to understand why it works, not just that it works.
  • Invest in fundamentals. Architecture, algorithms, system design: these become more valuable as surface-level coding gets automated.

Implementation Strategies

Week 1-2: Assess Your AI Usage

  • Track how you’re using AI coding tools currently
  • Notice when you understand the output vs. when you’re accepting it blindly
  • Identify areas where AI is genuinely helping vs. where it might be enabling bad habits

Week 3-4: Optimize Your Loop

  • Measure your current build and test times
  • Invest in incremental builds and selective test running
  • Set up your environment for tight feedback
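Before optimizing, measure. A small hedged helper for baselining the commands you run dozens of times a day (the build and test commands shown in comments are placeholders, not taken from the talk):

```python
# Baseline your feedback loop: time each command you run many times a day.
import subprocess
import time

def timed(cmd):
    """Run a command and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd)
    return time.perf_counter() - start

# Placeholder commands; substitute your project's real entry points.
# build_secs = timed(["make", "incremental"])
# test_secs = timed(["python", "-m", "pytest", "-q"])
# Targets from the talk: builds in seconds, tests in under 30 seconds.
```

Whatever the numbers are today, writing them down gives you a concrete target for the incremental-build and selective-test work that follows.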

Month 2: Deepen Understanding

  • Pick a component and understand it deeply: its traces, architecture, and design decisions
  • Review AI-generated code more carefully than before
  • Start asking “why” about code, not just “does it work”

Ongoing: Build Mastery

  • Regularly build things without AI assistance to maintain skills
  • Invest time in understanding your tools deeply
  • Share knowledge with others-teaching reinforces mastery

Common Pitfalls to Avoid

Blind Acceptance

  • Don’t commit AI-generated code without understanding it
  • Watch for code that makes symptoms disappear without fixing root causes
  • Be skeptical of code that "just works"; understand why it works

Chasing Hype

  • Don’t change your entire workflow because others claim 10x gains
  • Be wary of “everyone’s doing it” pressure
  • Measure your own productivity, don’t rely on others’ claims

Accumulating Technical Debt

  • More code isn't always better, AI-generated or not
  • Watch for tests that make refactoring harder
  • Be cautious of code that works but nobody understands

Neglecting Fundamentals

  • Don't let AI replace learning; use it to accelerate learning
  • Architecture and design matter more, not less, with AI tools
  • Surface-level skills are being commoditized; invest in depth

Tool Over-Reliance

  • Don’t become dependent on AI for tasks you should understand
  • Maintain ability to code without assistance
  • Remember that tools change-understanding persists

📚 REFERENCES & SOURCES CITED

  • LLVM: Chris’s PhD project from 2000, now foundational infrastructure for most modern programming languages (Rust, Julia, Swift, etc.)

  • Swift: Programming language created by Chris at Apple, now used throughout Apple’s ecosystem

  • Mojo: Programming language Chris is building at Modular, designed for AI/ML workloads with Python compatibility

  • TensorFlow: Google’s machine learning framework, which both speakers critiqued and tried to improve

  • Swift for TensorFlow: Collaborative project between Chris and Jeremy to build better AI infrastructure

  • fast.ai: Jeremy’s AI education platform, mentioned as having taught 4 million people

  • Answer AI: Jeremy’s current company, where they experimented with AI coding workflows

  • ULMFiT (Universal Language Model Fine-tuning): Jeremy's paper with Sebastian Ruder introducing the pretrain-then-fine-tune recipe that prefigured modern LLMs

  • nbdev: Jeremy’s project for production development in Jupyter notebooks, mentioned as influencing their current approach

  • Modular: Chris’s company building Mojo and AI infrastructure, recently raised $250 million

  • PyTorch: Facebook’s deep learning framework, mentioned as something Jeremy helped Chris understand

  • Tesla Autopilot: Chris’s work leading the software team in 2017, source of the self-driving parallel

  • 2017 Self-Driving Predictions: Industry-wide belief that self-driving cars would be solved by 2020, cited as parallel to current AGI hype

⚠️ QUALITY & TRUSTWORTHINESS NOTES

  • Accuracy Check: Both speakers provide specific, verifiable claims: Chris’s work on LLVM/Swift/Mojo is well-documented; Jeremy created ULMFiT, a pioneering language-model fine-tuning method; dates and company details align with public records. The self-driving timeline (2017 predictions vs. 2025 reality) is easily verifiable.

  • Bias Assessment: Both have clear perspectives: Chris is building Mojo and Jeremy is building Answer AI tools, but they’re transparent about their failures and limitations. Jeremy admits months of failed AI experiments. Chris acknowledges his 2017 self-driving predictions were wrong. No attempt to hide conflicts of interest.

  • Source Credibility: Primary sources: two of the most respected practitioners in their fields. Chris’s systems work (LLVM, Swift) speaks for itself. Jeremy’s AI research (ULMFiT, fast.ai) established many current practices. Their track records validate their perspectives.

  • Transparency: Exceptional transparency about failures, uncertainties, and what they don’t know. Both admit they can’t predict AGI timelines. Jeremy shares specific productivity numbers (10-20% gains, not 10x). Chris shares specific company examples of AI failures.

  • Potential Harm: Low risk of harm. The advice to invest in mastery and avoid blind AI delegation is protective. The main risk is discouraging AI adoption entirely, but both explicitly state they use AI tools daily and find value in them. They’re critiquing usage patterns, not the technology itself.

🎯 AUDIENCE & RECOMMENDATION

Who Should Watch:

  • Professional Software Developers: Essential viewing for anyone writing code with AI assistance. Provides crucial perspective on what works and what doesn’t.

  • Junior Engineers: Critical for career development. Provides guidance on what skills to build and what pitfalls to avoid in the AI age.

  • Engineering Managers: Important for understanding team productivity, technical debt, and how to evaluate AI coding claims.

  • Computer Science Students: Valuable for understanding how the field is changing and what skills will remain valuable.

  • Technical Founders: Relevant for company culture decisions around AI usage and what to measure.

  • AI Researchers: Useful for understanding practical limitations and the gap between research capabilities and production reality.

Who Should Skip:

  • Those Looking for AI Hype: If you want confirmation that AI will replace all programmers next year, this isn’t it. Both speakers are measured and skeptical of such claims.

  • Complete Beginners: The discussion assumes familiarity with software engineering concepts, testing practices, and development workflows.

  • Those Seeking Simple Answers: The conversation is nuanced: both use AI daily but critique how it’s being used. If you want a binary “AI good/bad” message, this isn’t it.

Optimal Viewing Strategy:

  • Speed: 1.0x-1.25x speed works well. The conversation is dense with insights and benefits from careful listening.

  • Sections to prioritize:

    • 11:00-16:00 - The self-driving parallel and AGI discussion
    • 21:00-30:00 - Concrete AI failures and productivity reality
    • 30:00-40:00 - Vibe coding vs. mastery and unit test technical debt
    • 45:00-52:00 - Career advice for junior engineers
    • 60:00-67:00 - AI as collaborative partner (ShellSage example)
  • Note-taking: Take notes on the specific failure examples, productivity numbers, and career advice. These are actionable and specific.

  • Follow-up: After watching, audit your own AI usage. Are you vibe coding? Do you understand the AI-generated code you commit? What’s your actual productivity gain?

Meta Notes: This conversation rewards careful listening and reflection. It’s not a tutorial but a philosophical discussion grounded in decades of practice. The value is in the wisdom and perspective, not specific techniques. This is a video to return to as the AI coding landscape evolves.

Crepi il lupo! (“May the wolf die!”) 🐺