Matthew Boston

Don't Just Build with AI — Learn Through It

April 14, 2026

The Default Mode Is Wrong

The typical workflow with an AI coding agent goes like this: describe the feature, let the agent generate code, review the output, ship it. It works. But it skips the most valuable step – understanding why the codebase looks the way it does.

When you’re onboarding to a new codebase, the temptation is to let the agent handle everything you don’t understand yet. Need to add a feature in an unfamiliar module? Let the agent figure out the patterns. Don’t know the naming conventions? The agent will match what’s already there. This gets you to a working PR, but it doesn’t get you to understanding.

And without understanding, you’re going fast in the wrong direction.

Pair Programming with Infinite Patience

Here’s what I do instead: I use the agent as an interrogation partner. Before asking it to write anything, I ask it to explain things.

  • Why is this module structured this way?
  • What design pattern is this service using, and what problem does it solve?
  • Why does this test mock this dependency but not that one?
  • What would break if I changed this interface?

No human pair partner has the patience for this many questions. An AI agent does. It’ll explain the same concept ten different ways without sighing. It’ll trace a function call through six files without losing track. It’ll compare two approaches and explain the tradeoffs of each without checking the clock.

This is pair programming without the social cost of asking “dumb” questions.

Ask for Architecture, Not Just Implementations

The questions that build real understanding aren’t about syntax or APIs. They’re about decisions. Every codebase is a fossil record of choices – some deliberate, some accidental, some inherited from a framework the team adopted three years ago. Understanding those choices is what separates someone who can modify the code from someone who truly knows the system.

Ask the agent to explain the architecture. Ask why the team chose this database over that one. Ask what the test strategy is and whether it’s consistent. Ask about the error handling patterns and whether they match across services.

This is the research phase applied to your own learning – and it compounds just as fast. Every question builds context. Every answer connects to the next question. Within a few sessions, you’ll have a mental model of the system that would have taken weeks to build through code reading alone.

The Learning Compounds

There’s a second-order effect here. Once you understand the codebase deeply, you become a better collaborator with the agent. You write better prompts because you know the vocabulary. You catch mistakes faster because you know the patterns. You capture what you learn in SKILL.md and CLAUDE.md files, which makes the agent smarter in future sessions.
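For example, a few entries captured in a CLAUDE.md file after an interrogation session might look like this. The module names and conventions below are invented for illustration; the point is the shape, not the specifics:

```markdown
# CLAUDE.md (onboarding notes; example entries, names are hypothetical)

## Architecture
- `billing/` talks to the payment provider only through `PaymentGateway`;
  never call the SDK directly.
- Services communicate via events, not direct imports (see `events/bus.py`).

## Conventions
- Tests mock external services but never mock our own repositories.
- Errors bubble up as `DomainError` subclasses; handlers translate them
  at the API edge.
```

Entries like these turn one-off answers into persistent context the agent picks up in every future session, so you stop re-explaining the same decisions.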

Understanding begets better tooling begets faster understanding. It’s a flywheel – but only if you invest in the learning side, not just the building side.

Understanding Is the Job

The engineers who get the most from AI aren’t the ones who delegate the most. They’re the ones who learn the most. Every question you ask the agent is an investment in your own judgment – the one thing AI can’t replace.

Don’t just use AI to write code you don’t understand. Use it to build understanding you couldn’t get any other way.