
Forest Thinning: Simplify Code Before AI Buries You

Kent Beck's forest thinning metaphor is the antidote to AI-generated code sprawl. Here's how startup founders and technical leads apply it without endless refactors or risky rewrites.

The fastest way to slow down your engineering team is to let the codebase grow denser than anyone can navigate — and AI coding tools are about to make that problem catastrophic. Kent Beck's recent Forest Thinning post on his Tidy First Substack puts a name to something technical leaders feel but rarely act on systematically: codebases, like forests, need periodic thinning to stay healthy. Not a clearcut. Not a rewrite. Thinning. This article is for founders and technical leads who are watching AI agents dump raw, untested code into already-overgrown repos and wondering why velocity keeps dropping even as output appears to go up.

Why Overgrown Codebases Kill Velocity Before You Notice

The density problem is insidious. A codebase doesn't feel slow until it suddenly does. Features that used to take a sprint now take three. PRs sit in review because no one is confident about blast radius. Onboarding a new engineer takes months instead of weeks.

The underlying cause is almost always the same: the codebase has more structure than anyone can hold in their head at once. Dead code paths, redundant abstractions, modules that were split for reasons no one remembers, and config files that contradict each other. Every new feature gets threaded through this thicket, adding more knots.

AI coding tools accelerate this dynamic in a specific and dangerous way. They generate syntactically correct, locally coherent code that is globally incoherent. An AI agent doesn't know that you already have three ways to handle authentication, or that the service it's extending was supposed to be deprecated last quarter. It adds a fourth path, confidently, and the PR looks fine on the surface.

The pattern we're seeing across teams: AI-assisted development increases raw code output significantly — while simultaneously increasing the rate at which that code needs to be revisited. The forest gets denser faster. For a deeper look at why AI tools produce structurally weak code, see Why LLMs Write Bad Code (And How to Fix It).

What Beck Actually Means by Forest Thinning

Beck's metaphor is precise and worth unpacking. In forestry, thinning isn't about removing the biggest trees or the weakest ones indiscriminately. It's about removing enough density that the remaining trees get more light, more water, more room to grow strong. The goal is a healthier forest, not a smaller one.

Applied to code, this means:

  • Remove code that competes with itself — duplicate logic, parallel implementations, feature flags for features that shipped two years ago
  • Clarify boundaries that have blurred — modules that have grown tentacles into each other's internals
  • Delete the scaffolding — temporary abstractions that became permanent, workarounds that outlived the bugs they worked around
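
As a concrete illustration of the first bullet, here is what thinning a fully rolled-out feature flag can look like. This is a hypothetical sketch; the names (`checkout_before`, `new_checkout`, the flag key) are invented for the example:

```python
# Hypothetical example: thinning a feature flag that shipped to 100% long ago.
# All names here are illustrative, not from any real codebase.

def new_checkout(cart):
    return {"items": cart, "version": 2}

def old_checkout(cart):
    return {"items": cart, "version": 1}

# Before thinning: every reader (human or AI) must learn this flag is always on.
def checkout_before(cart, flags):
    if flags.get("new_checkout_enabled", False):  # rolled out to 100% two years ago
        return new_checkout(cart)
    return old_checkout(cart)

# After thinning: the flag lookup and the dead branch are simply gone.
def checkout_after(cart):
    return new_checkout(cart)
```

Note that the thinning commit deletes both the dead branch and the flag lookup, so there is one less thing for the next reader, or the next AI agent, to reason about.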

The contrarian insight here is that thinning is not the same as refactoring, and conflating them is why most teams never do either. Refactoring implies restructuring — moving things around, changing how they fit together. Thinning is subtraction. It's asking: what can we remove without changing behavior? That's a much smaller, safer, faster operation.

This distinction also matters for how you talk about the work with non-technical stakeholders. The same principle applies to Rob Pike's rules on simplicity — complexity accumulates when teams add rather than subtract.

A Practical Thinning Playbook for Startup Teams

This is not a one-time project. It's a practice embedded in your delivery cadence. Here's how to operationalize it.

Step 1: Audit for Density, Not Just Debt

Most teams do tech debt audits that produce a list of things to fix. That's the wrong frame. Instead, audit for density signals — places where the codebase is harder to navigate than the underlying problem warrants.

Density signals to look for:

Signal                   | What It Looks Like                          | Why It Matters
-------------------------|---------------------------------------------|-------------------------------------------
Parallel implementations | Two functions that do nearly the same thing | AI will use both, inconsistently
Dead feature flags       | Flags for features that are 100% rolled out | Every new dev has to learn to ignore them
Orphaned abstractions    | Base classes with one subclass              | Adds indirection with no payoff
Stale comments           | Comments that contradict the code           | Actively misleads AI and humans
Redundant config         | Same value set in three places              | Creates merge conflicts and confusion

Run this audit in a single working session. Don't try to fix anything yet. Just map the density.
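
Parts of the audit can be scripted. The sketch below checks one density signal, redundant config, assuming config sources can be loaded as flat key/value dicts; the file names and keys are invented for the example:

```python
# Minimal density-audit sketch for the "redundant config" signal.
# Assumption: each config source loads to a flat key/value dict; in practice
# you would parse your real .env / YAML files into this shape first.

from collections import defaultdict

def find_redundant_keys(configs):
    """Return keys defined in more than one config source.

    `configs` maps source name -> dict of key/value pairs.
    """
    locations = defaultdict(list)
    for source, values in configs.items():
        for key in values:
            locations[key].append(source)
    return {key: sources for key, sources in locations.items() if len(sources) > 1}

configs = {
    "app.env":     {"API_TIMEOUT": "30", "RETRIES": "3"},
    "deploy.yaml": {"API_TIMEOUT": "45"},               # contradicts app.env
    "ci.env":      {"API_TIMEOUT": "30", "DEBUG": "0"},
}
print(find_redundant_keys(configs))
# → {'API_TIMEOUT': ['app.env', 'deploy.yaml', 'ci.env']}
```

Even a crude script like this turns "we have config drift" from a feeling into a concrete list you can put in front of the team.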

Step 2: Thin on the Way Through, Not in a Separate Sprint

The biggest failure mode for thinning initiatives is scheduling them as a separate project. They get deprioritized, scoped up, turned into rewrites, and abandoned.

The practice that works: thin on the way through. When a developer touches a file to implement a feature, they have a standing mandate to remove one piece of density from that file before they open the PR. Not refactor it. Not redesign it. Remove one thing that shouldn't be there.

This is Beck's core insight applied operationally. The forest doesn't get thinned in a single weekend. It gets thinned continuously, by the people already walking through it.

The rule of thumb we're seeing work across teams: the thinning commit should be smaller than the feature commit, and it should be a separate commit. This keeps it reviewable, keeps it safe, and keeps it from becoming a refactor.

# Good: two commits, clearly separated
git log --oneline
a3f9c12 feat: add webhook retry logic
7b2e441 tidy: remove dead payment_v1 handler (unused since 2022)

# Bad: one commit that mixes both
git log --oneline
d1a8b33 feat: add webhook retry + cleanup payment module

Step 3: Build an AI-Specific Thinning Gate

If your team is using AI coding tools — Copilot, Cursor, or any agent-based workflow — you need an additional gate that standard code review doesn't provide.

AI-generated code has a specific density failure mode: it introduces new patterns without knowing about existing ones. The reviewer needs to ask a question that isn't on most PR checklists:

Does this code introduce a pattern that already exists elsewhere in the codebase?

If yes, that's not a merge blocker — but it is a thinning trigger. Before the PR merges, the author should either use the existing pattern or document why a new one is warranted. This single gate prevents the most common AI-assisted density accumulation.
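
This gate can be partially automated. A rough sketch, under the assumption that function names approximate patterns (real tooling would compare ASTs or call sites instead), uses the standard library's difflib to flag newly added names that resemble existing ones:

```python
# Rough sketch of the pattern-duplication gate.
# Assumption: similar function names hint at duplicated patterns; this is a
# heuristic, not a substitute for a reviewer who knows the codebase.

import difflib

def near_duplicates(new_names, existing_names, cutoff=0.6):
    """For each newly added name, list existing names it closely resembles."""
    hits = {}
    for name in new_names:
        matches = difflib.get_close_matches(name, existing_names, n=3, cutoff=cutoff)
        if matches:
            hits[name] = matches
    return hits

existing = ["authenticate_user", "verify_token", "refresh_session"]
added_in_pr = ["auth_user_v2", "send_webhook"]
print(near_duplicates(added_in_pr, existing))
```

A check like this can run in CI and post a non-blocking comment on the PR, which matches the spirit of the gate: a thinning trigger, not a merge blocker.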

For teams running agent-based workflows, sandboxing AI agents is a complementary control that limits how much damage a runaway agent can do before review.

This is exactly the kind of structural gate our engineering delivery model helps teams build into their review process — before density becomes a velocity crisis.

Step 4: Make Thinning Visible in Your Delivery Metrics

What doesn't get measured doesn't happen. Most delivery dashboards track features shipped, bugs closed, and cycle time. None of them track density reduction.

A simple proxy metric: track the ratio of lines deleted to lines added per sprint. A healthy codebase in active development should trend toward a ratio above 0.3 — for every 10 lines added, at least 3 are removed. Teams buried in density often run ratios below 0.1 for months at a time.

This isn't a hard rule. It's a signal. When the ratio drops and stays low, it's a leading indicator that the forest is getting denser and velocity is about to suffer.
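
Computing the proxy metric is straightforward. A minimal sketch, assuming the input is the text produced by `git log --numstat --format=` over the sprint's commit range; the sample file names are invented:

```python
# Sketch of the deleted-to-added ratio metric.
# Assumption: input is the output of `git log --numstat --format=` for the
# commit range you care about (e.g. the sprint's merge window).

def deletion_ratio(numstat_output):
    """Compute lines_deleted / lines_added from `git log --numstat` text."""
    added = deleted = 0
    for line in numstat_output.strip().splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or parts[0] == "-":  # "-\t-" marks binary files
            continue
        added += int(parts[0])
        deleted += int(parts[1])
    return deleted / added if added else 0.0

sample = "120\t15\tsrc/webhooks.py\n3\t48\tsrc/payments/v1_handler.py\n"
print(round(deletion_ratio(sample), 2))  # → 0.51
```

Wired into a weekly cron job or a dashboard, this gives you the trend line; the absolute number matters less than whether it is drifting toward zero.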

How to Know If Your Team Is Ready to Thin

Use this scorecard to assess where your team stands today:

  • We can identify the five densest files in the codebase right now
  • Developers have explicit permission to delete code without a feature ticket
  • Our PR template asks whether new patterns duplicate existing ones
  • We track lines deleted as a delivery metric
  • AI-generated code goes through a pattern-duplication check before merge
  • We have a list of dead feature flags with owners and removal dates

If you checked fewer than three of these, your codebase is accumulating density faster than you're removing it — and AI tools will accelerate that gap.

Why the Rewrite Trap Is the Expensive Alternative

Teams that don't thin eventually reach a threshold where the codebase feels irredeemable. The conversation shifts to rewrites. Rewrites are expensive, risky, and almost always take longer than estimated. They also tend to recreate the original density within a year or two if the underlying practice hasn't changed.

Forest thinning is the alternative to the rewrite trap. It's not glamorous. It doesn't show up in a product demo. But it's the difference between a team that ships predictably at month 18 and a team that's paralyzed by its own history.

The teams that get this right treat thinning as a first-class engineering practice — not a cleanup task, not a tech debt sprint, but a continuous habit embedded in how code moves from idea to production. That's the kind of engineering culture that compounds over time.

If your codebase is already dense and your team is starting to feel it in cycle times and merge conflicts, the place to start is the audit in Step 1 — not a rewrite proposal. Map the density first. Then thin on the way through.

For teams where the density problem has already started affecting delivery predictability, this is exactly the kind of structural work we address through 10ex's embedded engineering model. The goal isn't to clean up the codebase once — it's to build the practices that keep it navigable as the product and the team grow.
