Why Software Takes Time: Reset Your Delivery Expectations
Flask creator Armin Ronacher's viral post explains why software always takes longer than expected — and what founders can do right now to forecast delivery honestly and stop the blame cycle.
Your engineers aren't slow. Your timeline was wrong before the first line of code was written. That's the uncomfortable truth at the center of Armin Ronacher's recent post — the Flask creator's meditation on why software development resists compression. It earned 597 points on Hacker News not because it's new information, but because it names something practitioners feel every day and founders rarely want to hear. If you're a founder who's ever asked "why is this taking so long?" — this article is for you. You'll walk away with a cleaner mental model for forecasting delivery and a practical way to audit your next sprint before it slips.
The Physics of Software Delivery That No One Explains to Founders
Ronacher's core argument is deceptively simple: some things in software just take time, and no amount of pressure, tooling, or headcount changes that. He identifies three structural reasons delivery runs long — hidden complexity, unavoidable unknowns, and the iteration cost of getting things right.
Hidden complexity is the one that bites hardest. A feature that looks like a two-day task from the outside often sits on top of a decade of accumulated decisions, undocumented behavior, and implicit dependencies. The engineers know this. The estimate they give you already has some buffer baked in — and it's still wrong, because the full shape of the problem only becomes visible once you're inside it.
Unavoidable unknowns are different. These aren't things your team missed; they're things that genuinely couldn't be known until work began. Third-party API behavior, edge cases in real user data, infrastructure constraints that only surface under load — these aren't planning failures. They're the nature of building on top of systems you don't fully control.
Iteration cost is the one founders most often discount. Software isn't manufactured; it's discovered. The first version of anything is a hypothesis. Getting from hypothesis to working product requires cycles of build, test, observe, and revise. Compressing those cycles doesn't eliminate them — it just makes each one lower quality.
Why Startups Feel This Delivery Pain More Than Anyone
Mature engineering organizations have institutional memory around these dynamics. They've shipped enough to know that "two weeks" means "two weeks if nothing surprising happens" — and they've learned to communicate that clearly.
Startup teams are operating without that history. Founders are often setting timelines based on competitive pressure, investor commitments, or intuition about how hard something "should" be. Engineers are estimating based on incomplete specs and optimistic assumptions. The gap between those two realities is where delivery pain lives.
We're seeing this pattern consistently across startup engineering teams: the timeline conversation happens too early, with too little information, and the number that comes out of it becomes a commitment before anyone has done the work to validate it. By the time the slip is visible, the founder is frustrated, the engineers feel blamed, and trust erodes on both sides.
This is the founder time tax in its most corrosive form — not just the hours spent chasing status updates, but the relationship damage that accumulates when expectations and reality diverge repeatedly. It's the same dynamic that makes scope creep and the Iron Triangle so persistently painful for startup teams.
A Framework for Forecasting Delivery Without Hype
The goal isn't to make timelines longer. It's to make them honest. Here's a practical approach to auditing your next delivery commitment before it becomes a problem.
Step 1: Separate Discovery Work from Delivery Work
Before any estimate is meaningful, the team needs to understand what they're building. Discovery work — spec clarification, technical spikes, dependency mapping — is not the same as delivery work. Treat them as separate phases with separate timelines.
| Phase | Goal | Output | Who Owns It |
|---|---|---|---|
| Discovery | Understand the problem | Scoped spec, risk list | Tech lead + product |
| Estimation | Forecast delivery | Range with confidence level | Engineering |
| Delivery | Build and ship | Working software | Engineering |
Skipping discovery and going straight to estimation is the single most common source of timeline failure. The estimate is only as good as the understanding behind it.
Step 2: Estimate in Ranges, Not Point Commitments
A single number is a false promise. A range is an honest forecast. Ask your team to give you three numbers for any significant piece of work:
- Best case: Everything goes smoothly, no surprises
- Likely case: Normal friction, one or two small unknowns surface
- Worst case: A meaningful unknown surfaces and requires a decision
If the gap between best and worst case is more than 3x, that's a signal the work isn't scoped tightly enough yet. Don't commit to a date until that gap narrows.
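The three-point estimate and the 3x rule above can be sketched in a few lines. The best/likely/worst structure and the 3x threshold come straight from this framework; the PERT-style weighting for the expected value is a common estimation convention, not something the article prescribes, and the unit of days is an assumption for illustration.

```python
# Sketch of a three-point estimate check, assuming estimates in days.
# The 3x spread rule is the article's; the PERT weighting is a common
# convention layered on top for illustration.

def forecast(best: float, likely: float, worst: float) -> dict:
    """Turn a three-point estimate into a forecast plus a scoping signal."""
    expected = (best + 4 * likely + worst) / 6  # classic PERT weighting
    spread = worst / best                        # gap between the extremes
    return {
        "expected_days": round(expected, 1),
        "spread": round(spread, 1),
        "ready_to_commit": spread <= 3,          # the article's 3x threshold
    }

print(forecast(best=3, likely=5, worst=12))
# spread is 4.0x, so ready_to_commit is False: scope needs tightening first
```

The useful output here isn't the expected value — it's the `ready_to_commit` flag, which turns "don't commit to a date yet" from a judgment call into a checkable condition.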
Step 3: Name the Unknowns Explicitly Before the Sprint Starts
Before any sprint begins, run a five-minute exercise: ask the team to list every assumption the estimate depends on. Not risks — assumptions. Things that have to be true for the timeline to hold.
A list that looks like this is healthy:
- "Assumes the payment API supports webhook retries"
- "Assumes we don't need to migrate existing user records"
- "Assumes design is finalized before dev starts"
Each assumption is a potential unknown. When one of them turns out to be false, you have a documented reason for the slip — not a blame conversation.
Step 4: Build in Iteration Budget, Not Just Buffer
Buffer implies padding for delays. Iteration budget is different — it's explicit time allocated for the cycles of refinement that are structurally required to ship quality software. Ronacher's point about iteration cost is that it's not waste; it's the work.
A rule of thumb that holds up in practice: for any feature touching user-facing behavior, plan for at least one full revision cycle after the first working version. That cycle isn't a failure. It's the job.
This is exactly the kind of forecasting discipline that separates teams with strong technical leadership from those perpetually surprised by slippage.
The Contrarian Take: Slipping Timelines Aren't an Engineering Problem
Here's the reframe that's hard to sell but worth sitting with: most timeline slips are a scoping and communication problem, not an execution problem.
When a team misses a deadline, the instinct is to look at what went wrong in delivery. But the failure usually happened weeks earlier, in the conversation where a number was committed to before the work was understood. The engineers who built the thing often knew it was going to be hard. The signal was there — it just wasn't surfaced in a way the founder could act on.
This is why technical authority matters so much in early-stage companies. When the person closest to the code doesn't have the standing to push back on a timeline, the timeline wins — until reality overrides it. Weak technical authority doesn't just create bad estimates; it creates a culture where bad estimates can't be corrected until it's too late. The willingness to surface hard truths early is one of the most underrated traits in an engineering leader.
What Good Delivery Forecasting Looks Like
Teams that forecast well share a few observable behaviors:
- Estimates come with explicit assumptions attached, not just numbers
- Discovery is treated as a deliverable, not a tax on velocity
- Slips are diagnosed, not just reported — the team can tell you which assumption broke
- Stakeholders understand the difference between a commitment and a forecast
None of this requires a large team or a mature process. It requires a shared language around uncertainty — and a founder who's willing to hear "we don't know yet" as useful information rather than a failure.
Sprint Assumption Audit: Where to Start Before Your Next Deadline
Before your next sprint kicks off, run this audit:
**Sprint Assumption Audit**

- [ ] Is the spec complete enough to estimate against?
- [ ] Does the estimate include a range (best/likely/worst)?
- [ ] Are the key assumptions written down?
- [ ] Is there explicit time for iteration, not just buffer?
- [ ] Does the team have standing to flag scope changes mid-sprint?
If you can't check all five boxes, you're not ready to commit to a date. That's not a failure — that's the work of forecasting honestly.
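For teams that like to automate their rituals, the five-box rule reduces to a single check. The questions are the audit's; the function is a sketch, not a prescribed tool.

```python
# Minimal encoding of the five-point audit, assuming yes/no answers in order.
# The questions are the article's; the function is illustrative only.

AUDIT_QUESTIONS = [
    "Is the spec complete enough to estimate against?",
    "Does the estimate include a range (best/likely/worst)?",
    "Are the key assumptions written down?",
    "Is there explicit time for iteration, not just buffer?",
    "Does the team have standing to flag scope changes mid-sprint?",
]

def ready_to_commit(answers: list[bool]) -> bool:
    """Per the audit: commit to a date only when every box is checked."""
    return len(answers) == len(AUDIT_QUESTIONS) and all(answers)

print(ready_to_commit([True, True, True, True, False]))  # False: not ready
```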
Ronacher's post resonates because it gives engineers language to explain something they've always known. The opportunity for founders is to hear it not as an excuse, but as a model. Software takes time because discovery, iteration, and complexity are structural — not because your team is underperforming.
If your engineering delivery still feels like a black box — where timelines appear, slip, and get explained after the fact — that's a structural problem, not a personnel one. It's the kind of thing that 10ex's embedded technical leadership model is built to address: bringing the forecasting discipline, stakeholder communication, and technical authority that turns engineering from a source of anxiety into a predictable part of your business.