· 5 min read · Selling Software

How AI Completely Changed Software Estimation

AI has fundamentally disrupted how we estimate development effort. Here's what that means for agencies and developers.

Tags: AI, estimation, project-management, software-development

It’s always been extremely difficult to estimate the effort required to build software.

And AI just turned everything we thought we knew on its head.

The Old Way of Estimating

For decades, software estimation followed a predictable pattern:

Step 1: Break down the project into tasks
Step 2: Estimate each task based on similar past work
Step 3: Add buffer for unknowns (usually 20-50%)
Step 4: Multiply by your team’s velocity
Step 5: Add more buffer because you know you’re wrong

The result? Estimates that were still wrong, but at least consistently wrong.

A “simple” feature might take:

  • 2 hours for the happy path
  • 4 hours for edge cases
  • 3 hours for testing
  • 2 hours for code review
  • 1 hour for deployment

Total: 12 hours

And that’s if nothing goes wrong. Which it always does.
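The traditional bottom-up process above is mechanical enough to sketch in a few lines. This is an illustrative toy, not a real tool: the task names and hours mirror the "simple" feature breakdown, and the 30% buffer is just one point in the usual 20-50% range.

```python
# Traditional bottom-up estimate: sum the task estimates, then pad
# for unknowns. Numbers mirror the "simple" feature example above;
# the 30% buffer is an illustrative pick from the 20-50% range.

tasks = {
    "happy path": 2,
    "edge cases": 4,
    "testing": 3,
    "code review": 2,
    "deployment": 1,
}

def estimate_hours(tasks: dict, buffer: float = 0.30) -> float:
    """Sum the per-task estimates and add a buffer for unknowns."""
    return sum(tasks.values()) * (1 + buffer)

raw = sum(tasks.values())       # 12 hours, matching the total above
padded = estimate_hours(tasks)  # 15.6 hours once the buffer lands
print(f"raw: {raw}h, padded: {padded:.1f}h")
```

Note what the buffer does: it doesn't make the estimate right, it makes it wrong by a more predictable amount, which is exactly the "consistently wrong" property the old process optimized for.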

What AI Changed

AI didn’t just make coding faster. It completely disrupted the relationship between complexity and time.

Before AI:

  • Simple features = predictable time
  • Complex features = exponentially more time
  • Novel features = ¯\_(ツ)_/¯

With AI:

  • Simple features = almost instant
  • Complex features = still take time, but way less
  • Novel features = surprisingly feasible

The curve flattened dramatically.

The New Estimation Challenge

Here’s the problem: Your old estimation framework is now completely broken.

Example: Building a REST API endpoint

2022 Estimate:

  • Route handler: 30 min
  • Database query: 45 min
  • Input validation: 30 min
  • Error handling: 30 min
  • Tests: 1 hour
  • Documentation: 20 min

Total: 3.5 hours

2024 with AI:

  • Describe the endpoint in natural language: 2 min
  • AI generates everything: 1 min
  • Review and adjust: 15 min
  • Run tests: 2 min

Total: 20 minutes

That’s a 10x reduction.

But not for everything.

The AI Paradox

Here’s where it gets weird.

AI is incredible at:

  • Boilerplate code
  • Standard patterns
  • Well-documented frameworks
  • Clear requirements
  • Isolated components

AI still struggles with:

  • Novel architecture decisions
  • Debugging complex integration issues
  • Understanding business context
  • Handling ambiguous requirements
  • Making strategic trade-offs

So now you have features that are 10x faster right next to features that are only 2x faster.

How do you estimate that?

The New Estimation Framework

After a year of building with AI tools, here’s what actually works:

1. Split Features Differently

Old split: Frontend vs Backend vs Database vs Testing

New split:

  • AI-Friendly: Clear requirements, standard patterns (10x faster)
  • AI-Assisted: Complex but decomposable (3-5x faster)
  • AI-Adjacent: Requires human judgment (1.5-2x faster)
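One way to put that three-way split into practice is a rough range calculator. A minimal sketch, assuming the category multipliers from the list above (using midpoints for the ranges); the task list and hours are invented for illustration:

```python
# Rough AI-era estimates from the three-way split above.
# Speedup multipliers come from the categories in the article
# (midpoints for the ranged ones); the tasks are hypothetical.

SPEEDUP = {
    "ai_friendly": 10.0,   # clear requirements, standard patterns
    "ai_assisted": 4.0,    # complex but decomposable (3-5x, midpoint)
    "ai_adjacent": 1.75,   # requires human judgment (1.5-2x, midpoint)
}

def ai_era_estimate(pre_ai_hours: float, category: str) -> float:
    """Divide a pre-AI estimate by the category's speedup multiplier."""
    return pre_ai_hours / SPEEDUP[category]

tasks = [
    ("CRUD endpoints", 8.0, "ai_friendly"),
    ("billing rules engine", 20.0, "ai_assisted"),
    ("architecture spike", 6.0, "ai_adjacent"),
]

for name, hours, category in tasks:
    print(f"{name}: {hours}h -> {ai_era_estimate(hours, category):.1f}h")

total = sum(ai_era_estimate(h, c) for _, h, c in tasks)
print(f"total: {total:.1f}h")
```

The interesting output isn't the total; it's how lopsided the per-task numbers get, which is exactly why a single project-wide multiplier no longer works.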

2. Estimate AI-Specific Overhead

AI doesn’t eliminate work; it shifts it:

  • Prompt engineering time: Getting AI to understand requirements
  • Review time: Catching AI mistakes (which are often subtle)
  • Context management: Keeping AI on track with architecture
  • Cleanup time: Refactoring AI-generated code to match standards

This overhead is real and easily underestimated.

3. Account for the “Almost Done” Trap

AI gets you to 80% incredibly fast. The last 20% still takes time.

Old estimation: Linear progress
New estimation: Rapid progress, then normal pace

4. Track Your Own AI Multiplier

My team’s actual multipliers after 12 months:

  • CRUD APIs: 8x faster
  • UI components: 6x faster
  • Complex business logic: 2.5x faster
  • Integration work: 2x faster
  • Architecture decisions: 1.2x faster
  • Debugging production issues: 1.5x faster

Your multipliers will be different. Track them.
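Tracking those multipliers doesn’t need tooling: a log of (work type, old-style estimate, actual time) per completed item is enough. A minimal sketch of how we might compute them; all numbers here are invented:

```python
from collections import defaultdict
from statistics import mean

# Log of completed items: (work_type, old_style_estimate_h, actual_h).
# All numbers are invented for illustration.
log = [
    ("crud_api", 4.0, 0.5),
    ("crud_api", 6.0, 0.75),
    ("ui_component", 3.0, 0.5),
    ("integration", 10.0, 5.0),
]

def multipliers(log):
    """Average speedup (old estimate / actual time) per work type."""
    by_type = defaultdict(list)
    for work_type, estimated, actual in log:
        by_type[work_type].append(estimated / actual)
    return {t: mean(ratios) for t, ratios in by_type.items()}

for work_type, m in multipliers(log).items():
    print(f"{work_type}: {m:.1f}x")
```

A spreadsheet does the same job; the point is to measure your own team rather than borrow someone else’s numbers.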

What This Means for Agencies

The Good News:

You can deliver faster, take on more projects, and improve margins.

The Bad News:

Your competitors can too. And clients now expect AI-level speed.

The Weird News:

Project scopes are becoming more ambitious. Clients think “if AI makes it 10x faster, we can do 10x more features for the same price.”

They’re not entirely wrong.

The Real Answer

So how do you estimate effort in the AI era?

Honestly? You don’t.

Not precisely, anyway.

What works better:

  1. Time-box instead of estimate - “We’ll spend 2 weeks building features, prioritized by value”
  2. Iterate faster - Weekly demos instead of monthly milestones
  3. Build flexibility into contracts - Fixed weekly rate, flexible scope
  4. Over-communicate uncertainty - “This might take 2 days or 2 weeks, depending on how AI-friendly it is”

The Uncomfortable Truth

AI made software estimation harder, not easier.

Because now the answer to “how long will this take” is:

“I genuinely don’t know - somewhere between 2 hours and 2 weeks, and I won’t know which until I start building.”

That’s terrifying for agencies that sell fixed-price projects.

But it’s the new reality.

The agencies that figure out how to sell value instead of hours, outcomes instead of timelines, and partnerships instead of projects - those agencies will win.

The rest will still be trying to estimate effort while their competitors are already shipping.


How is your agency handling AI-era estimation? I’d love to hear what’s working (and what’s not) for you.