Software Consulting

How to Scope a Software Project Without Getting Burned

Bad scoping is the root cause of most software project failures. Here's a practical guide to the discovery process, writing a spec that protects both sides, and avoiding the traps that blow up timelines and budgets.


Justin Hamilton

Founder & Principal Engineer

Tags: project management, software scoping, requirements discovery, consulting

Most software projects that go sideways weren’t ruined by bad code. They were ruined by bad scoping. The requirements were unclear, the edge cases weren’t discussed, the assumptions weren’t written down, and both sides meant something different by “done.”

This is fixable. Here’s how to scope a software project in a way that protects both sides and dramatically increases the probability of a good outcome.

Why Scoping Fails

The most common scoping failure mode: the client describes what they want in broad strokes, the developer estimates based on their interpretation, and neither party realizes they had different pictures in their heads until the first demo.

The second most common failure mode: scope creep without renegotiation. “While we’re at it, can we also add…” happens on every project. Without a documented baseline to reference, there’s no defensible way to say “that wasn’t in scope.”

The third: technical assumptions that aren’t surfaced. The client assumes the integration with their accounting software will be straightforward. The developer discovers it’s a custom API with no documentation, written in 2009, that changes behavior unpredictably. The estimate is meaningless.

Good scoping prevents all three.

The Discovery Phase

Before any estimate is meaningful, you need a discovery phase. This is typically a paid engagement (2-4 weeks for most business applications) where you dig into the actual requirements before committing to a full build.

Discovery includes:

Stakeholder interviews. Talk to everyone who will use or be affected by the software — not just the person writing the check. The warehouse manager who will use the inventory system has different requirements than the CFO who requested it. Gaps between stakeholder views surface here.

Process mapping. Walk through the current process step by step. What triggers the process? What data exists at each step? What decisions are made? What happens with exceptions? The exceptions — the edge cases — are where software projects die. Surface them early.

Data inventory. What data exists in the current systems? What format is it in? Who owns it? What will need to migrate to the new system? What integrations are required, and have those APIs been examined?

Constraint identification. Security and compliance requirements. Performance expectations. Browser and device support. Integration requirements. Infrastructure constraints. These affect estimates and architecture significantly and need to be documented before work begins.

Success criteria. What does a successful outcome look like, in specific, measurable terms? “The system should be faster” is not a success criterion. “Invoice processing time should decrease from 4 hours to under 30 minutes” is.

Writing a Spec That Protects Both Sides

A good specification document doesn’t have to be long, but it has to be precise about the things that matter.

User stories with acceptance criteria. Each feature described as a user story (“As an accounts payable manager, I can approve invoices over $10,000 with a second-level sign-off”) with specific acceptance criteria (the exact conditions that must be true for the story to be considered complete).
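One useful property of well-written acceptance criteria is that they can be restated as executable checks. Here is a minimal sketch of the sign-off rule from the example story above; the `requires_second_signoff` function and its threshold are hypothetical, invented purely for illustration:

```python
def requires_second_signoff(invoice_amount: float, threshold: float = 10_000) -> bool:
    """Hypothetical rule: invoices over the threshold need a second-level sign-off."""
    return invoice_amount > threshold

# The acceptance criteria, stated as assertions a test suite could run:
assert requires_second_signoff(10_000.01) is True
assert requires_second_signoff(10_000) is False   # exactly at threshold: no sign-off needed
assert requires_second_signoff(500) is False
```

Writing criteria this precisely forces the edge cases (is exactly $10,000 “over”?) to be decided in the spec rather than discovered in a dispute later.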

Out-of-scope documentation. Explicitly list what is NOT included. This is as important as what is included. “Mobile app is out of scope.” “Data migration from legacy system is out of scope.” “Email notifications are out of scope.” If it’s not listed out of scope, someone will assume it’s in scope.

Integration specifications. For every external system the software needs to connect to, document the integration method (API, file exchange, database connection), the data that flows, the frequency, and who is responsible for the integration credentials and test data.
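Even a simple structured record keeps these questions from staying implicit. A sketch of one integration captured as data, where every field name and value is an invented example rather than a real system:

```python
# Hypothetical integration spec for one external system.
# Fields mirror the checklist above: method, data flows, frequency, ownership.
integration_spec = {
    "system": "accounting software",
    "method": "API",                   # API | file exchange | database connection
    "data_flows": [
        "approved invoices -> accounting",
        "payment status -> application",
    ],
    "frequency": "hourly",
    "credentials_owner": "client IT",       # who provides API keys
    "test_data_owner": "client finance",    # who supplies sandbox/test records
}
```

If any field is blank when the spec is signed, that is an open question the estimate should flag.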

Data model definitions. For business applications, document the core data entities and their relationships before the first line of code is written. What is a “customer” in this system? What attributes does an order have? Disagreements about data model are much cheaper to resolve in a document than in code.
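A data model definition does not need a database to be useful; a few typed entity sketches in the spec are enough to argue about. The entities and attributes below are hypothetical examples of the kind of thing to pin down, not a real project's model:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Customer:
    customer_id: int
    name: str
    billing_email: str

@dataclass
class OrderLine:
    sku: str
    quantity: int
    unit_price_cents: int   # store money as integer cents, not floats

@dataclass
class Order:
    order_id: int
    customer: Customer      # each order belongs to exactly one customer
    placed_on: date
    lines: list[OrderLine] = field(default_factory=list)

    def total_cents(self) -> int:
        return sum(l.quantity * l.unit_price_cents for l in self.lines)
```

Questions like “can an order exist without a customer?” or “is price stored per line or per order?” are answered here, in a reviewable page of text, instead of in a schema migration.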

Non-functional requirements. Performance expectations (response time under X ms for Y concurrent users). Browser support. Accessibility requirements. Data retention and backup requirements. Security requirements.

Estimating From a Spec

With a solid spec, estimates become meaningfully more reliable. Without one, any estimate is a guess dressed up in a spreadsheet.

Good estimates:

  • Break work into small, discrete tasks (no single estimate should be more than 2 weeks)
  • Include buffer for integration work (integrations almost always take longer than expected)
  • Explicitly call out high-uncertainty areas (anything involving an undocumented API, legacy system, or third-party service)
  • Separate development from testing, deployment, and documentation

A range estimate (not a single number) is more honest. “This will take 8-12 weeks” is more useful than “this will take 9 weeks” when the spec has open questions.
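The arithmetic behind a range estimate is simple: give each task its own low/high in days, widen the spread on high-uncertainty items, and sum each bound. The task names and numbers below are invented for illustration:

```python
# Per-task (low, high) estimates in working days.
tasks = {
    "auth and user management": (5, 8),
    "invoice approval workflow": (8, 12),
    "accounting integration":   (5, 15),  # undocumented API: deliberately wide
    "testing and deployment":   (4, 6),
}

low = sum(lo for lo, hi in tasks.values())
high = sum(hi for lo, hi in tasks.values())
print(f"Estimate: {low}-{high} days")  # a range, not a single number
```

The width of the total range is itself information: if the high bound is double the low bound, the spec still has open questions worth resolving before committing.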

The Change Management Process

Your contract and your working relationship need a clear process for scope changes. A simple one:

  1. Change request is submitted in writing (email is fine)
  2. Developer estimates the impact (timeline and cost)
  3. Client approves or declines in writing
  4. Work proceeds only after approval

This doesn’t need to be adversarial. Most scope changes are legitimate — requirements evolve, clients learn things during development. The goal isn’t to prevent changes, it’s to make changes explicit and price them appropriately.

Red Flags in a Vendor or Client

Red flags from a vendor:

  • Rapid-fire estimates without discovery questions
  • Reluctance to put requirements in writing (“don’t worry, we’ll figure it out as we go”)
  • No discussion of out-of-scope items
  • An estimate with no range and no acknowledgment of uncertainty

Red flags from a client:

  • “We’ll know it when we see it” as the requirements definition
  • Resistance to a paid discovery phase
  • “It should only take a couple days” before seeing any requirements
  • No identified decision-maker for scope questions during development

Both sides have responsibilities here. Good software projects are collaborations, not transactions.

We run a discovery process on every significant engagement before providing a final estimate. It protects the client’s money and our team’s time. If you have a project you’re trying to scope, let’s start with discovery.

Let's Build Something Together

Hamilton Development Company builds custom software for businesses ready to stop fitting themselves into someone else's box. $500/mo retainer or $125/hr — no surprises.

Schedule a Consultation