
Key takeaways
• Software gets built in seven phases: discovery → planning → design → architecture → development in sprints → QA → release & support. Each phase has its own failure mode; skipping any of them is the surest way to burn budget.
• Most projects fail from the same two causes. Standish CHAOS: 31% succeed, 50% are challenged, 19% fail outright. Root causes: unclear requirements and weak stakeholder involvement. A proper discovery phase cuts downstream cost by up to 30%.
• Agile wins by default in 2026. 75% of US teams are Agile, 87% of those use Scrum, 56% use Kanban for ops/support, and 31.5% run a hybrid. Agile projects succeed 70% of the time vs Waterfall’s 50% — the gap is real.
• AI-accelerated engineering is the new baseline. McKinsey/GitHub see 55% faster task completion and 30% faster time-to-market. DORA 2025 confirms the speedup but warns it raises change-failure rates unless QA and observability keep pace.
• Your job as a non-technical founder is oversight, not coding. Attend sprint demos, read release notes, track the four DORA metrics, and protect the backlog from scope creep. That is 90% of what good product owners actually do.
Why Fora Soft wrote this playbook
Since 2005 we have shipped 625+ software products — from one-person indie apps to multi-tenant SaaS platforms like BrainCert ($10M ARR) and live-streaming platforms like TradeCaster (46K+ users). Our teams run two-week sprints, deploy every few days, and pair senior engineers with AI coding tools that have compressed typical project timelines 20–35% versus 2022.
Most founders we meet have never watched a software product being built end-to-end. That is not a problem in itself — but it becomes one if the agency you hire quietly skips phases, hides decisions, or calls a half-built prototype an “MVP.” This playbook is the honest version of how modern software gets made so you can tell the difference between a disciplined process and a plausible one.
We start at the top of the pyramid: the full seven-phase pipeline in one glance, then how we adapt it by project size, then the role breakdown, then the metrics that matter. By the end you will know exactly what to ask your engineering partner at every checkpoint.
Want to see this process on your product?
Book a 30-minute call and we will walk through the exact plan we would run — phase by phase — from today to launch.
The seven phases of software development in one table
Every competent agency runs this same pipeline. The labels differ — “inception” vs “discovery,” “build” vs “development” — but the phases do not.
| Phase | Typical duration | Deliverables | How it fails |
|---|---|---|---|
| 1. Discovery / scoping | 1–4 weeks | Requirements doc, feature list, priority grid, rough estimate | Fuzzy specs → rework at up to 150× cost (Boehm) |
| 2. Planning & team formation | 1–2 weeks | Gantt / sprint plan, staffed team, kickoff ritual | Wrong team skills → slow delivery throughout |
| 3. Design (UX/UI) | 2–4 weeks (overlaps) | Clickable prototype, design system, accessibility review | Late handoff to devs → rebuild cost |
| 4. Architecture | 1–2 weeks | Tech stack, API contract, DB schema, non-functional requirements | Wrong stack choice → 6–12 months of drag |
| 5. Development (sprints) | 4–12 weeks (2-week sprints) | Working code, unit tests, CI passes, demo every sprint | Tech debt → brittle product 12 months in |
| 6. QA & testing | Continuous + 1–2 week hardening | Test plan, automation suite, perf report, bug log | Late QA → defect escapes in production |
| 7. Release & support | 1–2 weeks + ongoing | Runbook, rollback plan, monitoring, SLA, hotfix pipeline | No monitoring → silent failures, customer churn |
For a fast-scoped MVP these phases telescope into 3–5 calendar months. For an enterprise platform with compliance overhead they stretch to 9–14 months. During planning we allocate each phase its share of the schedule; the split is rarely even.
Phase 1 — Discovery: turning a napkin into a plan
Discovery is the cheapest risk reduction you can buy. It is where "we think users need X" becomes "we measured that these five user flows matter and here is the priced scope." Standish CHAOS has said the same thing for 30 years: clear requirements are the #1 success factor.
Fora Soft runs two flavours: Primary Analytics (free, 4–7 days, directional ±30% estimate) for fast go/no-go decisions, and Comprehensive Analytics (2–4 weeks, paid, wireframes + user stories + ±15% estimate) for locking the scope before a fixed-price contract. Our scoping process page has the deeper walk-through.
Deliverables you should insist on: a one-page product summary, a must/should/nice priority grid, a top-5 risk log, a clickable prototype if you need investor materials, and a directional estimate with a stated band. Anything vaguer is storytelling, not discovery.
Phase 2 — Planning and team formation: the right builders, not just builders
Once scope is locked, we staff the team. For a typical consumer SaaS MVP that is 1 PM, 1 BA, 2–3 full-stack developers, 1 QA, 1 designer (part-time), and 1 DevOps (fractional). Boston Consulting Group found teams with above-average diversity earn 19 percentage points more from innovation than homogeneous teams — we staff accordingly.
The six seats every project needs
1. Project Manager (PM). Owns timelines, stand-ups, stakeholder comms. Acts as your single point of contact. Learn more about what a good PM actually does in our guide to what a technical project manager does.
2. Business Analyst (BA). Carries requirements through from discovery. Makes sure the story gets built, not the dream.
3. Developers. 2–5 people depending on scope. Senior-plus-mid pairs beat full-senior for cost efficiency and mentorship.
4. Quality Assurance (QA). Test planning, manual passes, and automation. We involve QA from sprint one, not as a final phase.
5. Designer. UX/UI for new flows, tweaks for existing ones. Often fractional after the first two months.
6. DevOps. CI/CD, infrastructure, monitoring. Part-time for MVPs; dedicated for scale.
Reach for a dedicated team when: the product horizon is >6 months and you need senior people to carry domain context. Staff augmentation fits better for a short gap or a specialist role — more detail on our dedicated team service page.
Phase 3 — Design: why UX comes before architecture
Design-led projects ship 85% faster and cost 75% less than design-afterthought projects (Forrester). The reason is simple: designing the interaction exposes data and flow decisions that quietly reshape the architecture. Build architecture first, and every design surprise turns into rework.
Our design pass produces wireframes first (20–40 screens depending on scope), then a clickable prototype, then the visual design system in Figma. Accessibility (WCAG 2.2) gets checked at the prototype stage, not after launch. Designers sit in sprint planning the whole way through.
Phase 4 — Architecture: the decision that compounds for years
The stack and architecture choices made in week 3 shape your cost structure for years. Picking the wrong database or the wrong deployment model can add six figures of unnecessary spend and months of rework before you spot it.
Stack choice. We bias toward boring-but-proven: TypeScript + React / React Native on the client, Node.js or Python on the server, PostgreSQL for the primary database, Redis for cache, S3-compatible object storage. For AI features, a FastAPI service in front of open-source models (Llama, Whisper) or a thin wrapper over OpenAI/Anthropic.
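To make the "thin wrapper" pattern concrete, here is a minimal sketch assuming the official `openai` npm package — the function, model name, and prompt are illustrative stand-ins, not our production code:

```typescript
// Minimal sketch of a thin AI wrapper service, assuming the official
// `openai` npm package. Function, model, and prompt are illustrative.
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// One narrow function per product feature keeps the vendor swappable:
// moving to Anthropic or a self-hosted Llama endpoint touches only this
// module, never the callers.
export async function summarizeLesson(transcript: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder — pin whichever model you have validated
    messages: [
      { role: "system", content: "Summarize this lesson in five bullet points." },
      { role: "user", content: transcript },
    ],
  });
  return response.choices[0]?.message?.content ?? "";
}
```

The point of the wrapper is the seam: the rest of the codebase calls `summarizeLesson`, not a vendor SDK.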
Infrastructure. Docker containers orchestrated with Kubernetes for stateful platforms; serverless (AWS Lambda, Google Cloud Run) for bursty workloads. Most MVPs start on a single managed host (Hetzner, DigitalOcean) and migrate to multi-region as traffic grows.
Non-functional requirements. Decide performance budgets, availability targets (99.5% or 99.9%), data residency, and compliance class (GDPR, HIPAA, PCI, SOC 2). Retrofitting these onto a running system is painful — the architecture phase is where they get baked in cheaply.
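A quick worked example of what those availability targets buy you — pure arithmetic, nothing vendor-specific:

```typescript
// Downtime budget per 30-day month implied by an availability target.
function monthlyDowntimeMinutes(availabilityPct: number): number {
  const minutesPerMonth = 30 * 24 * 60; // 43,200 minutes
  return minutesPerMonth * (1 - availabilityPct / 100);
}

console.log(monthlyDowntimeMinutes(99.5)); // 216 min ≈ 3.6 hours of allowed downtime
console.log(monthlyDowntimeMinutes(99.9)); // 43.2 min — a much stricter ops bar
```

The jump from 99.5% to 99.9% sounds small but cuts the monthly downtime budget by 5×, which is exactly why the target belongs in the architecture phase rather than a post-launch scramble.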
Phase 5 — Development in sprints: the rhythm that ships products
Development is not a monolith; it is a rhythm of two-week sprints — the industry's most common cadence, used by 59% of teams per Atlassian. Each sprint starts with planning, ends with a demo, and keeps a fixed backlog in between so the team can actually finish what it commits to.
A two-week sprint in our shop
Day 1 — Planning. Backlog refinement, story sizing, capacity check. Team commits to the sprint scope.
Days 2–9 — Build. Daily 15-minute stand-up, feature branches, PRs reviewed by a peer and usually by an AI reviewer, automated tests in CI, QA exploring as features land.
Day 10 — Demo + retro. Built features demoed to the client in a 30–45 minute call. Retrospective focuses on what to change next sprint. Short, specific, and blameless.
This cadence is not ritual for the sake of ritual. It creates the predictable delivery rhythm that DORA metrics (lead time, deploy frequency, change failure rate, time-to-restore) actually measure.
Comparing agencies or team structures?
We will look at your draft plan on a 30-minute call and flag the staffing, sprint cadence, or tooling decisions most likely to burn budget.
Agile, Waterfall, and the hybrid truth
In 2026, Agile dominates: 75% of US teams, 70%+ globally. 87% of Agile teams use Scrum; 56% use Kanban for ops/support; 31.5% run an explicit hybrid. Agile projects hit 70% success rates vs Waterfall’s 50% — not because Agile is magical, but because it forces conversation and feedback.
Pure Scrum. Fixed 2-week sprints, daily stand-ups, demo/retro cadence. Best for product teams building new features against a roadmap.
Kanban. No fixed iterations; work flows through a board with WIP limits. Best for support queues, incident response, and small focused teams.
Shape-Up. 6-week cycles with appetite-driven scope. Niche but growing in startups where leadership wants fewer status checks.
When Waterfall still wins. Regulated environments (healthcare device approval, certified financial software), hardware integrations, fixed-price contracts with compliance sign-offs. Even then, we run Agile inside the Waterfall phases — it is cheaper than it looks.
Phase 6 — Quality assurance: continuous, not bolted-on
QA has moved from a gate at the end of the pipeline to a continuous discipline. Modern CI runs unit, integration, and end-to-end tests on every pull request; feature branches get previewed in ephemeral environments; security scans (SAST, DAST) and accessibility checks run in the same pipeline.
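For flavour, here is what one of those per-PR unit tests looks like — a minimal sketch assuming Vitest, with `proratedRefund` as a hypothetical stand-in for real billing logic:

```typescript
// Minimal per-PR unit test sketch, assuming Vitest.
// `proratedRefund` is a hypothetical stand-in for real business logic.
import { describe, it, expect } from "vitest";

function proratedRefund(pricePaid: number, daysUsed: number, planDays: number): number {
  if (daysUsed >= planDays) return 0;
  // Refund the unused share of the plan, rounded to cents.
  return Math.round(pricePaid * (1 - daysUsed / planDays) * 100) / 100;
}

describe("proratedRefund", () => {
  it("refunds the unused share of the plan", () => {
    expect(proratedRefund(30, 15, 30)).toBe(15);
  });
  it("refunds nothing once the plan is fully used", () => {
    expect(proratedRefund(30, 30, 30)).toBe(0);
  });
});
```

CI runs hundreds of these in seconds on every pull request; the value is not any single test but the rule that nothing merges without the whole suite passing.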
Our targets: 70%+ code coverage on critical paths, <5% change-failure rate, <1 hour mean time to restore after production incidents. These line up with DORA’s “elite performer” bands and are achievable with disciplined sprints rather than heroics.
If you want the deeper version of this topic, our software testing guide breaks down the test pyramid and the automation ROI.
Phase 7 — Release and support: ship early, ship often
The DevOps market sits at $14.95B in 2025, growing 25.6% to $18.77B in 2026 (Microsoft / DevOps Market Report). The reason: release cadence is now a competitive advantage. Elite teams deploy multiple times per day; top performers like Etsy run 50+ deploys daily. A healthy mid-size product team targets at least one deploy per week in sprint mode.
Deployment patterns we use
Blue-green. Two identical environments; cut traffic from blue to green instantly; roll back in seconds. Used for our zero-downtime releases.
Canary / progressive. Release to 5–10% of traffic, watch the error rate for 5–30 minutes, expand in stages. Standard for high-traffic consumer products.
Feature flags. Deploy code dark, enable per user segment. Decouples “ship the binary” from “launch the feature” — essential for modern release cycles. LaunchDarkly, Unleash, and ConfigCat are the common managed options; a minimal sketch of the pattern follows this list.
Observability. Metrics, logs, traces, and error-tracking (Sentry, Datadog, Grafana). Without these, production failures become silent.
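Here is that minimal sketch of the feature-flag pattern with error capture wired in. It uses the real `@sentry/node` SDK; the in-memory `flagStore` and the billing functions are hypothetical stand-ins for a managed flag service and actual product code:

```typescript
// Sketch: feature-flag gating plus error reporting. `flagStore` stands in
// for LaunchDarkly/Unleash/ConfigCat; the billing functions are placeholders.
import * as Sentry from "@sentry/node";

Sentry.init({ dsn: process.env.SENTRY_DSN });

// Hypothetical per-tenant flag store; a managed service would key flags
// by user segment instead of a hardcoded map.
const flagStore = new Map<string, Set<string>>([
  ["new-billing-flow", new Set(["school-42", "school-77"])],
]);

function isEnabled(flag: string, tenantId: string): boolean {
  return flagStore.get(flag)?.has(tenantId) ?? false;
}

async function newBillingPage(t: string): Promise<string> { return `new:${t}`; }
async function legacyBillingPage(t: string): Promise<string> { return `legacy:${t}`; }

export async function renderBilling(tenantId: string): Promise<string> {
  try {
    // Both code paths are deployed dark; the flag decides the launch.
    return isEnabled("new-billing-flow", tenantId)
      ? await newBillingPage(tenantId)
      : await legacyBillingPage(tenantId);
  } catch (err) {
    Sentry.captureException(err); // observability: no silent failures
    return legacyBillingPage(tenantId); // fall back to the known-good path
  }
}
```

Turning the flag off for a misbehaving tenant is a config change, not a redeploy — that is the whole point of decoupling ship from launch.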
AI-accelerated engineering — the 2026 baseline
The biggest change since 2022 is not the methodology; it is that engineers now work alongside AI coding assistants, AI code reviewers, and spec-driven agents. McKinsey/GitHub research shows 55% faster task completion and 30% shorter time-to-market in teams with good AI adoption. DORA 2025 confirms the speedup and adds a caveat: AI raises both deploy frequency and change-failure rate if QA and observability do not scale with it.
How we use it. AI pair-programmer inside the IDE for boilerplate and refactors; spec-to-code agents for large rewrites; AI reviewer on every PR catching test gaps; automated test generation from specifications. Outcomes in our shop: 20–35% shorter delivery compared to 2022 timelines, and tighter feedback loops on every pull request.
What it is not. Not autonomous shipping. Not a replacement for senior review. Not a license to drop design or QA phases. Teams that went all-in on “let the AI decide” have higher rework rates per Thoughtworks’ Radar v34 (April 2026). We stay human-in-the-loop on every decision that touches production.
Mini case — BrainCert from MVP to $10M ARR
Situation. BrainCert started as a lean MVP virtual-classroom app. The founder needed multi-tenant SaaS, 100-participant WebRTC classrooms, LTI integrations, and a self-serve billing funnel — none of which the MVP had.
12-month plan. We ran a Comprehensive Analytics phase (3 weeks), staffed a dedicated team of seven, and moved to 2-week sprints with demo days on Fridays. The architecture was redone to support multi-tenant isolation with row-level security; WebRTC replaced a legacy SFU; CI/CD went from weekly to daily with blue-green deploys; and a feature-flag system let the sales team roll out features to specific schools without an engineering ticket.
Outcome. BrainCert crossed $10M ARR. The process did not look exotic — it looked like the seven phases above, done without skipping any of them. Our portfolio has two hundred more short case studies that follow the same pattern. Want a similar assessment? Book a 30-minute scoping call.
Your role as a non-technical founder — five concrete habits
1. Attend sprint demos. Every two weeks, 30–45 minutes. Bring three questions, one piece of feedback, and one business update. Miss demos and you lose the feedback loop that makes Agile work.
2. Protect the backlog. Every “oh and also” gets sized against the priority grid from discovery. If it beats something already committed, something else comes out. If it doesn’t, it goes to the backlog and waits for a later sprint.
3. Read the DORA metrics monthly. Lead time, deploy frequency, change-failure rate, time-to-restore. You do not need to run them, just understand which direction each is moving.
4. Review the risk log quarterly. Every project has a live top-5 risk list. If it never changes, the PM is not doing the work; if it changes too often, the scope is unstable.
5. Stay close to users. Run user interviews every six weeks. The engineering team can only build what you tell them to build; without fresh user signal they build last month’s plan.
Five pitfalls that wreck software projects
1. “Let’s just start coding.” The single biggest predictor of failure. A 7-day discovery pass saves months. Standish CHAOS has made this point for three decades and every year the data repeats.
2. Sales-driven estimates. When the salesperson commits to a timeline before engineers sign off, the project is already late. Always require the estimate to be authored by the team that will build.
3. QA at the end. Testing added after the fact finds defects 10× later, when they cost 10× more to fix. Wire QA into sprint one, even if the product has no users yet.
4. No observability in production. If you cannot see errors, latency, and usage, you are running blind. Sentry and a basic dashboard cost less than one week of engineer time and pay back monthly.
5. Scope creep without a grid. Changes are fine; changes without a priority-grid trade-off are poison. Every new request should force something out of the sprint, or wait for the next one. See our cost-cutting notes for the patterns that actually save money.
Need a sanity check on your current team’s process?
We audit engineering processes as a friendly second opinion — no contract required. If there is a gap, we tell you. If there isn’t, we tell you that too.
KPIs that actually separate elite teams from average ones
Quality KPIs (DORA). Lead time from commit to production under 24 hours; deploy frequency at least weekly (ideally daily); change-failure rate below 5%; mean time to restore below one hour. Elite teams hit all four.
Business KPIs. Sprint commitment vs completion above 85%; estimate accuracy within 15% after discovery; backlog health (no stale >90-day items) above 90%; cost per feature trending flat or down quarter over quarter.
Reliability KPIs. p95 API latency under 500ms; crash-free sessions above 99.5%; database CPU below 70% at peak; error budget honoured (SLO breaches tracked). These numbers age well — teams that hit them in year one rarely slip in year three.
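To show these numbers are arithmetic rather than magic, here is a hedged sketch of how two of them fall out of raw events; the record shape is hypothetical, and real inputs would come from your deploy pipeline and APM tool:

```typescript
// Sketch: change-failure rate and p95 latency from raw events.
// Record shapes are hypothetical; real data comes from CI/CD and APM.
interface Deploy { id: string; causedIncident: boolean; }

function changeFailureRatePct(deploys: Deploy[]): number {
  if (deploys.length === 0) return 0;
  const failures = deploys.filter((d) => d.causedIncident).length;
  return (failures / deploys.length) * 100; // elite bar from above: < 5%
}

function p95Ms(latenciesMs: number[]): number {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  if (sorted.length === 0) return 0;
  // Nearest-rank percentile: the sample below which 95% of requests fall.
  const rank = Math.ceil(0.95 * sorted.length) - 1;
  return sorted[Math.max(0, rank)]; // target from above: < 500 ms
}
```

A dashboard that plots these two lines week over week tells you more about team health than any status report.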
When not to build software from scratch
Not every problem is a software project. If the workflow fits a template in Airtable, Notion, or a no-code platform, use that. If the integration lives in an existing SaaS product (HubSpot, Salesforce, Shopify), configure rather than build.
Skip bespoke development when: the audience is under a few hundred users and monetisation is unproven, the use case is solved by three off-the-shelf tools stitched together, or you have no budget for the first twelve months of support. Software is an ongoing cost centre — build only when the business case survives a skeptical quarterly review.
FAQ
How long does a typical software project take?
Lean MVP: 3–5 months. Cross-platform MVP with subscription billing and admin: 5–8 months. Scale-ready product: 9–14 months. Anything shorter is usually a prototype; anything longer is scope creep. AI-accelerated engineering has compressed the ranges 20–35% in the last three years.
Should I go fixed-price or time-and-materials?
Fixed-price works only after Comprehensive Analytics has locked the scope. Before that, you are asking the vendor to price unknowns, which they will pad by 30–50%. Time-and-materials with a sprint cap is usually cheaper and faster for early-stage builds.
Do I need a CTO before I hire an agency?
No. You need a PM who can translate engineering updates into business-speak. A part-time technical advisor (2–4 hours a month) is usually sufficient for the first year. A full-time CTO makes sense once headcount crosses 10 engineers or you take institutional investment.
What’s the difference between an MVP and a prototype?
A prototype tests if a design feels right — it does not have a backend or billing. An MVP is a real product with the minimum feature set you need to get paying users. Prototypes take weeks; MVPs take months.
How do I know if my agency is any good?
Five signals: they run a real discovery phase (not a sales discovery), they demo every sprint, they provide DORA metrics monthly, they have a named QA person on the team, and their risk log changes over time. Missing any two is a yellow flag; missing four means you should interview replacements.
How accurate are software estimates really?
After a quick scoping pass, ±30%. After a full Comprehensive Analytics, ±15%. Without any scoping, estimates overrun 200–300% on average. Our software estimation guide breaks the numbers down further.
Does AI replace the need for senior engineers?
Not yet and not soon. AI assistants raise throughput on boilerplate, refactors, and test generation. They also raise change-failure rates if nobody senior reviews the output. DORA 2025 and Thoughtworks Radar v34 both flag the same risk. Senior engineers remain the bottleneck and the safety net.
What happens after launch?
Support, iteration, and feature work on a smaller team. Expect ongoing spend at 15–25% of the initial build cost per year for maintenance, security patches, and minor features. Our maintenance service page covers the retainer patterns.
What to Read Next
Scoping
Primary Analytics — a 7-Day Scoping Pass
The discovery method that feeds the seven-phase pipeline.
QA
The Importance of Testing in Software Development
The test pyramid, automation ROI, and the elite-team bar.
Estimation
A Founder’s Guide to Software Estimating
How we price scope before writing a single line of code.
Cost
Mobile App Development Costs Guide
Line-item budgets and the levers that actually move them.
Team
What a Technical Project Manager Actually Does
The role that protects scope, schedule, and sanity.
Ready to build software the disciplined way?
Software gets built in seven phases. Skip any of them and the statistics catch up with you: 69% of projects challenged or failed (Standish), 200–300% cost overruns, dead codebases that nobody wants to maintain. Run them in order — discovery first, QA from day one, release early and often, AI-accelerated but human-reviewed — and the numbers flip the other way.
Our job as your engineering partner is to keep that discipline when the pressure is on to cut corners. 625+ shipped products since 2005 have taught us where those corners hide. If you want a team that ships fast and ships well, the 30-minute kickoff call is the fastest way to see whether we fit.
Let’s map the seven phases onto your product
Bring your current plan (or a napkin). In 30 minutes we will come back with the phase-by-phase shape of the work ahead.

