
Key takeaways
• Most products fail at the launch stage, not in the build. Industry research consistently puts new-product failure rates at 70–80% and software-project goal-miss rates near 66%. The usual root cause is not a bug; it is no beta, no rollout plan, no activation funnel.
• Launch is a pipeline, not a date. Alpha → closed beta → open beta → soft launch → phased rollout (1/5/25/100%) → GA. Skipping phases converts marketing wins into support disasters.
• Mobile and web have different mechanics. In 2026, iOS and Google Play offer staged rollouts, enforce tighter review of payments and data handling, and reward ASO metadata that can move store conversion by 20–40%. Web gets feature flags + canary + instant rollback. Plan both tracks separately.
• Day 7 / 30 / 90 metrics make or break the launch narrative. Activation, D1/D7/D30 retention, time-to-first-value, conversion to paid, NPS, CSAT, support-ticket share — these are the numbers investors, boards and partners care about after the launch-day dopamine.
• Budget conservatively and keep a war room. Plan 10–15% of total build budget for launch + first 90 days. Keep a live runbook, rollback button, status page, and on-call rotation. The teams that nail launches are the ones that rehearsed failure.
Why Fora Soft wrote this launch playbook
At Fora Soft we have taken video-conferencing, OTT, telemedicine and video surveillance products from first commit to public launch for two decades. This page is the opinionated, 2026-current version of how we do it — the one we wish every founder had before they shipped.
The scale keeps us honest. BrainCert, our WebRTC virtual classroom, serves 100,000+ customers and has accumulated four Brandon Hall awards — not achievable without a disciplined launch process. Worldcast Live streams HD concerts to 10,000+ concurrent viewers at sub-second latency — a capacity you cannot open up on day one without phased rollout. MyOnCallDoc and CirrusMED had to pass HIPAA diligence before a single beta user touched them. Each of those launches informs what follows.
This playbook assumes you have a working build. If you are still wrestling with requirements or the build itself, start there first. And if QA is still an open question, read the QA playbook before you think about dates.
About to launch and not sure the plan will survive first contact with users?
30 minutes with a senior Fora Soft engineer — we will pressure-test your launch plan and flag where the P0 is most likely to come from.
Why most software launches fail (and a few succeed)
The numbers are stark and stable across two decades: roughly 70–80% of new products fail within their first two years, and around 66% of software projects miss at least one of their headline goals. The failure modes cluster into six patterns we see again and again:
- No user validation before the launch date. The team went from internal build to press release without a real beta cohort telling them the product works.
- Messaging that does not match the product. The landing page promises something the app does not deliver; activation collapses in the first 72 hours.
- Binary rollout. 0% users, then 100%, no staged rollout. When the launch spike hits, every bug hits every user simultaneously.
- No rollback plan. Finding a P0 in the first hour is normal; having no way to back out of it is not.
- Support not staffed for the spike. A 5× ticket volume in week 1 is standard; a team that cannot handle it burns trust fast.
- No activation funnel. Nobody instrumented the first-session flow; you cannot tell whether the people who signed up actually got value.
The products that do not fail share the opposite: a beta with real users, an activation event defined before build, a staged rollout, a rehearsed rollback, support capacity planned for 5× steady-state, and instrumentation running on day -1.
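What "instrumentation running on day -1" looks like can be small. Below is a minimal sketch in TypeScript, assuming a generic analytics client in the style of Segment or PostHog; the event names and the `AnalyticsClient` interface are illustrative, not a prescribed taxonomy.

```typescript
// Activation-funnel instrumentation (sketch). Event names and the
// client interface are illustrative placeholders.
type FunnelEvent = "signed_up" | "onboarding_completed" | "first_value_reached";

interface AnalyticsClient {
  track(userId: string, event: FunnelEvent, props?: Record<string, unknown>): void;
}

// One event per funnel stage, fired from the product code itself.
export function recordStage(a: AnalyticsClient, userId: string, stage: FunnelEvent): void {
  a.track(userId, stage, { at: new Date().toISOString() });
}

// Signup → activation CVR is then a ratio of two distinct-user counts.
export function activationRate(events: { userId: string; event: FunnelEvent }[]): number {
  const signed = new Set(events.filter(e => e.event === "signed_up").map(e => e.userId));
  const active = new Set(events.filter(e => e.event === "first_value_reached").map(e => e.userId));
  return signed.size === 0 ? 0 : active.size / signed.size;
}
```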
The full launch pipeline — from alpha to GA
Treat launch as a staged pipeline, not a single date. Each stage has a specific purpose, a cohort size, a duration and an exit criterion. Skipping one is what breaks most launches.
| Stage | Purpose | Cohort size | Duration | Exit criterion |
|---|---|---|---|---|
| Alpha (internal) | Prove core flows work on real data | Team + 10–30 friendlies | 2–4 weeks | Zero P0 in critical paths |
| Closed beta | Real users on controlled cohort; tune activation | 50–300 invited | 3–6 weeks | Activation ≥ 30%, NPS ≥ 20 |
| Open beta | Stress the system, capture edge cases | 1,000–10,000 public | 3–8 weeks | P0-free 14 days, SLO green |
| Soft launch | Region/geo-limited GA to validate scaling | One country or vertical | 2–4 weeks | Revenue target met, CSAT ≥ 4 |
| Phased rollout | Progressive delivery to full audience | 1% → 5% → 25% → 100% | 1–3 weeks | SLOs green at each step |
| GA + post-launch | Public launch, PR, GTM push | Full audience | Ongoing | 90-day metrics vs plan |
Figure 1. Launch pipeline stages. Durations shorten for B2B internal tools, lengthen for regulated or consumer-scale products.
Reach for a compressed pipeline when: you are shipping an internal tool to a known audience or a non-regulated B2B product where an enterprise customer has already committed. Skip no stages on consumer, regulated or high-concurrency products.
App Store & Google Play launch specifics in 2026
Mobile launch is a different discipline from web. Apple and Google have both tightened review and payment enforcement since 2024; ignore the details and you get rejected in hours, not days.
App Store (iOS)
Average review time in 2026 sits around 24–48 hours for updates and 2–5 days for first submission; expedited reviews exist but are rationed. Plan your launch date with a 7-day buffer for back-and-forth rejections. Staged release (phased release for automatic updates) now rolls 1%→2%→5%→10%→20%→50%→100% across 7 days; pause when you see crash-free users drop. TestFlight still caps at 10,000 external testers — use it for open beta, not closed alpha.
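Pausing a phased release is a manual decision in App Store Connect, but the watch loop that prompts it is easy to automate. A hedged sketch follows; `fetchCrashFreeRate` and `pausePhasedRelease` are hypothetical placeholders for your crash-reporting query (Crashlytics, Sentry) and your App Store Connect automation, and the 99.5% floor matches the threshold in the metrics table later in this playbook.

```typescript
// Watch-and-pause loop for a phased iOS release (sketch).
// Both injected functions are hypothetical placeholders for real
// crash-reporting and App Store Connect API integrations.
const CRASH_FREE_FLOOR = 0.995; // pause below 99.5% crash-free users

export async function watchPhasedRelease(
  fetchCrashFreeRate: () => Promise<number>,
  pausePhasedRelease: () => Promise<void>,
): Promise<void> {
  const rate = await fetchCrashFreeRate();
  if (rate < CRASH_FREE_FLOOR) {
    await pausePhasedRelease(); // stops the 1→2→5→…→100% progression
    // page the on-call engineer here
  }
}

// Run on a schedule (e.g. every 15 minutes) for the full 7-day rollout.
```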
Payment enforcement is the trap. Apple’s 2026 guidelines continue the DMA carve-outs in the EU while keeping strict rules elsewhere. Anything that charges users for digital content accessible inside the app must go through In-App Purchase or qualify for a narrow reader-app exemption. Mis-classify and you get rejected within hours.
Google Play
Play Console reviews are typically faster (hours to 3 days) but policy enforcement in 2025–2026 got stricter on permissions, background activity, and the new Play Integrity API. Staged rollout through Play Console goes 0.5%→2%→5%→10%→20%→50%→100% and can be halted instantly. New apps require a closed testing cohort of at least 12 testers for 14 consecutive days before production, a change that has surprised more than one founder in the last year.
ASO basics that move conversion 20–40%
- Icon and first screenshot. The CVR-dominating assets. A/B test at least three variants of each in the relevant store’s experiment tool.
- Keyword stack. Apple uses the keyword field + title + subtitle; Google uses title + short description + long description. Seed with 40–60 candidates; trim to the top-performing 20 post-launch.
- Short video preview. Raises conversion 15–25% on consumer apps; negligible on B2B tools.
- Review velocity. Trigger the in-app review prompt at first-success moments, not on app open (a timing sketch follows this list). Target ≥ 4.5 stars within 30 days of launch.
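The timing logic for that review prompt is worth writing down rather than improvising. A sketch, with `requestReview` standing in for the platform-specific call (SKStoreReviewController on iOS, the Play In-App Review API on Android); the thresholds are illustrative.

```typescript
// Gate the in-app review prompt on first-success moments, never app open.
// Thresholds are illustrative starting points, not prescriptions.
interface ReviewState {
  firstSuccessCount: number;
  lastPromptedAt?: Date;
}

export function shouldPromptForReview(s: ReviewState): boolean {
  const enoughWins = s.firstSuccessCount >= 2; // let value land at least twice
  const notRecentlyAsked =
    !s.lastPromptedAt ||
    Date.now() - s.lastPromptedAt.getTime() > 90 * 24 * 3600 * 1000; // ~90 days
  return enoughWins && notRecentlyAsked;
}

// Usage at a first-success moment:
// if (shouldPromptForReview(state)) requestReview(); // platform-specific call
```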
Mobile UX craft is downstream of ASO — if you want the deep-dive, see our mobile app UX best practices.
Reach for staged store rollout when: your app has >10,000 daily active users or any critical business dependency. For small betas or early-stage consumer apps, instant 100% rollout is acceptable if your crash-reporting is well instrumented.
Web launch — feature flags, canary and blue/green
Web launches give you a superpower that mobile launches do not: you control the rollout in real time. Three techniques, usually layered:
1. Feature flags. Every non-trivial change ships behind a flag (LaunchDarkly, Split, Unleash, Flagsmith, or a home-grown table). Flags let you dark-launch code before user exposure, A/B test variants, and kill broken features without redeploying; a minimal home-grown sketch follows this list.
2. Canary deployments. Route 1% of traffic to the new version first, with SLO monitors (error rate, p95 latency) triggering an automatic rollback if thresholds are breached. Progress to 5%, 25%, 100% over hours to days based on signal.
3. Blue/green. Two identical production environments; traffic switches from blue to green atomically. Best for legacy stateful services where canary is hard. Rollback is swapping back to blue.
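To show the scale involved, here is how small a home-grown flag can be: one sketch in TypeScript against Postgres via the pg client, with the table schema and names illustrative. The deterministic hash bucketing is what makes a percentage rollout stable per user rather than random per request.

```typescript
// Minimal home-grown feature flag: one Postgres table, one lookup.
// Table and column names are illustrative.
//
//   CREATE TABLE feature_flags (
//     name        text PRIMARY KEY,
//     enabled     boolean NOT NULL DEFAULT false,
//     rollout_pct integer NOT NULL DEFAULT 0  -- 0–100
//   );
import { Pool } from "pg";
import { createHash } from "crypto";

const pool = new Pool(); // reads PG* env vars

// Deterministic bucketing: the same user always lands in the same
// bucket, so a 5% rollout exposes a stable 5% cohort.
function bucket(flag: string, userId: string): number {
  const h = createHash("sha256").update(`${flag}:${userId}`).digest();
  return h.readUInt32BE(0) % 100;
}

export async function isEnabled(flag: string, userId: string): Promise<boolean> {
  const { rows } = await pool.query(
    "SELECT enabled, rollout_pct FROM feature_flags WHERE name = $1",
    [flag],
  );
  if (rows.length === 0) return false; // unknown flag: fail closed
  const { enabled, rollout_pct } = rows[0];
  return enabled && bucket(flag, userId) < rollout_pct;
}
```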
The combo we run by default: feature flags in code + canary for deploys + status page + SLO auto-rollback. Time-to-rollback < 5 minutes is the capability that lets you launch aggressively without taking aggressive risk.
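The auto-rollback half of that combo reduces to a loop: sample SLO metrics, compare against pre-agreed thresholds, roll back on breach. A sketch under stated assumptions; the metric fetchers and the rollback call are placeholders for your APM queries (Prometheus, Datadog) and deploy tooling.

```typescript
// SLO guard for one canary step (sketch). All three injected functions
// are placeholders for real APM and deploy-tooling integrations.
interface SloThresholds {
  maxErrorRate: number;  // e.g. 0.02 → the "error rate > 2%" criterion
  maxP95Ms: number;      // e.g. 800
  windowMinutes: number; // e.g. 5 → "... for 5 min"
}

export async function guardCanary(
  t: SloThresholds,
  fetchErrorRate: () => Promise<number>,
  fetchP95LatencyMs: () => Promise<number>,
  rollback: () => Promise<void>,
): Promise<"healthy" | "rolled_back"> {
  const deadline = Date.now() + t.windowMinutes * 60_000;
  while (Date.now() < deadline) {
    const [errs, p95] = await Promise.all([fetchErrorRate(), fetchP95LatencyMs()]);
    if (errs > t.maxErrorRate || p95 > t.maxP95Ms) {
      await rollback(); // route 100% of traffic back to the old version
      return "rolled_back";
    }
    await new Promise(r => setTimeout(r, 30_000)); // sample every 30s
  }
  return "healthy"; // safe to progress 1% → 5% → 25% → 100%
}
```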
Reach for blue/green over canary when: you are launching a backend with complex database migrations, shared caches, or stateful services where partial traffic is harder to reason about than a full switch.
Pre-launch marketing and GTM essentials
A technically perfect launch with no audience is still a failed launch. The minimum GTM stack:
- Waitlist, 60–90 days before launch. Typed.so, Tally, or a custom form. Promise something specific (early access + a concrete perk). 5,000 real emails beats 50,000 scraped.
- Positioning + messaging doc. One page. What is it, who is it for, what is the before/after. Everyone on the team memorises it before launch day.
- Landing page with one CTA. Not three. One. Track the conversion event obsessively.
- Content pre-launch. 4–6 pieces published 2–6 weeks before launch: use cases, comparisons, founder story, behind-the-scenes. They seed SEO and give journalists something to link to.
- Product Hunt (if relevant). Consumer / prosumer tools benefit; deep B2B rarely does. Schedule for Tuesday–Thursday, line up hunters and commenters, prepare 1–2 FAQ responses.
- Launch-day comms plan. Pre-written announcements (X/Twitter, LinkedIn, newsletter, Slack communities), press embargo list, sample media pitches, two-paragraph customer quotes.
- Reference customers. 3–5 real users quoted with role + measurable outcome. Logos without quotes convert worse than named quotes without logos.
Need a launch-day runbook mapped to your actual stack?
We will walk your infrastructure, cohort plan and GTM — and hand you a concrete runbook you can rehearse before Day 1.
Launch-day operations — the war room
Treat launch day like an incident that has not happened yet. The non-negotiables:
- War room. Physical or Slack + Zoom; engineering, support, marketing, founder, on-call ops. One channel is the source of truth; others are muted.
- Runbook. A checklist that covers T-24h, T-2h, T-0, T+1h, T+6h, T+24h. Who deploys, who announces, who watches which dashboard, who handles press, who answers the first 50 tickets.
- Rollback criteria pre-agreed. “Error rate > 2% for 5 min → auto-rollback.” “Signup CVR < 10% of baseline after 10k visits → review landing page.” Written down before launch day (see the config sketch after this list).
- Status page. Statuspage.io, Instatus, or self-hosted Cachet. Pre-draft “investigating,” “identified,” “monitoring,” “resolved” templates.
- Support scaled 3–5× normal. Pre-warm FAQ docs, canned responses, a pinned Slack channel for devs to triage ticket-generated issues fast.
- SLO dashboards on big screens. Golden signals (latency, error rate, saturation, traffic). One dashboard per critical service.
- Do not cut a new release on launch day. Code-freeze at least 24h before. Hotfix only for launch-blocking bugs, with a buddy review.
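Those rollback criteria are most useful when they are machine-readable as well as written down. One way to encode the two examples from the list above; names and values are illustrative and should be tuned to your own baselines.

```typescript
// Pre-agreed rollback criteria as data, so the war room argues about
// numbers before launch day, not during it. Values mirror the examples
// in the list above; tune them to your own baselines.
const rollbackCriteria = [
  {
    metric: "http_error_rate",
    threshold: 0.02,   // > 2%
    windowMinutes: 5,  // sustained for 5 min
    action: "auto-rollback",
  },
  {
    metric: "signup_cvr_vs_baseline",
    threshold: 0.10,   // < 10% of baseline
    minSample: 10_000, // after 10k visits
    action: "review landing page",
  },
] as const;
```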
What to measure day 7, day 30, day 90
Launch-day traffic numbers are a vanity metric. The real health check happens over 90 days. Here is the baseline we report back to clients:
| Window | Metric | Healthy band (consumer SaaS) | Why it matters |
|---|---|---|---|
| Day 1–7 | Signup → activation CVR | ≥ 30% | First-value delivery check |
| Day 1–7 | Crash-free users (mobile) | ≥ 99.5% | Stores downrank apps below this |
| Day 1–7 | Support-ticket quality share | ≤ 20% of tickets are real bugs | Signals build quality |
| Day 30 | D7 retention | 25–40% | Product-market-fit leading indicator |
| Day 30 | NPS | ≥ 20 (great ≥ 40) | Viral loop potential |
| Day 30 | Conversion free → paid (if applicable) | 3–8% for self-serve | Monetisation health |
| Day 90 | D30 retention | 15–25% | Cohort stickiness |
| Day 90 | Gross revenue retention (B2B) | ≥ 90% | Expansion readiness |
| Day 90 | CAC payback | < 12 months (SaaS) | Capital efficiency |
Figure 2. Post-launch metrics and healthy bands for consumer/prosumer SaaS. B2B enterprise numbers shift; the structure of the dashboard does not.
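One definition worth pinning before anyone reports these numbers: D7 retention here means the share of a signup cohort active on day 7 after signup. Some teams instead count activity anywhere in days 1–7, which inflates the number; pick one definition and keep it stable across reports. A sketch of the stricter computation:

```typescript
// D7 retention = share of a signup cohort active on day 7 after signup.
interface User { id: string; signedUpAt: Date }
interface ActivityEvent { userId: string; at: Date }

const DAY_MS = 24 * 3600 * 1000;

export function d7Retention(cohort: User[], events: ActivityEvent[]): number {
  const retained = cohort.filter(u => {
    const day7Start = u.signedUpAt.getTime() + 7 * DAY_MS;
    return events.some(
      e => e.userId === u.id &&
           e.at.getTime() >= day7Start &&
           e.at.getTime() < day7Start + DAY_MS,
    );
  });
  return cohort.length === 0 ? 0 : retained.length / cohort.length;
}
```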
Compliance and store-review gotchas
1. HIPAA. US healthcare products need BAAs with every subprocessor (hosting, analytics, payment), PHI encrypted in transit and at rest, audit logs, and documented test evidence. Launch blocked if any BAA is missing. Our telemedicine launches (MyOnCallDoc, CirrusMED) include a pre-launch compliance gate that reviews all of these 2 weeks before the target date.
2. GDPR / UK GDPR. Legal basis per data-processing activity, a DPA with every processor, cookie consent (real consent, not “by continuing”), data-subject-request flow. Fines are real — plan like they are.
3. Apple in-app purchase. If users can access digital content purchased elsewhere, your app must either: (a) not mention the purchase path inside the app, (b) use IAP, or (c) qualify as a reader app under narrow criteria. The 2025–2026 EU DMA carve-outs add a link-out option but only in the EU.
4. Google Play Data Safety form. Must declare every data collection. The 2024–2025 Play policy sweep rejected thousands of apps for inaccurate declarations. Write this with the engineering team, not the legal team alone.
5. Accessibility. The European Accessibility Act is fully enforceable as of 2025; WCAG 2.2 AA is the practical target. Ship accessibility checks in CI (axe-core / Pa11y) before launch, not after the first complaint (a minimal CI gate follows this list).
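A minimal version of that CI gate, using @axe-core/playwright; the staging URL is a placeholder, and the tag list targets WCAG 2.2 AA.

```typescript
// CI accessibility gate: fail the build on serious axe violations.
// The URL is a placeholder for your own staging environment.
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

test("landing page has no serious accessibility violations", async ({ page }) => {
  await page.goto("https://staging.example.com/");
  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa", "wcag22aa"]) // target WCAG 2.2 AA
    .analyze();
  const serious = results.violations.filter(
    v => v.impact === "serious" || v.impact === "critical",
  );
  expect(serious).toEqual([]);
});
```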
Block the launch when: any BAA/DPA is missing, any critical data-processing declaration is inaccurate, or any WCAG AA blocker is known and not fixed. These are not polish — they are operating licences.
Five launch pitfalls we keep watching teams step on
1. Staging confidence bias. “It works in staging” means “it works for 5 QA users on seed data.” Production traffic, real devices, real networks, real third-party outages break it differently. Do a real load test and a chaos drill before GA.
2. Database and third-party connection limits. Postgres caps at ~100 connections by default; Stripe, Twilio and SendGrid all rate-limit. A launch spike bottlenecks on whichever limit is tightest. Raise limits or add pooling (PgBouncer, rate-limited queues) before launch; see the sketch after this list.
3. Support not warned. Marketing ships the launch post; support sees their ticket volume 10× from a blog they did not know about. Brief support 72 hours before every launch.
4. No rollback rehearsal. A rollback button that has never been pressed is a theoretical rollback button. Run one dry-run rollback in staging every week leading up to launch.
5. “We will fix it post-launch.” Items in the pre-launch bug list that get punted rarely get fixed — the team moves to the next thing. Ship a smaller product with a clean bug list or see our piece on what bug cleanup actually costs later.
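For item 2 above, the fix usually lives in application code before it lives in infrastructure. A sketch of both halves, using the pg pool and a p-limit queue; pool size and concurrency numbers are illustrative and should match measured headroom, not defaults.

```typescript
// Cap your own demand below the limits that break first (sketch).
// Numbers are illustrative; size them to measured headroom.
import { Pool } from "pg";
import pLimit from "p-limit";

// Postgres: default max_connections is ~100 server-wide. Keep the app's
// pool well below it (and front many app instances with PgBouncer).
export const db = new Pool({ max: 20, idleTimeoutMillis: 10_000 });

// Third parties: queue calls instead of bursting into their rate limit.
const stripeLimit = pLimit(5); // at most 5 in-flight Stripe calls

export function chargeCustomer(run: () => Promise<void>): Promise<void> {
  return stripeLimit(run); // runs when a slot frees up
}
```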
Launch cost — realistic ranges without the padding
For a mid-size product (MVP shipped, 10–30k first-quarter users targeted), we budget launch + first 90 days at roughly 10–15% of total build cost. Dominant line items:
- Launch engineering (rollout tooling, feature flags, SLO dashboards, load tests, runbook): 3–5 weeks of senior engineering effort.
- Launch QA (regression sweep, device matrix, compliance verification): 2–4 weeks QA effort, heavier for regulated.
- GTM + content (landing, positioning, 4–6 content pieces, launch comms): 3–6 weeks of marketing effort or an external package.
- Support scale-up (FAQ, canned responses, temporary coverage): 1–2 weeks + a war-room headcount.
- Observability and infra (APM plan bump, load-test spend, pre-warmed capacity): a one-time spend plus 10–20% infra bump for 60 days.
Because Fora Soft leans on agent-assisted engineering, our launch engineering and regression bring-up come in faster than a purely manual shop on comparable scope. We still price conservatively; we do not promise numbers we cannot defend on paper.
Mini case: launching a real-time video platform at scale
Situation. A live-streaming platform needed to go from private beta to a public launch targeting 10,000+ concurrent viewers at sub-second latency. No room for a launch-day incident; live concerts do not pause.
12-week plan. Closed beta on two live events (capped at 500 concurrent). Open beta across four events (capped at 2,500). Load tests at 2× target concurrency on dedicated infra. Feature flags on every new pipeline component. SLO auto-rollback tied to buffer ratio and video start time. Status page, war room, and a pre-rehearsed dry-run of a rollback the week before. Similar to the kind of launch we plan on custom software development engagements where scale is the primary risk.
Outcome. Public launch event peaked above 10,000 concurrent viewers with sub-second latency held; zero customer-impacting incidents in the first 7 days; D7 retention for returning viewers above 40%. See Worldcast Live for the shipped product. The pattern generalises: the combination of staged beta, feature flags, SLO-driven rollback and a rehearsed war room removes most of the launch-night variance.
Planning a launch you cannot afford to redo?
We will sketch a 12-week launch pipeline for your product — beta cohorts, rollout tooling, war-room runbook, the whole thing — in a single working session.
A decision framework — right-sizing your launch in five questions
1. What is the blast radius of a bad launch? 100 internal pilot users vs a 50,000-waitlist consumer app vs a hospital ward. Answer sets how many stages of the pipeline you can skip (usually: none).
2. Are you regulated? HIPAA, GDPR, PCI, MDR — each adds compliance gates, documented test evidence and BAA/DPA chains that must be complete before GA.
3. Mobile, web, or both? Mobile adds store review, phased rollout mechanics, ASO; web gives you live control via flags and canary. Plan the two tracks separately; launch in sequence if dependencies exist.
4. How fast and well-rehearsed is your rollback? Under 5 minutes and practiced → you can launch aggressively. Over 30 minutes → add another week of beta and gate the launch.
5. What are your day-90 success metrics? If you cannot write them down, you are not ready to launch. Define activation, retention and revenue targets in advance and instrument them before launch.
Launch KPIs that survive executive review
1. Quality KPIs. Crash-free users ≥ 99.5% (mobile); request error rate < 0.5% (web); SLO breach count in first 30 days ≤ 2; support-ticket quality share ≤ 20%.
2. Business KPIs. Activation rate ≥ 30%; D7 retention 25–40%; D30 retention 15–25%; NPS ≥ 20; free→paid conversion 3–8% self-serve; CAC payback < 12 months.
3. Reliability KPIs. MTTR P0 < 60 min; time-to-rollback < 5 min; change-failure rate < 15%; status-page uptime ≥ 99.9%.
When NOT to launch yet
- Activation < 20% in closed beta. You do not have a product that delivers first value fast enough. Fix that before scaling.
- Crash-free users < 99% on the target device matrix. Store algos will punish you and reviews will sink.
- No written rollback criteria or no rollback rehearsal. Launch risk is asymmetric — a bad Day 1 costs more than a two-week delay.
- Expectations out of sync with the team’s capacity. Read expectations vs reality before you set a date.
- Non-functional requirements unconfirmed. If you cannot name the latency, concurrency and availability targets, you are not ready. See non-functional requirements.
FAQ
How long does a full launch pipeline take?
For a typical consumer or prosumer SaaS: 2–4 weeks alpha, 3–6 weeks closed beta, 3–8 weeks open beta, 2–4 weeks soft launch, 1–3 weeks phased rollout, then GA. Total roughly 11–25 weeks depending on regulatory load and cohort confidence. B2B internal tools run shorter; healthcare and safety-critical run longer.
What percent of total build cost should the launch itself take?
Plan 10–15% of total build budget for launch engineering, QA sweep, GTM content, support scale-up and infra bump across launch + first 90 days. Regulated products can run 15–20% with the extra compliance verification. Under 8% usually means something is being skipped.
How long do Apple and Google app review take in 2026?
App Store Connect averages 24–48 hours for updates and 2–5 days for a first submission; plan a 7-day buffer. Google Play Console is typically faster — hours to 3 days — but enforces the 14-day / 12-tester closed-testing rule before a brand-new app can reach production. Both stores support staged rollouts that you can pause on crash-rate signals.
Is Product Hunt still worth it in 2026?
For consumer and prosumer tools, yes — it still generates a day-one spike and qualified signups. For deep B2B (compliance, vertical SaaS, enterprise), the effort rarely pays back. Schedule for Tuesday–Thursday, line up your hunters and commenters a week in advance, and have a response-ready FAQ for the first six hours.
What is a “soft launch” and do I need one?
A soft launch is a GA restricted to one country, vertical or partner, with no PR push. It lets you validate end-to-end operations (payments, support, scale, onboarding) at real but bounded volume before the global launch. You need one whenever the full-audience launch depends on ops you have not exercised — which is most consumer products.
Should I use feature flags on a first launch?
Yes, even if only home-grown. Flags let you dark-launch, kill a broken feature without redeploying, and A/B test variants post-launch. A very small startup can run a flag table in Postgres; paid tools (LaunchDarkly, Split, Unleash, Flagsmith) earn their keep once you have >20 flags and multiple teams.
What is a “good” D7 retention for a new SaaS?
For consumer/prosumer SaaS, 25–40% D7 is healthy; above 40% is a strong PMF signal. B2B SaaS retention is less useful at D7 because usage is often weekly or monthly — track weekly active accounts and feature-adoption depth instead.
What breaks most often on launch day?
In order: database connection pools, third-party rate limits (Stripe, Twilio, Sendgrid), CDN cache configuration, email deliverability (new sending domain hitting spam traps), and mobile-specific crashes on devices the team does not own. A load test at 2× target and a pre-launch email warm-up handle most of them.
What to Read Next
Process
Product development step-by-step
How we go from idea to shipped product at Fora Soft.
QA
Why every software project still needs QA
The business case, the pyramid, the budget — explained.
QA at every stage
QA at every stage of product development
How testing fits into the SDLC, not just the last week.
Mobile UX
Mobile app UX design best practices
The UX patterns that move conversion on store listings and first-session.
Monetisation
How much revenue can your app realistically make?
The monetisation reality check founders ask us about weekly.
Ready to ship your product without launch-day drama?
A great launch is less about heroics and more about discipline. Staged pipeline. Beta cohorts. Written rollback criteria. Feature flags and canary. Compliance gates. Pre-warmed support. Instrumented activation funnel. Five or six KPIs you will actually report on at day 7, 30 and 90.
Most of the 70–80% of products that fail did not fail because the code was wrong. They failed because there was no plan for the first thousand users, and no plan for the first P0. This playbook fixes both.
If you want a launch plan that maps to your exact stack, cohort and compliance shape, we can help — whether that is one review session or running the launch itself with you.
Want a launch your board can trust?
30 minutes with a senior Fora Soft engineer — we will map your launch risks and hand you a concrete runbook for the next 12 weeks.


.avif)

Comments