
Key takeaways
• Hiring discipline is the strongest predictor of project outcome. Industry attrition sits at 13–18% per year and average developer tenure is just 2.3 years — if your vendor cannot keep people, your roadmap will stall.
• A 2% offer rate is the quality floor, not a marketing number. Fora Soft interviews ~500 candidates, screens 100, tests 40, assesses 12, and hires 10 — the same funnel shape you should demand from any shortlisted vendor.
• Probation is where the real filtering happens. An 80-hour paid pet project on the vendor’s dime catches slow, careless, or passive engineers before they ever touch your code.
• Pick the engagement model that matches your leadership bandwidth. Staff augmentation for capacity gaps, dedicated team for evolving roadmaps, project outsourcing for fixed scope — getting this wrong is the #1 reason outsourced projects drift.
• Agent Engineering compresses the cost curve. Our hiring standard combined with AI-assisted delivery means senior-quality output at rates closer to mid-market nearshore, not onshore agency rates.
Why Fora Soft wrote this playbook
A dream team is not a lucky draw — it is a selection machine. A properly assembled team is half of shipping a product on time; the other half is a team that stays long enough to ship the next quarter. Fora Soft has spent 21 years shipping 625+ multimedia products — video conferencing, AI video recognition, AR/VR, telemedicine, IPTV, video surveillance — and every one of them succeeded or failed at the hiring step long before a single line of code was written. This playbook is how we think about that step, and what you should demand when you evaluate any vendor, including us.
When BrainCert asked us to turn their virtual-classroom concept into a production platform, the win was not the interactive whiteboard or the WebRTC stack — it was that the same engineers stayed on the roadmap long enough to deliver 500M+ minutes across 10 datacenters and four Brandon Hall awards. That continuity came from a hiring funnel that rejects 98% of candidates and a probation system that weeds out another 10–15% before they ever touch paying code. See it yourself on the BrainCert case study.
Our CEO Nikolay puts the philosophy in one sentence: “If you don’t love what you do, you’ll have to compete with those who do — and that’s a losing game.” Translated into hiring: we screen for passion and curiosity first, then hard skills, then fit. The rest of this guide is the mechanics.
Need a team that stays through your whole roadmap?
We’ll walk you through who would work on your project, how they were hired, and what they’ve shipped — before you commit to anything.
The hiring funnel math — what a 2% pass rate actually buys you
Most agencies hire 15–25% of candidates they interview, because their economics require bodies on billable seats. That number is the single easiest tell for cut-rate outsourcing — and the fastest way to predict whether your dream team will hold together six months into the roadmap. Here is the funnel Fora Soft uses to select top developers, benchmarked against the industry average:
| Stage | Fora Soft candidates | Pass rate | Typical agency | What it filters |
|---|---|---|---|---|
| Resumes reviewed | 500 | 100% | 500 | Baseline pool |
| HR phone screen | 100 (20%) | 20% | ~40% | English, passion, base knowledge |
| Behavioral interview | 40 (40%) | 40% | ~60% | Teamwork, ownership |
| Technical + test task | 12 (30%) | 30% | ~55% | Code quality, debugging, system thinking |
| Final CEO interview | 10 (90%) | 90% | ~90% | Values alignment, red-flag override |
Compounded, our offer rate lands at 2%. A typical offshore agency lands near 12–18%. That 6–9× difference is not marketing — it’s the gap between engineers who read documentation on weekends and engineers who need three Slack pings to answer a question.
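If you want to sanity-check the compounding yourself, here is a minimal sketch in Python. The stage rates come straight from the table above; the agency column uses the approximate mid-points we quoted, so treat both as illustrative rather than exact:

```python
from math import prod

# Per-stage pass rates from the funnel table above (assumed exact for illustration).
fora_soft = [0.20, 0.40, 0.30, 0.90]   # HR screen, behavioral, technical, CEO
typical   = [0.40, 0.60, 0.55, 0.90]   # approximate mid-market agency figures

print(f"Fora Soft offer rate:      {prod(fora_soft):.1%}")  # 2.2%
print(f"Typical agency offer rate: {prod(typical):.1%}")    # 11.9%
```

The point of the exercise: a few extra percentage points of selectivity at each stage compound into an order-of-magnitude difference at the offer stage.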
Question to ask any vendor: “How many candidates do you interview per hire, and what’s your 90-day attrition rate?” If they can’t answer in numbers, they don’t measure it.
The 5-stage interview stack — what every software hire should pass
The average software engineer in 2026 goes through 4.2 interview stages before an offer — up from 3.1 in 2021. We use five, because the extra stage is a paid test task that catches failures the live interview cannot. Here is what each stage actually tests, and the question you should ask any vendor about their own version.
Stage 1 — Resume and portfolio filter
What we look for. Real code, not titles. We ignore years-of-experience stats and read commit histories, open-source contributions, and past projects. A candidate with two years of shipping production WebRTC beats a candidate with five years of Jira tickets, every time.
Typical reject signal. Copy-paste skill lists, no public code, vague project descriptions (“worked on backend”), gaps with no narrative.
Stage 2 — HR screen for passion, English, and baseline knowledge
What we test. Three things in 30 minutes: (1) can they articulate a technical concept clearly in English? (2) do they have a side project, blog, or GitHub they light up when talking about? (3) do they understand the basics of the stack they claim (e.g., explain WebRTC signalling without googling)?
Why this stage cuts the pool by more than half. Most developers can code. Fewer can explain. Even fewer are genuinely curious. The funnel collapses here.
Stage 3 — Technical interview with department lead
What a department head or senior engineer runs. 60–90 minutes, split between (a) a deep dive into one project from the resume — what they decided and why, what they would now do differently; (b) a debugging exercise using real production-shaped code, not leetcode; and (c) a small system-design sketch relevant to the job family (e.g., “design the ingress for a 10k-viewer live stream”).
AI policy. Candidates can use ChatGPT, Claude, or Copilot — this reflects how the job is actually done in 2026, where Meta and others now pilot AI-aware coding rounds. We evaluate how the candidate uses the tool, not whether they memorised binary-tree traversals.
Stage 4 — Paid test assignment with written feedback
Why this is the single most important stage. A live interview tells you how someone performs under pressure. A test task tells you how they work when nobody is watching — which is how 95% of your project will be delivered.
What we grade. Correctness, code structure, naming, tests, README quality, Git hygiene, and the questions they ask before starting. Every candidate gets detailed written feedback with specific pages from books or docs to read next — regardless of outcome. This alone generates ~30% of our best referrals.
Stage 5 — Final interview with the CEO
Not a formality. Nikolay personally approves every offer. The CEO stage is a values check and a red-flag override — the point at which gut is allowed to overrule a passed technical interview. About 10% of candidates who pass Stage 4 still get declined here.
Reach for this 5-stage stack when: you’re hiring for a multi-quarter engagement and a senior mistake would cost 4–8 weeks to unwind. For short staff-augmentation stints, stages 1–3 are usually enough.
The probationary system — why it matters for your project
A 5-stage interview catches ~85% of bad hires. The probationary period catches the remaining 15% — the people who interview well but don’t ship. Done right, this is paid entirely by the vendor, not the client. Done wrong, it is billed to your project and you never know it happened.
The Fora Soft probation has three stages that every dedicated-team vendor should offer: a sandboxed pet project, a real-project mentorship, and a 30-day evaluation with a 12-month development plan.
The 80-hour pet project — catching risk before it reaches you
Every new engineer at Fora Soft builds a small but production-shaped product — typically a text and video chat on our media-server stack — in an 80-hour “Trial by Fire” sandbox before touching client code. We evaluate three variables:
1. Speed of execution. Did they ship within the 80-hour cap? Missing the deadline is a hard no. A paying client cannot afford to learn this later.
2. Quality. Code reviewed against our internal standard: readable naming, tests where sensible, Git history that tells a story, no TODO drift.
3. Initiative. Did they ask sharp questions, propose alternatives, notice missing requirements? Or just wait to be told? The difference compounds into months of hidden slippage once they’re on your project.
What to demand from any vendor: ask whether new hires build a sandboxed project before billing your account. If the answer is “we onboard on the client’s codebase,” you are paying for their training.
The mentorship model — how we lock in velocity
After the pet project, the new engineer joins a real project under a dedicated mentor — a mid-level-or-above engineer who has been with us 6+ months and personally went through the same probation. Mentors track four numbers, not vibes:
1. Quality. Pull-request acceptance rate, review comments per PR, escaped defects over 30 days.
2. Speed. Story points or hours delivered versus committed; estimate accuracy within ±20%.
3. Teamwork. Async communication hygiene, unblocking others, standup clarity, code-review participation.
4. Initiative. Bugs spotted and filed without being asked, infra ideas proposed, process improvements suggested.
Mentorship is not a cost center — the mentor usually learns as much as the mentee, especially on cross-stack questions. That cross-pollination is why our mid-level engineers ship at senior speed on narrow multimedia topics like WebRTC and LiveKit.
Evaluation and the 12-month development plan
After ~30 days on a real project we run a formal evaluation: what the engineer has absorbed, where the gaps are, and what the next 12 months should look like. The plan prioritises technologies the client roadmap actually needs — a common mistake is sending people to generic “cloud architect” courses when your account is shipping LiveKit agents.
At the end of probation (often before the standard three months are up), the team fills out an anonymous 360° questionnaire. The results are presented in a final interview, and the engineer decides whether to keep working with the mentor or graduate. Most choose to keep the mentor — the mentorship converts into long-term velocity, not hand-holding.
Want to audit a vendor you’re already working with?
We’ll walk your current team’s hiring and delivery process against the benchmark in this article — no sales pitch, just a second opinion.
Red flags when you evaluate any dev agency
The cheapest signal that an outsourcing engagement will fail shows up in the first 30 minutes of the first sales call. These are the five we see most.
1. “Yes, we can do all of that.” You list 25 features. They answer yes to every one without a single follow-up question. Reject — you’re buying bodies, not product thinking.
2. No public code. A serious agency has a GitHub org, at least a few open-source contributions, and is happy to show sanitised code samples from past projects. “NDA” is not an answer in 2026 — everyone signs NDAs; everyone can still show something.
3. Refusing named references. Ask for three named clients you can call directly. An agency that can’t produce them has churned every client or never had repeat business.
4. You never meet the engineers. If the sales engineer on the call is not on your project, demand to meet the people who will actually ship your code. Agencies that refuse usually rotate juniors in after contract signing.
5. Fixed-price bid on a fuzzy scope. Any agency quoting fixed-price for a scope you haven’t written down is either going to miss the budget, the deadline, or the quality bar — usually all three. A disciplined vendor insists on a short planning phase first.
Security and compliance — what to audit before signing
For multimedia, fintech, health, and education products, the vendor’s security posture is part of your product’s risk profile. A disciplined agency will volunteer most of this; the rest you should ask for in writing.
1. Written access and data-handling policies. Who has access to client code, what data leaves the VPN, how long it is retained. Two-factor on every tool. Rotate credentials on engineer departure within 24 hours.
2. GDPR, HIPAA, SOC 2 posture. Even if the agency is not SOC 2-certified, they should be able to complete a 30-question security questionnaire without going silent. HIPAA engagements require a BAA and a named security lead.
3. AI and third-party model dependencies. In 2026 this is the newest compliance trap: if the agency silently sends your customer data to a third-party LLM, you inherit that exposure. Demand a data-flow diagram for every AI feature.
4. Code ownership and IP assignment. The master agreement must assign all work product to the client, including derivative materials, docs, and test assets. Check the fine print on “vendor-retained libraries.”
How to verify references and past work — the 2-hour audit
The single highest-signal pre-signing step, and the one most buyers skip. Block two hours on your calendar and do this before you send an MSA.
1. Three named references, called directly. Not email. Ask: would they hire the vendor again for the same project? What broke, and how was it fixed? Has the assigned engineering team rotated? A silent beat after any of these is a tell.
2. Working demo of a similar product. Not slides, not screenshots — a live session where the agency drives a past product end-to-end. Watch how confident they are navigating their own code.
3. Public footprint cross-check. Clutch reviews (look for recent ones), GitHub org activity, engineering blog depth, conference talks. Fora Soft’s own footprint lives at our portfolio of 625+ shipped products.
4. LinkedIn background of the proposed team. How long have the assigned engineers been at the agency? < 6 months on average is a risk — you may be buying into a revolving door.
The 12-week ramp — what good onboarding looks like
A good team on a bad ramp still misses your first quarter. This is the cadence we run on every new dedicated-team engagement — and the cadence you should copy even if you hire a different vendor.
Weeks 1–2 — Discovery. Requirements interviews, workflow mapping, architecture sketch, risk register. Deliverable is a short discovery doc you could hand to a second vendor if you wanted to.
Weeks 3–4 — Environment and first slice. Dev, staging, CI/CD provisioned. First vertical slice shipped to staging to prove the toolchain, not deliver feature value. Any vendor that skips this is setting up a first-feature disaster.
Weeks 5–8 — First customer-facing feature. One feature shipped end-to-end to prod, including monitoring, alerting, and a one-page runbook. Your velocity baseline is measured here.
Weeks 9–12 — Cadence lock. Two-week sprint cadence with planning, demo, retro. Written commitment on velocity, quality, and incident response. From week 13 onward the team is in steady-state delivery.
Staff augmentation vs dedicated team vs project outsourcing
Choosing the wrong engagement model is the single biggest reason outsourced projects drift. The three common options trade control for accountability and speed of ramp.
| Dimension | Staff aug | Dedicated team | Project outsourcing |
|---|---|---|---|
| Who manages the team | You do | Vendor PM, you set priorities | Vendor end-to-end |
| Pricing shape | Hourly T&M | Monthly run-rate | Fixed price or phased T&M |
| Client PM overhead | High (15–25%) | Medium (5–10%) | Low (<5%) |
| Best for | Capacity gaps, specific skill | Evolving roadmap, continuity | Fixed scope, hard deadline |
| Rate range (nearshore senior) | $50–$80/hr | $55–$90/hr blended | $60–$100/hr effective |
| Ramp time | 1–2 weeks | 3–4 weeks | 4–8 weeks (discovery) |
Reach for staff augmentation when: you have strong internal engineering leadership and a written backlog, and you need 1–2 missing skills (e.g., a WebRTC or LiveKit expert) for < 6 months.
Reach for a dedicated team when: your roadmap is evolving, you’re past MVP, and you need a stable squad of 3–8 people who stay on the product for 6+ months. This is the most common Fora Soft engagement.
Reach for project outsourcing when: scope is fixed, deadline is hard, and you’re willing to pay a 10–20% risk premium for the vendor to own end-to-end delivery.
Mini case — how this hiring pipeline shipped BrainCert
Situation. BrainCert came to us with a virtual-classroom concept: HD video, interactive whiteboard with LaTeX and Wolfram Alpha rendering, live polls, proctored exams, SCORM/xAPI documents, white-label branding, and cross-platform clients (iOS, Android, Web). A single junior-heavy team would have bled 12–18 months just understanding WebRTC reliability on unreliable education networks.
What we did. We staffed a dedicated team of senior multimedia engineers out of our 2% hiring funnel. Every one of them had shipped at least one production WebRTC product before touching BrainCert. We ran a two-week planning phase, split the architecture into media server, signalling, whiteboard CRDT, and client SDK layers, and put a mentor on every new engineer we added over the 2-year engagement.
Outcome. BrainCert now runs 500M+ minutes across 10 datacenters, serves 100,000+ customers, and has won the Brandon Hall Award four times. The CEO called the work “outstanding in every aspect.” Want a similar independent read on your stack? Book a 30-min review.
Cost math — what disciplined hiring actually costs (and saves)
A 2% hiring funnel is not free. Fora Soft spends roughly 40–60 hours of lead and HR time per offer, plus the 80-hour paid pet project that does not bill clients. At fully-loaded internal rates, that is ~$6–$10k per hire, absorbed by the agency, not your invoice.
The saving for you is on the other side. A bad senior hire on a 12-month roadmap typically costs the client 4–8 weeks of slipped delivery before being replaced — at a mid-market dedicated-team run rate of $30–$50k/month for a small squad, that is $30–$100k of invisible waste you never see itemised.
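A back-of-envelope version of that waste calculation, using the ranges above as inputs (the function name and the four-weeks-per-month simplification are ours; plug in your own contract’s run rate):

```python
def bad_hire_cost(monthly_run_rate: float, weeks_slipped: float) -> float:
    """Client-side cost of a bad senior hire: slipped weeks priced at the
    squad's monthly run rate, treating a month as ~4 weeks."""
    return monthly_run_rate * weeks_slipped / 4

# Ranges quoted above: $30-50k/month squad, 4-8 weeks of slip.
low  = bad_hire_cost(30_000, 4)   # $30,000
high = bad_hire_cost(50_000, 8)   # $100,000
print(f"Invisible waste: ${low:,.0f} to ${high:,.0f}")
```

That $30–100k never appears as a line item on an invoice, which is exactly why it is worth pricing explicitly before you sign.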
We layer Agent Engineering (AI-assisted delivery using Claude, Copilot, and Cursor, with a human senior in the loop) on top of this hiring standard. That combination lets our senior engineers ship code at 1.3–1.8× their old velocity on the tasks AI does well (scaffolding, test generation, boilerplate migration, refactoring), which keeps our effective rate competitive with cheaper agencies that cut quality instead of compounding it. For a deeper walkthrough of how we estimate cost, see our software estimating guide.
A decision framework — pick a dev team in five questions
Q1. What is the shape of my roadmap for the next 12 months? Fixed scope and hard deadline → project outsourcing. Evolving and multi-phase → dedicated team. One specific skill gap → staff augmentation.
Q2. Do I have a senior engineering lead on my side? Yes → staff aug or dedicated team with you setting priorities. No → dedicated team with vendor PM or full project outsourcing.
Q3. How narrow is my technology? Multimedia, real-time video, AI agents, computer vision are specialist fields with small talent pools — vet deeply or hire a specialist shop. Generic CRUD SaaS → most competent agencies can deliver.
Q4. What is my regulatory and compliance envelope? HIPAA, GDPR, SOC 2, PCI — demand written policies, pen-test history, and a named security lead from day one. An agency that answers vaguely is automatically disqualified.
Q5. What is the cost of a 6-week slip? If the answer is “missed funding round” or “missed customer contract,” pay the premium for a proven senior team. If the answer is “annoying,” cheaper offshore can still work.
Stuck on which engagement model fits your roadmap?
A 30-minute call with our CEO typically shortens vendor shortlisting from six weeks to one — with no obligation to hire us.
KPIs to track once the team is assembled
Quality KPIs. Escaped defects per release < 3, 4–8 review comments per PR, average PR lifetime < 24h. If PRs take days to merge, communication is broken — not the code.
Business KPIs. Scope delivered per sprint against commitment > 85%, estimate accuracy within ±20%, demo-to-prod cycle < 2 weeks for new features. These map directly to your predictability with customers.
Reliability KPIs. Team attrition < 10% on your engagement (vs the 13–18% industry baseline), same-day incident acknowledgement 24/7 for production systems, P1 MTTR < 4h. If your vendor cannot commit to these in writing, you don’t have a reliability contract.
When NOT to hire an external dev team
Outsourcing is not always the answer — and a vendor that pretends otherwise is selling you.
Your product is your only asset and the codebase is the IP. If you are a product-led startup and the software itself is your core differentiator, invest in an in-house senior or two first. Outsource around them, not instead of them.
You don’t yet know what you want. If your spec is “something like Uber but for…” and there is no written requirements doc, start with a one- to two-week paid discovery engagement, not a full team. A disciplined planning phase saves 5–10× its cost in avoided rework.
You cannot commit a decision-maker to weekly reviews. Even the best vendor stalls without an empowered client lead. If you cannot dedicate 3–5 hours a week, delay the engagement.
FAQ
How long does the Fora Soft hiring process take from application to offer?
About 3–4 weeks for a senior role. Resume review is same-day, HR screen within a week, technical interview the following week, a 5–7 day test task, and a final CEO interview shortly after. We deliberately don’t compress further — the test task needs real thinking time.
Can I interview engineers before they are assigned to my project?
Yes. Every client gets a CV pack and a 30-minute call with each proposed engineer before the contract is signed. You can reject a match with no penalty. Any vendor that refuses this should be removed from your shortlist.
What’s the actual cost of a Fora Soft dedicated team?
It depends on squad shape. A typical 4-person dedicated team (1 lead + 2 engineers + 1 QA) runs in the nearshore-senior rate range, which translates to a predictable monthly run-rate your CFO can budget against. We provide a written estimate on the first call — no charge.
What happens if the engineer on my project underperforms?
We swap them within 2–3 weeks at no cost, with full knowledge transfer. Our probationary system catches most mismatches before they touch client code, but when one slips through, replacement is our responsibility, not yours.
Do you specialize in any specific tech stacks?
Our deep specialties are multimedia and real-time communication: WebRTC, LiveKit, Agora, Twilio, Wowza, custom media servers, AI video recognition, and telemedicine. We also staff standard web/mobile stacks (React, Node, Swift, Kotlin, Flutter) to round out product teams around those specialisms.
How do you handle data confidentiality and IP?
Mutual NDA is signed before the first technical call. The master services agreement assigns all work product, including code and derivative materials, to the client. Every engineer signs an individual confidentiality agreement and completes annual security training. We can support GDPR and HIPAA engagements with appropriate controls.
What’s your engineer attrition rate on client engagements?
Below the industry baseline of 13–18%. Our retention is driven by narrow specialisation (engineers work on interesting multimedia problems, not generic CRUD) and the mentorship-plus-development-plan structure. Most engineers who join us on a project stay through the full 12–24 month roadmap.
How does Agent Engineering change what I pay per feature?
AI-assisted delivery compresses the scaffolding, boilerplate, migration, and test-generation portions of most features — typically 20–40% of total effort — by roughly 1.5×. Net effect on invoice: mid-single-digit-percentage savings on small features, double-digit savings on large refactors, no change on R&D-heavy unique features where the human engineer is the bottleneck.
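The arithmetic behind those savings bands is an Amdahl-style estimate: only the AI-friendly share of a feature gets faster, so total savings are capped by that share. A quick sketch, using the figures quoted above (the function name is ours):

```python
def net_savings(ai_share: float, speedup: float) -> float:
    """Fraction of total effort saved when only `ai_share` of the work
    is accelerated by `speedup` (Amdahl-style estimate)."""
    return ai_share * (1 - 1 / speedup)

# Figures quoted above: 20-40% of effort compressed ~1.5x.
print(f"Small feature (20% AI-friendly):  {net_savings(0.20, 1.5):.1%}")  # 6.7%
print(f"Large refactor (40% AI-friendly): {net_savings(0.40, 1.5):.1%}")  # 13.3%
```

This is why the invoice impact ranges from mid-single-digit to double-digit percent rather than the headline 1.5×: the human-bottlenecked share of the work does not compress at all.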
What to Read Next
Estimating
A Guide to Making Estimates for Software Projects
How we turn a fuzzy brief into an accurate cost and timeline in five days.
Quality
Ensuring Quality: Testing at Every Stage
The QA stack that complements the hiring funnel — how we keep defects escaping below 3 per release.
AI Delivery
AI in the Software Development Process
How Agent Engineering compresses our delivery cost without trading off quality.
Company
21 Years of Fora Soft: 625+ Products Shipped
The longer story of how our hiring and delivery machine came together.
Ready to assemble a team that actually ships?
A dream team is not a lucky hire. It is a 2% hiring funnel designed to select top developers rather than fill seats, a paid 80-hour pet project, a mentor who tracks four numbers, a 12-month development plan tied to your roadmap, and an engagement model that matches your leadership bandwidth. Get those five pieces right and your project shifts from “hope it lands” to “predictable sprint velocity.”
Fora Soft has been running this exact machine for 21 years across 625+ shipped products, from BrainCert to TradeCaster to V.A.L.T. If you’re shortlisting vendors for a multimedia, video, or AI product, the fastest way to see whether we fit is a 30-minute call — we’ll walk you through the engineers who would join your project, how they were vetted, and what they have shipped.
Meet the team that would actually build it
A no-obligation 30-minute call with our CEO — initial requirements, architecture sketch, and an honest cost estimate, free of charge.


