Remote video production platform for Netflix, Apex Legends, and Electronic Arts

Key takeaways

Speed Space replaces 4–6 tools with one platform. Pre-Speed Space, Revo Studio juggled Zoom + radios + separate recorders + spreadsheets + Slack + Frame.io for shoots that ship to Netflix, HBO, Apex Legends, EA, Paris Fashion Week and Live Nation Urban. Speed Space collapses all of that into a single browser tab.

The streamlining is operational, not just visual. Setup time per shoot dropped materially, post-production cycles shortened, frame loss in masters went to zero, and crews now run 25-participant sessions with zero downtime. The win is workflow, not chrome.

The stack: Next.js + Node + WebRTC + LiveKit + MongoDB + AWS. JavaScript / Next.js for the producer console, Node + Express for the API layer, socket.io for real-time crew comms, WebRTC + LiveKit for the live conference and SFU, MongoDB for project state, AWS for post-production storage.

Built around four roles, not one user type. Admin, Production Member, Talent and Representative each get exactly the controls they need — no more. Role-based access is the unsung feature that prevents the on-set chaos a generic conferencing tool will not protect you from.

This is a workflow piece. For the buyer’s playbook (cost ranges, SaaS comparison, decision framework), see the companion article: Speed Space: Custom Remote Video Production Platform. This piece walks the actual shoot, end to end.

Why Fora Soft built this article (and Speed Space)

Fora Soft has spent 21 years shipping real-time video, streaming and AI products — 625+ products across telemedicine, e-learning, video surveillance, OTT and live entertainment. Speed Space is the platform we built with Revo Studio — a Southern California video production agency that ships for Netflix, Apex Legends, Electronic Arts, HBO, Paris Fashion Week and Live Nation Urban. This article is part of our project series — we walk through what Speed Space does, how it works, and the streamlining mechanics that turn a chaotic remote shoot into something that looks like a real studio.

If you are a producer evaluating whether your team should keep duct-taping Zoom + Frame.io + Slack, or a CTO at an agency considering a custom build, the playbook below is the one we hand clients on day one. For the cost-and-comparison version of this story, see the companion piece: Speed Space: Custom Remote Video Production Platform. For the project page with screens and capabilities: forasoft.com/projects/speed-space.

Below is the video overview Revo Studio uses to demo the platform internally:

Figure 1. Speed Space overview — the producer console, multi-stream switching and role-based crew controls.

Want this kind of workflow for your studio?

Thirty minutes with a senior video engineer: walk through your current tooling, surface the streamlining opportunities, scope what a custom build looks like.

Book a 30-min scoping call → WhatsApp → Email us →

What Speed Space is in one sentence

A web platform that lets a distributed crew run a multi-camera, broadcast-quality video production as if everyone were in the same studio — no installs, no separate recorder, no radio chatter, no spreadsheet of camera-to-take cross-references.

Speed Space is used to ship content for some of the most demanding brand and broadcast deliverables in the world: Netflix, HBO, Apex Legends, Electronic Arts, Paris Fashion Week and Live Nation Urban. The streamlining is the product.

If you have ever opened five tabs to run a single remote shoot — Zoom for the talk, OBS for the record, Slack for the crew, Frame.io for the proxy, a shared Google Doc for the shot list — this is the experience Speed Space replaces.

Before vs after: how the shoot day actually changes

The flat “here are the features” tour misses the point. The point is what changes for the crew on shoot day. Side by side:

| Phase | Before Speed Space | After Speed Space |
|---|---|---|
| Pre-call setup | Three tools to launch, two device dial-ins per participant, radios paired, shared docs opened. | Producer creates a set; talent click an invite link; crew join a single tab. |
| Live take | Producer alt-tabs between Zoom, OBS, Slack and notes; talent self-records on phone; crew direct via radio. | Producer cuts between cameras live in the console, pushes overlays, draws on shared screen; talent looks at one place. |
| Recording quality | Network-degraded conference capture; frame loss visible in master; talent self-recordings sometimes forgotten. | Local recording on each device at 1080p / 8 Mbps (5× standard); zero frame loss in master regardless of network. |
| Post-production handoff | Files merged from 5+ devices, naming conventions guessed, time-code drift hunted manually. | All masters auto-uploaded to AWS with a consistent naming scheme; editors pull a single bundle. |
| Crew coordination | Radio cross-talk, Slack DMs to specific crew, Zoom chat for everyone, lost context. | Built-in chat scoped per role; producers private-message talent reps; talent doesn't see crew chatter. |
| Talent UX | Install desktop client, sign in, configure mic, hope for stable Wi-Fi, panic if anything breaks. | Click invite link, allow camera/mic, look at one button. Crew handles every other control from their side. |

None of this is hypothetical. It is the day-to-day shape of a Revo Studio shoot before and after the platform went live. Below we walk each piece in detail.

The producer console: the surface where the streamlining happens

The producer console is the heart of Speed Space. Everything a director or production member needs during a take is one click away, with no tab-switching.

Multi-stream switching

Producers cut between any participant’s camera feed live, the same way a control room does. Layouts change in one click — full-screen on talent, side-by-side talent + interviewer, picture-in-picture for B-roll. The active layout is what records to the master, so the cut you direct is the cut the editor receives.
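A layout switch like this is, under the hood, a producer command travelling over the real-time channel. A minimal sketch of how that might look — the event name, payload shape and validator are illustrative assumptions, not Speed Space's actual API:

```javascript
// Producer side (browser) might emit something like:
//   socket.emit("layout:change", { setId: "set42", layout: "side-by-side", active: ["cam1", "cam2"] });
//
// A pure validator the server could run before broadcasting the command
// to the recording pipeline. All names here are hypothetical.
const LAYOUTS = new Set(["fullscreen", "side-by-side", "pip"]);

function validLayoutCommand(cmd) {
  return (
    LAYOUTS.has(cmd.layout) &&
    Array.isArray(cmd.active) &&
    cmd.active.length > 0
  );
}
```

Because the active layout is what records to the master, validating commands server-side keeps a mistyped client from corrupting the cut.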

Recording controls (resolution, FPS, codec, bitrate)

Per session, the producer picks the resolution (up to 1080p in default deployments), the frame rate (24 / 25 / 30 / 60), the codec (H.264 default, AV1 for high-spec talent devices), and the file extension. The default 1080p / 8 Mbps is roughly 5× the bitrate of generic conferencing capture and gives editors the colour-grading headroom they expect.
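In the browser, session parameters like these map onto MediaRecorder options. The helper below is a hedged sketch — the option names (mimeType, videoBitsPerSecond) are real MediaRecorder fields, but the helper itself, its defaults and the MIME strings are illustrative and codec support varies by browser:

```javascript
// Hypothetical helper: turn a set's recording params into MediaRecorder options.
function buildRecorderOptions({ codec = "h264", bitrateMbps = 8 } = {}) {
  const mimeType =
    codec === "av1"
      ? "video/webm;codecs=av01"   // high-spec talent devices
      : "video/webm;codecs=h264";  // broad fallback (support varies by browser)
  return {
    mimeType,
    videoBitsPerSecond: bitrateMbps * 1_000_000, // 8 Mbps ≈ 5× conferencing capture
    audioBitsPerSecond: 128_000,
  };
}

// In the talent's browser (sketch, not runnable server-side):
//   const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
//   const recorder = new MediaRecorder(stream, buildRecorderOptions({ bitrateMbps: 8 }));
//   recorder.start(1000); // emit a chunk roughly every second
```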

Backgrounds, animations, text, drawing tools

Producers push background fills, looped animations, lower-thirds, image overlays and on-screen drawing into the live feed. Drawing is particularly powerful for direction: the producer literally circles the spot where they want talent to look, and talent sees the circle on their preview.

Screen sharing and crew chat

Crews share screens to walk talent through scripts, storyboards, lighting reference photos. The text chat sits next to the call, scoped by role — producer-to-talent-rep, producer-to-crew, all-hands — so coordination doesn’t leak into the talent’s view of the shoot.

Reach for the producer console pattern when: the director currently uses three or more browser tabs during a live take. The streamlining win compounds with each tool you can fold in.

Four roles, four control surfaces

Generic video conferencing has one user type: “participant.” Speed Space splits that into four, each with a deliberately different surface area.

Admin — the platform owner

Manages the whole studio: creates studios and sets, invites users and assigns roles, configures recording defaults, manages billing, audits download history. The admin's console is the broadest; the role usually belongs to a single agency principal or studio head.

Production Member — the working crew

Creates and runs sets, handles recording sessions, controls live audio and video streams, invites talent. This is where most of the day-to-day operation lives. Production Members can’t change platform-level config — they live inside the sandbox the admin set up.

Talent — the on-camera performer

Joins via a unique invite link, no account creation. Their console is bare: a preview of their own camera, a “raise hand” button, and a “leave” button. They cannot change recording configs, switch layouts, or accidentally screen-share. Friction-free for the talent, and impossible for them to break the production.

Representative — the talent rep / observer

Talent agents, brand reps and clients watching the shoot get an observer seat — they see the live feed but cannot mute talent, change cameras or interfere. They can chat privately with the producer. This is the role no generic conferencing tool offers, and it is one of the most-asked-for features by Revo’s clients.

Reach for a four-role model when: non-crew people (talent reps, brand-side observers, post-production reviewers) regularly join shoots and need to watch without interfering. Generic conferencing puts them in the same bucket as crew — that’s where on-set chaos starts.
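One common way to encode a four-role model like this is a flat role-to-capability map checked on every action. The role and capability names below mirror the roles described above but are illustrative, not Speed Space's internals:

```javascript
// Illustrative capability map — each role gets exactly what it needs, no more.
const CAPS = {
  admin: ["configure", "billing", "record", "switch-layout", "chat", "watch"],
  production: ["record", "switch-layout", "chat", "watch"],
  talent: ["watch", "raise-hand"],
  representative: ["watch", "chat-with-producer"],
};

// Deny-by-default check: unknown roles and unknown actions both refuse.
function can(role, action) {
  return (CAPS[role] ?? []).includes(action);
}
```

The deny-by-default shape matters: a representative who tries a producer-only action is refused cleanly instead of falling through to some shared “participant” permission set.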

Studios and sets: the project model that doesn’t leak

A common pain in remote production is project state living in five places: shot list in Google Sheets, footage in Frame.io, briefs in Slack, takes on someone's desktop. Speed Space's answer is a two-level project model.

Studios. A studio is a virtual workspace — think of a top-level Google Drive folder. Each studio holds its own assets, crew memberships, recording defaults and post-production files. An agency might have one studio per ongoing client (Netflix, HBO, EA), and a producer flips between them like switching client folders.

Sets. A set is one shoot configuration inside a studio — specific recording params (codec, FPS, resolution), participant list, scheduled time, intended deliverable. The same studio can have multiple sets in flight: today’s 4-camera interview, tomorrow’s product b-roll, next week’s live-stream rehearsal.

Up to 25 participants per set, with zero downtime in production sessions. The set boundary keeps recordings, chat history and crew assignments tidy — nothing leaks across.
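In a document store like MongoDB, the two-level model can be as simple as a set document pointing at its parent studio. The field names below are illustrative, not the actual Speed Space schema:

```javascript
// Hypothetical document shapes for the studio → set tree.
// One studio document per client:
const netflixStudio = {
  _id: "studio_netflix",
  name: "Netflix",
  members: [{ userId: "u_101", role: "production" }],
  recordingDefaults: { resolution: "1080p", fps: 30, codec: "h264" },
};

// One set document per shoot configuration, pointing back at its studio:
const interviewSet = {
  _id: "set_4cam_interview",
  studioId: netflixStudio._id, // the boundary nothing leaks across
  recording: { ...netflixStudio.recordingDefaults, fps: 24 }, // per-set override
  participants: [], // capped at 25 per set
  scheduledAt: new Date("2025-06-01T17:00:00Z"),
};
```

Every recording, chat message and crew assignment carries the set's id, so queries for client A can never surface client B's material.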

Building a multi-tenant production tool of your own?

Studios + sets + role-based access is the model that holds at scale. We’ll walk you through the trade-offs we made for Speed Space and adapt them to your shape.

Book a 30-min architecture call → WhatsApp → Email us →

Reach for studios + sets when: your agency runs concurrent productions for multiple clients and you need crisp boundaries between them. Mixing client A’s chat history into client B’s post-production folder is a contractual problem, not just a UX one.

Technologies we used (and why)

A production-grade remote stack rewards picking boring, battle-tested tech over fashionable choices. The stack we shipped:

| Layer | Technology | Why |
|---|---|---|
| Frontend | JavaScript, Next.js | SSR for the marketing surface; client-side rendering for the producer console, where state lives in WebRTC tracks and DOM canvases. |
| API / backend | Node.js, Express | One language across the stack; Express keeps the surface tight and the codepath debuggable. |
| Real-time messaging | socket.io | Crew chat, signalling fallbacks, and the producer command channel (start record, change layout) over a single persistent connection. |
| Live video / SFU | WebRTC + LiveKit | Sub-second preview latency, native multi-party support, recordings via local capture rather than server-side mixing. (See our Agora alternatives playbook for why we ship LiveKit-class SFUs.) |
| Persistence | MongoDB | A documents-first model fits the studios → sets → recordings → participants tree without rigid schema migrations. |
| Storage | AWS (S3-compatible) | Multipart uploads from each talent device; lifecycle to Glacier for cold archives. Cheap and reliable at the volumes Revo runs. |
| Browser capture | MediaRecorder API + IndexedDB chunk store | Local recording at full quality, with crash-resilient chunk replay if the talent's tab refreshes mid-take. |
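The browser-capture layer (MediaRecorder + IndexedDB) can be sketched roughly as follows. persistChunks and assembleTake are hypothetical helpers, and the "chunks" object store is assumed to exist already:

```javascript
// Write each MediaRecorder chunk to IndexedDB as it arrives, keyed by sequence
// number, so a tab refresh mid-take loses at most the in-flight chunk.
function persistChunks(recorder, db, takeId) {
  let seq = 0;
  recorder.ondataavailable = (event) => {
    const tx = db.transaction("chunks", "readwrite");
    tx.objectStore("chunks").put({ takeId, seq: seq++, blob: event.data, ts: Date.now() });
  };
}

// Pure reassembly helper used after a refresh: order chunks and flag any gaps
// so the integrity report can demand a re-record of the missing span.
function assembleTake(chunks) {
  const ordered = [...chunks].sort((a, b) => a.seq - b.seq);
  const gaps = [];
  for (let i = 1; i < ordered.length; i++) {
    if (ordered[i].seq !== ordered[i - 1].seq + 1) gaps.push(ordered[i - 1].seq + 1);
  }
  return { ordered, gaps };
}
```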

For the deeper architectural “why” behind the SFU choice and how this stack scales past 25 participants, the companion piece Scalability in Video Streaming and Conferencing walks the cascade pattern.

A real shoot, step by step

Walk through what a typical 60-minute Revo shoot looks like inside Speed Space:

T-24h. Producer creates the set. Inside the client’s studio, the producer creates a new set, picks the recording params (1080p, 30 fps, H.264, MP4), schedules the time, and generates unique invite links for each talent and representative. Crew members are added by role.

T-30 min. Crew joins, runs hardware checks. Production Members open the producer console; the platform runs an automatic camera, microphone and bandwidth probe. Anything red gets flagged with a fix-it hint (close other tabs, switch from Wi-Fi to Ethernet, raise camera height).

T-5 min. Talent click their invite link. Single browser permission prompt, automatic mic and camera probe, friendly hold screen. Producer pings the talent rep on private chat to confirm both arrived. No accounts created.

T-0. Recording starts. Producer hits record. Each participant’s browser silently begins local recording at 1080p / 8 Mbps to IndexedDB. The conference stream pushed to the SFU is a separate, lower-bitrate signal for live preview.

During the take. Producer cuts between cameras, pushes overlays, draws on shared screens, sends private text guidance to talent reps, drops a lower-third on-screen for branding. Talent looks at one button and one preview. The crew chat scrolls quietly in the corner of the producer console.

End of take. Producer stops recording. Each device begins a multipart upload of its local capture to AWS — the talent can close their browser as soon as the upload completes (or pause and resume later). Files arrive named by participant + set + timestamp.

Post-shoot. Editors pull masters into their NLE handoff workflow. The masters live in AWS, organised under the studio → set → recording tree; search, sort and download happen in the producer console. Frame loss in the deliverable is zero, regardless of network conditions during the take.
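The end-of-take upload in the walkthrough above is a standard S3 multipart flow. A minimal sketch of the client-side part planning — the part size and helper name are illustrative:

```javascript
// Split a local master into fixed-size byte ranges for S3 multipart upload.
// Each part is then PUT against its own presigned URL (not shown).
function planParts(totalBytes, partSize = 8 * 1024 * 1024) {
  const parts = [];
  for (let start = 0; start < totalBytes; start += partSize) {
    parts.push({
      partNumber: parts.length + 1, // S3 part numbers are 1-based
      start,
      end: Math.min(start + partSize, totalBytes),
    });
  }
  return parts;
}
```

Because each part uploads independently, talent can pause, resume later, or survive a flaky connection without restarting the whole transfer.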

The streamlining wins, quantified

Speed Space went live and quickly became core to Revo Studio’s daily operations. The wins, in order of how often clients ask about them:

Frame-loss elimination in masters. Local recording on each participant’s device means the file delivered to editors is the same crisp 1080p / 8 Mbps capture, regardless of internet wobble during the take. The single biggest editorial pain in remote production goes to zero.

Setup time per shoot. Pre-Speed Space, opening five tools, dialling devices, pairing radios and confirming everyone was online ate roughly 20–30 minutes per shoot. Post-Speed Space, the producer creates a set, sends invite links and the crew is live in minutes.

Post-production cycle time. Editors used to spend hours reconstructing master files from five-plus device recordings, hunting time-code drift, and matching takes. The single AWS-hosted asset tree makes the handoff a no-op — cycle time shortened materially.

Crew clarity. Role-based access stops the “who is doing what” chaos that breaks generic-conferencing setups at the worst times. Producers see the producer view; talent see the talent view; reps see the rep view. There’s no “wait, who muted the mic?” on a Netflix shoot.

Tool-cost consolidation. Pre-Speed Space stack: Zoom + OBS + Slack + Frame.io + radios + spreadsheets ≈ six tools and licenses. Post-Speed Space: one platform. The hard-cost saving is meaningful and the soft-cost saving (training, license sprawl, support) is bigger.

Production analytics: what producers see after the take

A surface most generic conferencing tools omit is the post-shoot debrief. Speed Space surfaces every metric a producer needs to learn from a take and prove to a client that the shoot was clean.

Per-participant connection telemetry. Drawn directly from WebRTC's getStats() API: bitrate over time, packet loss, jitter, RTT, the codec used, and whether hardware acceleration was active. If a take looked rough on preview, the producer sees exactly which participant's link was the problem.
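getStats() returns a map of typed entries, and a per-participant summary can be reduced from the "inbound-rtp" ones. The reducer below is a hedged sketch — the stats field names (packetsLost, packetsReceived, jitter) follow the WebRTC statistics spec, while the summary shape is our assumption:

```javascript
// Reduce RTCPeerConnection.getStats() entries to a per-stream video summary.
function summarizeInboundVideo(statsEntries) {
  return statsEntries
    .filter((s) => s.type === "inbound-rtp" && s.kind === "video")
    .map((s) => ({
      ssrc: s.ssrc,
      lossPct: (100 * s.packetsLost) / (s.packetsLost + s.packetsReceived),
      jitterMs: s.jitter * 1000, // the spec reports jitter in seconds
    }));
}
```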

Recording integrity report. For every uploaded master: file size vs expected, duration match, NTP-synced timestamp range, chunk gap detection. Anything red triggers a re-upload prompt before the talent leaves the session.
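A per-master integrity check like the one just described can be a handful of threshold comparisons. The tolerances below are illustrative, not Speed Space's actual values:

```javascript
// Flag a master for re-upload (or re-record) before the talent leaves.
function integrityOk({ expectedBytes, actualBytes, expectedMs, actualMs, chunkGaps }) {
  const sizeOk = Math.abs(actualBytes - expectedBytes) / expectedBytes < 0.05; // ±5%
  const durationOk = Math.abs(actualMs - expectedMs) < 500; // ±500 ms
  return sizeOk && durationOk && chunkGaps === 0;
}
```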

Studio-level operational metrics. Hours of content recorded per week, average session duration, active producers, support tickets per 100 sessions, post-production cycle time vs baseline. Agency principals use these to demonstrate streamlining ROI to clients during renewals.

Integrations: how Speed Space talks to the rest of the production stack

No tool wins by being an island. The integration surfaces that matter for a production-grade platform:

NLE handoff. Adobe Premiere Pro, DaVinci Resolve and Avid Media Composer pull in masters via S3-compatible URLs, with AAF / EDL / XML exports keeping time-code and stem isolation intact. Editors do not have to babysit naming or sync.

Frame.io / cloud review. When the editorial side already lives in Frame.io, proxy generation auto-uploads watermarked H.264 / H.265 files to a Frame.io project, keeping review tooling as the team knows it.

Live broadcast egress. When producers want a feed pushed to traditional broadcast (vMix, OBS, AWS MediaLive, social platforms), Speed Space exposes NDI on LAN and SRT on WAN. RTMP fallback covers anything that doesn’t speak modern protocols.

Identity / SSO. SAML and OIDC for crew accounts; magic-link invites for talent with no account creation. Studios can plug into their client's identity provider so enterprise sessions start without onboarding friction.

Reach for a custom integration layer when: your editorial team uses a specific NLE handoff path (AAF for Avid, XML for Final Cut), your broadcast pipeline depends on NDI / SRT / SMPTE 2110, or your enterprise client mandates SSO. SaaS tools rarely cover all three at once.

Five operational pitfalls Speed Space removes

1. Talent forgetting to start their local recorder. The most common cause of an unusable take in self-record setups. Speed Space starts and stops local recording from the producer’s console — talent has nothing to forget.

2. Crew chatter leaking into the talent’s view. Generic group conferencing puts everyone in the same chat. Role-scoped chat keeps producer-rep coordination invisible to talent.

3. Time-code drift across takes. Local recordings are stamped with NTP-synced timestamps and chunk metadata; reconciliation is automatic in post.

4. Lost takes from a refreshed tab. The IndexedDB chunk store means a mid-take browser refresh replays the captured chunks instead of losing them. Generic browser recording silently drops the take.

5. File-naming spaghetti. Every recording arrives at AWS named by participant + set + timestamp + take number. No more “final_v3_FINAL_real.mp4” in twenty places.
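A naming scheme of that shape is a one-liner to enforce at upload time. The exact pattern below is illustrative, not the one Speed Space uses:

```javascript
// Deterministic master file name: set + participant + timestamp + take number.
function masterName({ participant, setId, recordedAt, take, ext = "mp4" }) {
  const ts = new Date(recordedAt).toISOString().replace(/[:.]/g, "-"); // filesystem-safe
  return `${setId}__${participant}__${ts}__take${String(take).padStart(2, "0")}.${ext}`;
}

// masterName({ participant: "talent-a", setId: "set42",
//              recordedAt: "2026-01-02T03:04:05Z", take: 3 })
// → "set42__talent-a__2026-01-02T03-04-05-000Z__take03.mp4"
```

Because the name is derived, two devices can never collide, and editors can sort a whole set chronologically by filename alone.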

KPIs we track on a Speed-Space-class build

Quality KPIs. Master frame-drop rate (target: 0%), local capture bitrate vs configured (95th percentile within 5%), audio sync drift across tracks (< 30 ms), proxy generation time (< 2× recording duration). The first one is the headline; if it’s not zero, nothing else matters.

Reliability KPIs. Session uptime (target 99.9%), upload success rate post-shoot (> 99.5%), p95 preview latency (< 500 ms intra-region), SFU CPU per participant (< 20%).

Operational KPIs. Setup time per shoot, post-production cycle time vs pre-platform baseline, tool-license count per active studio, support tickets per 100 sessions. These are where streamlining shows up on the bottom line.

Security and compliance in 30 seconds

Production teams handle pre-release content under NDA, talent contracts, child-talent age verification, brand-sensitive material. The shortlist:

End-to-end encryption on the conferencing channel for sensitive shoots. Encryption at rest with KMS-managed keys for AWS storage. Watermarking on proxies sent to external editors. Audit trails on who downloaded which master, when, and from where.

SOC 2 Type II for any enterprise customer. SSO via SAML / OIDC for crew accounts. Data residency — if a Netflix EU shoot needs masters in eu-west-1, architect the bucket layout for it on day one.

When NOT to streamline with a custom build

A custom Speed-Space-class build is a poor fit when:

  • You produce under ~20 hours/month and the deliverable is podcast or interview — Riverside or Zencastr will save you six figures.
  • Your output is single-platform live streaming with light editing — StreamYard out-competes any custom MVP for years.
  • Your crew is <5 stable people — the marginal value of role-based access doesn’t justify the build cost.
  • Editorial review (not capture) is the bottleneck — fix Frame.io C2C / Adobe workflow first.
  • You don’t have a partner with deep WebRTC, SFU, MediaRecorder and AWS Media experience. This stack is unforgiving for generalist teams.

Evaluating any remote-production tool: the four-test runbook

1. The 30-minute frame-drop test. Run a multi-participant session at 1080p with deliberate network impairment. Compare local master to streamed recording. If the platform doesn’t do double-ender, the loss is visible.

2. The role-permission walkthrough. Have a producer, a talent and a representative join. Try to make talent do producer-only things. The platform should refuse cleanly.

3. The post-production handoff. Export to Premiere / DaVinci / Avid. Check time-code accuracy, AAF / EDL / XML fidelity, audio stem isolation. If editors can’t pull clean tracks, it isn’t shippable.

4. The crash-recovery test. Mid-take, tell talent to refresh their browser. The recording should resume from the last good chunk, not lose the take. If it loses the take, you cannot ship serious productions on it.

Want this runbook on your stack?

We’ll run the four tests on your current setup on a 30-minute call — or use them to scope a custom Speed-Space-class build for your studio. Either way, you walk away with a prioritised gap list.

Book a 30-min audit call → WhatsApp → Email us →

What’s next on the Speed Space streamlining roadmap

Three areas where we are actively iterating with Revo Studio:

1. AV1 capture as an opt-in. AV1 reduces bitrate ~30% at the same quality — storage and CDN savings on every shoot. We ship it for high-spec talent devices and fall back to H.264 elsewhere.

2. WebGPU-based local noise removal and background replacement. Krisp-class processing on the talent’s device, not in the cloud. The master stays clean without adding latency. See our AI Video Quality Enhancement playbook.

3. Real-time multilingual captions and dub tracks. Embedding LiveKit-class multimodal agents is the obvious next layer for international productions — the pattern we use is documented in our LiveKit Multimodal Agents Guide and AI Agents on WebRTC.

The architecture stays the same: WebRTC SFU for live, double-ender for the master, role-based access on top, AWS for storage. Everything else is icing.

FAQ

What does Speed Space actually do, in one sentence?

It is a web platform that lets a distributed crew run a multi-camera, broadcast-quality video production from one browser tab — replacing Zoom + OBS + Slack + Frame.io + radios + spreadsheets in a single tool with role-based access and zero frame loss in the master.

Which tools does Speed Space replace?

Typically Zoom (live conference), OBS / vMix (recording and switching), Slack (crew chat), Frame.io (project storage), radios (crew comms), Google Sheets (shot list and crew assignments). One platform, one tab, one source of truth.

What stack is Speed Space built on?

JavaScript + Next.js for the UI, Node + Express for the API, socket.io for real-time messaging, WebRTC + LiveKit for the live conference and SFU, MongoDB for project state, AWS for post-production storage, and the browser’s MediaRecorder API + IndexedDB for double-ender local recording.

How many participants can a session hold?

Up to 25 simultaneous participants per session with zero downtime in production. The same SFU + double-ender architecture scales to 50–100+ participants by cascading SFUs — that is the standard custom-build path for broadcasters.

What roles does Speed Space support?

Four: Admin (full platform control), Production Member (running shoots), Talent (on-camera, joins via invite link), Representative (observes the shoot without interfering). Role-based access prevents on-set chaos — the unsung feature that makes the platform usable for serious productions.

Does Speed Space work on mobile?

Talent join via mobile browsers without issue (iOS Safari and Android Chrome both support the MediaRecorder API and getUserMedia we rely on). The producer console is desktop-first — running a multi-camera switch from a phone isn’t practical, but every other role works fine on tablet or phone.

How does Speed Space handle a mid-take browser crash?

Local recordings are written to IndexedDB in chunks, not as a single file. If the talent's tab refreshes or crashes, the platform replays already-captured chunks and resumes from the last good one rather than losing the take. Generic browser recording silently drops the take — a regular cause of unusable masters in DIY remote setups.

Can we white-label or build a Speed-Space-class platform for our studio?

Yes. A custom build gives you a branded domain, custom UI, embedded experiences and SSO with your client’s identity provider. For cost ranges (pilot / production-grade / broadcast-SLA), see the companion piece: Speed Space: Custom Remote Video Production Platform.

Companion piece

Speed Space: Custom Remote Video Production Platform

The buyer’s playbook — cost ranges, SaaS comparison, decision framework for custom remote production.

WebRTC architecture

Agora.io Alternative in 2026

LiveKit, mediasoup, Jitsi, Janus — the SFU choices behind Speed-Space-class builds.

Scaling

Scalability in Video Streaming and Conferencing

SFU cascading, CDN egress and storage strategies for real-time video at production scale.

Low-latency video

Real-Time Video Streaming: Low-Latency Solutions

Latency budgets, codecs and protocols behind sub-second remote production preview.

Project page

Speed Space — Project Page

The Fora Soft project page with screens, capabilities and the full client list.

Ready to streamline your own remote production stack?

Speed Space took Revo Studio from a six-tool, radio-coordinated, frame-loss-prone production stack to a single browser tab where the producer cuts cameras, the talent clicks one link, and the editor pulls clean masters from AWS — on shoots that ship for Netflix, HBO, Apex Legends, Electronic Arts, Paris Fashion Week and Live Nation Urban events. The streamlining is the product, and the architecture (Next.js + Node + WebRTC + LiveKit + MongoDB + AWS, with double-ender local recording on every device) is the boring, battle-tested choice that lets it work.

If your team is duct-taping Zoom + OBS + Frame.io + Slack today, the win on day one is the tool consolidation. The win on month three is editorial cycle time. The win on year one is the SaaS-license budget that disappears. We have shipped this pattern for one of the most demanding production agencies in the field. We’d like to ship it for yours.

Let’s scope your remote-production streamlining

Thirty minutes, a senior video engineer, and a one-page plan: workflow audit, tool-consolidation map, custom-build cost range, frame-loss strategy. No slideware.

Book a 30-min scoping call → WhatsApp → Email us →
