Mobile video editing app with advanced effects and superhero-themed features

SuperPower FX is the iOS/Android app that lets anyone shoot a video and drop superhero effects on top — lasers from the eyes, fireballs from the hands, teleportation, explosions. Fora Soft has built and maintained it since 2012. Today it sits at 500,000+ downloads, 20,000+ positive App Store and Google Play reviews, and the model has spun off a sister product, AnimePower FX, which brings anime-inspired effects to the same mechanic.

This article is not a marketing recap. It is the engineering playbook — architecture decisions, performance trade-offs, asset pipeline, and the numbers we tracked — that any product team building a mobile video editing app with real-time VFX should read before committing to a roadmap. If you are planning a SuperPower FX-class product, the patterns below are the ones that survived 13+ years of App Store updates, OS changes, and device-capability shifts.

Key takeaways

Real-time VFX beats post-production for retention. The SuperPower FX mechanic wins because the effect is visible in the viewfinder before recording stops. Shorten the feedback loop and users record more clips per session.

GPU-first pipeline is non-negotiable. Metal on iOS, Vulkan/OpenGL ES on Android, and compositing done in shaders rather than CPU blends. Target a steady 30 fps on a 5-year-old mid-range phone and everything newer is free.

Asset footprint kills installs. Switching from PNG to JPEG shrank our install size 10× in the early years. Today the equivalent move is compressed KTX2/ASTC textures plus on-demand download for effect packs.

On-device AI is the 2026 upgrade path. Subject-locked effects (“fire from my hands,” not “fire in the middle of the frame”) now land for free with Vision on iOS and ML Kit on Android — what took a bespoke tracking stack in 2015 is an afternoon of integration in 2026.

Monetization follows the effect pack, not the app. Free shell, paid effect packs, and co-branded campaigns (Fora Soft shipped an Oreo-branded pack that spiked adoption) convert better than single-shot IAPs or generic subscriptions.

Why Fora Soft wrote this playbook

Fora Soft has built AI-powered video and audio software since 2005. SuperPower FX is one of our flagship mobile-video projects: the app shipped in 2014, hit the top charts for “video effects” globally, and still has an active user base 11+ years later. The sister app AnimePower FX reuses the engine with anime-inspired visual content. Both apps are in production on iOS and Android.

Beyond SuperPower FX we ship mobile video tooling across verticals: enterprise video review with VALT, creator-focused short-form video with Shortclips, music-video tools with Moby Tap, and complex live-streaming stacks for EdTech and e-health clients. Our engineers run an AR/VR practice and an AI integration practice in parallel, so on-device VFX and on-device ML land in the same codebase rather than being a bolt-on.

Agent Engineering sits at the centre of our workflow: senior engineers supervise AI copilots (Cursor, Claude, GitHub Copilot) to compress boilerplate, generate tests, and draft migrations. On mobile-video projects that typically shaves 20–40% off conventional agency estimates without loosening code-review standards — useful when a superhero effect pipeline needs to ship on both App Store and Google Play on schedule.

Building a SuperPower FX-style video app?

Our mobile-video team has shipped SuperPower FX, AnimePower FX, Shortclips, and VALT. Tell us the effect pack you have in mind and the platforms, and we will come back with a tight estimate.

Book a 30-min scoping call → WhatsApp → Email us →

SuperPower FX at a glance

Before we walk through the architecture, here are the numbers and the scope that make the rest of the playbook concrete.

| Dimension | SuperPower FX | AnimePower FX |
| --- | --- | --- |
| First release | 2014 (iOS) | 2016 |
| Platforms | iOS + Android | iOS + Android |
| Downloads | 500,000+ | 150,000+ |
| Positive reviews | 20,000+ | 8,000+ |
| Effect library | 100+ superhero effects | 60+ anime effects |
| Real-time preview | Yes, during recording | Yes, during recording |
| Monetization | Free shell + paid packs + brand collabs (Oreo) | Free shell + paid packs |

Why real-time effects beat post-production editing

In 2012, the dominant competitor was Movie FX: load a clip, pick an effect, export. We reverse-engineered their pipeline, understood their compositing approach, and intentionally designed SuperPower FX around a tighter feedback loop: the user sees the effect in the viewfinder before the recording ends. That single design decision did the most work in the first two years of growth.

Real-time preview is a retention mechanic. Every second a user waits for an export is a second they might abandon. Cutting the loop down to “point camera, see effect, record” reliably pushes the clip-per-session count up. On SuperPower FX we track “recordings per session” and “effects tried per session” as retention leading indicators — both live and die with real-time preview latency.

Reach for real-time preview when: your users will record multiple short clips per session, effects are visual and immediate (flames, lasers, transformations), and the creative value comes from reacting to the camera feed rather than editing afterwards.

The original brief — make the impossible smooth

In 2012 the SuperPower FX team came to us with a single-sentence brief: let users shoot a video with superhero effects that react to the person on camera, in real time, on an iPhone. The technical constraints were harsh — the iPhone 4S had an A5 chip, 512 MB of RAM, no first-party AR frameworks (ARKit did not exist until 2017), and a camera API that did not cleanly expose the preview buffer.

Our job was to turn that brief into an iOS app that shipped high-quality effects without dropping frames, without draining the battery in ten minutes, and with a design simple enough that a first-time user could make their first clip in under 30 seconds. The architecture that survived every version since — GPU-first compositing, pre-rendered effect sprites, and a state machine around the camera session — was set in those early months.

Architecture overview — from camera buffer to MP4

A modern SuperPower FX-class app has four processing stages between the image sensor and the saved MP4. Each stage has to hit its frame budget or the whole pipeline stalls.

1. Capture

On iOS we use AVCaptureVideoDataOutput with a BGRA or NV12 pixel format and a dispatch queue pinned to a high-priority QoS. On Android we use CameraX with an ImageAnalysis use-case and a preview Surface wired to an OpenGL ES / Vulkan renderer. The camera frame never touches the CPU after this — every downstream stage runs on the GPU.
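A minimal Swift sketch of this capture stage, assuming a back camera and BGRA output; the `CaptureSource` name and the renderer hand-off are illustrative, not the SuperPower FX codebase:

```swift
import AVFoundation

// Capture sketch: BGRA frames delivered on a dedicated high-priority queue.
final class CaptureSource: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "capture", qos: .userInteractive)

    func start() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        output.videoSettings =
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        output.alwaysDiscardsLateVideoFrames = true   // drop stale frames, never queue them
        output.setSampleBufferDelegate(self, queue: queue)
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Hand the pixel buffer straight to the GPU stage; no CPU-side copy here.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        _ = pixelBuffer   // e.g. renderer.enqueue(pixelBuffer) in a real pipeline
    }
}
```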

2. Analyse & track

Effects that “come out of the user’s eyes or hands” require subject tracking. In 2014 we shipped a custom face detector and a colour-based hand tracker. In 2026 we compose Apple’s Vision framework (body/hand pose, face landmarks) and Android ML Kit Pose Detection on-device. Tracking runs at 15–30 Hz and publishes anchor points to the renderer.
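As an illustration, the iOS half of this stage can be a single Vision request per frame. A minimal sketch, assuming a hypothetical `EffectAnchor` type and a 0.3 confidence gate:

```swift
import CoreGraphics
import Vision

// Hand-pose tracking sketch: detect up to two hands per frame and publish
// wrist positions as effect anchors. `EffectAnchor` is an illustrative type.
struct EffectAnchor {
    let point: CGPoint   // normalized [0, 1] image coordinates
}

func trackHands(in pixelBuffer: CVPixelBuffer,
                publish: ([EffectAnchor]) -> Void) throws {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 2

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
    try handler.perform([request])

    let anchors: [EffectAnchor] = (request.results ?? []).compactMap { observation in
        // Anchor the effect at the wrist; gate out low-confidence detections.
        guard let wrist = try? observation.recognizedPoint(.wrist),
              wrist.confidence > 0.3 else { return nil }
        return EffectAnchor(point: wrist.location)
    }
    publish(anchors)
}
```

The renderer consumes the published anchors on its next frame, which is what lets tracking run at a lower rate than rendering without stalling the pipeline.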

3. Render & composite

Metal on iOS, Vulkan (with an OpenGL ES 3.1 fallback) on Android. Each effect is a sequence of compressed textures plus a shader that blends them onto the camera frame using the tracker anchors. The composited texture drives both the on-screen preview and the encoder input.
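On the Swift side, the pipeline state for an additive composite (lasers, fire) can be configured as in the sketch below; the shader function names are hypothetical placeholders:

```swift
import Metal

// Composite sketch: additive blending so the effect sprite adds light (lasers,
// fire) over the camera frame. Shader function names are hypothetical.
func makeEffectPipeline(device: MTLDevice,
                        library: MTLLibrary) throws -> MTLRenderPipelineState {
    let desc = MTLRenderPipelineDescriptor()
    desc.vertexFunction = library.makeFunction(name: "effectVertex")     // assumed shader
    desc.fragmentFunction = library.makeFunction(name: "effectFragment") // assumed shader
    desc.colorAttachments[0].pixelFormat = .bgra8Unorm

    let blend = desc.colorAttachments[0]!
    blend.isBlendingEnabled = true
    blend.rgbBlendOperation = .add
    blend.sourceRGBBlendFactor = .one        // effect contributes its full energy
    blend.destinationRGBBlendFactor = .one   // camera frame preserved underneath
    blend.sourceAlphaBlendFactor = .one
    blend.destinationAlphaBlendFactor = .one
    return try device.makeRenderPipelineState(descriptor: desc)
}
```

Alpha-matted sprites swap in `.sourceAlpha` / `.oneMinusSourceAlpha` blend factors; the shader itself stays the same.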

4. Encode & mux

On iOS, AVAssetWriter with H.264 hardware encode. On Android, MediaCodec with the same codec. Audio capture runs on a separate AVFoundation/AAudio path. The muxer produces MP4 at 30 fps with AAC audio, written incrementally so the save step feels instant when the user taps Stop.
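A sketch of the iOS writer setup under illustrative settings (1080p, an 8 Mbps average bitrate picked for the example):

```swift
import AVFoundation

// Encode-stage sketch: hardware H.264 into an incrementally written MP4, so the
// file is nearly complete when the user taps Stop. Bitrate is an illustrative pick.
func makeWriter(to url: URL) throws
    -> (AVAssetWriter, AVAssetWriterInputPixelBufferAdaptor) {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1920,
        AVVideoHeightKey: 1080,
        AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: 8_000_000]
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = true   // keeps the encoder from buffering frames
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    return (writer, adaptor)
}
```

Composited GPU frames then feed the adaptor one by one via `append(_:withPresentationTime:)`.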

The GPU-first pipeline — the one rule that kept us shipping

Every serious mistake we have seen in mobile-VFX projects traces back to a CPU-bound compositing step. CPU blends scale poorly with resolution and never keep up at 1080p or 4K — which is where user expectations have moved. The rule is simple: once the pixel enters the pipeline, it stays on the GPU until the encoder consumes it.

Concretely, that means the camera output is a GPU texture (iOS CVPixelBufferRef backed by IOSurface; Android GraphicBuffer/HardwareBuffer), effect assets are pre-loaded as compressed textures (ASTC on iOS, ETC2/ASTC on Android), the shader handles all blending (additive, alpha, screen), and the encoder input is a GPU surface. There is no memcpy of image data in the hot path.
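On iOS the no-memcpy rule comes down to mapping the camera's pixel buffer into a Metal texture through a texture cache; a minimal sketch:

```swift
import CoreVideo
import Metal

// Zero-copy sketch: the returned Metal texture aliases the IOSurface-backed
// camera buffer, so no pixel data is copied in the hot path.
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, pixelBuffer, nil, .bgra8Unorm,
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        0,                    // plane index; 0 for single-plane BGRA
        &cvTexture)
    guard status == kCVReturnSuccess, let cvTexture else { return nil }
    return CVMetalTextureGetTexture(cvTexture)
}
```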

Reach for CPU compositing only when: your effect count is <5, your target resolution is ≤720p, and you need to ship a prototype in days. For anything that will scale or hit 1080p+ production targets, GPU-first from day one.

Asset pipeline — why install size killed us twice

Install size is the leakiest bucket in mobile-VFX product work. SuperPower FX hit install-size panic twice. The first time, in 2015, the app ballooned past 200 MB as effect packs accumulated. We switched the sprite format from PNG to JPEG where alpha was not needed, used separate single-channel alpha where it was, and shrank the install 10×.

The second time, in 2021, storage had loosened but users were churning on 100 MB+ downloads. We re-authored the bundled pack to ship only the starter effects (~15 MB) and moved the rest to on-demand download, so each pack arrives only when the user opens it. In 2026 the equivalent move is ASTC/KTX2 compressed textures (50–80% smaller than PNG/JPEG) plus App Store on-demand resources and Android Asset Delivery for the long tail.
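On iOS, pulling a pack down via on-demand resources is a few lines; the "fire_pack" tag below is an illustrative assumption (tags are assigned per asset in Xcode):

```swift
import Foundation

// On-demand resource fetch sketch: the "fire_pack" tag is an illustrative
// assumption; tags are assigned to assets in Xcode's resource settings.
let packRequest = NSBundleResourceRequest(tags: ["fire_pack"])
packRequest.loadingPriority = NSBundleResourceRequestLoadingPriorityUrgent // user is waiting
packRequest.beginAccessingResources { error in
    if let error {
        print("Pack download failed: \(error)")   // surface a retry UI here
        return
    }
    // Assets tagged "fire_pack" are now on disk and loadable via Bundle.main.
    // Call packRequest.endAccessingResources() once the pack can be evicted.
}
```

Play Asset Delivery plays the same role on the Android side.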

Rule: keep the install under 50 MB for a consumer VFX app. Every 10 MB above that measurably hurts first-install conversion in emerging markets.

Need a second opinion on your VFX asset pipeline?

Our engineers have shipped 100+ effects and two live products through the install-size wall. Send us your pack and target platforms — we will return a shrink-list and a rough benchmark.

Book a 30-min call → WhatsApp → Email us →

Subject tracking — from custom code in 2014 to on-device ML in 2026

The hardest part of “lasers from the eyes” is not the laser; it is knowing where the eyes are, every frame, reliably, in varying light. In 2014 we built a bespoke face-detector plus a hand-tracker that used HSV colour segmentation. It worked, but failed in poor lighting and required constant tuning.

The 2026 stack is nearly free by comparison. Apple Vision provides face landmarks, body pose, and hand pose detection on-device with millisecond latency. Android ML Kit offers equivalent pose and face detection, plus Selfie Segmentation for alpha masks. Both run on Neural Engine / NPU silicon, so they consume almost no battery.

The practical upgrade path for any legacy VFX app: replace the custom tracker with Vision/ML Kit, keep your existing renderer, and reinvest the saved engineering budget into more effects or more polish. We ran this migration on SuperPower FX and the stability gains on low-light indoor footage were immediate.

Reach for on-device ML tracking when: your effects anchor to the user’s body or face, your minimum device spec supports the Neural Engine / NPU, and effects must work in varied lighting and motion — which is every consumer VFX use case.

Performance targets and the device matrix that matters

A consumer video-effects app needs to hold a steady frame rate on a 5-year-old mid-range device. Targeting the current flagship is a rookie mistake — your Google Play install funnel is dominated by devices one or two tiers below. We use the matrix below to validate every build.

| Device tier | Example | Target fps | Target resolution | Cold-start budget |
| --- | --- | --- | --- | --- |
| Low-end (5y+ old) | iPhone XR, Galaxy A32 | 30 fps | 720p | ≤ 2.5 s |
| Mid-range | iPhone 13, Pixel 7 | 30–60 fps | 1080p | ≤ 1.5 s |
| Flagship | iPhone 16 Pro, S24 Ultra | 60 fps | 1080p (4K optional) | ≤ 1.0 s |

Across all tiers: battery drain ≤ 6% per 10-minute session, and an install bundle of ≤ 50 MB.

Monetization — effect packs and the Oreo lesson

SuperPower FX runs a free app with paid effect packs and occasional brand collaborations. The model works because effect packs are narrative units — users pay for the fantasy (fire magic, ice magic, teleport pack), not for the app itself. Trying to charge for the app would cut install volume without lifting ARPU.

The Oreo partnership is the template we still use today. Oreo wanted a branded superhero effect tied to a campaign. We shipped it as a free, time-limited pack inside SuperPower FX. The co-branded pack drove a measurable install spike and — more importantly — turned a one-shot marketing event into long-term retention, because users who installed for the Oreo pack stayed for the core library. Any brand with a narrative IP (superhero movies, video games, anime) is a natural fit for this pattern.

Reach for co-branded effect packs when: you have a consumer app with a narrative-IP audience and a brand partner whose creative assets fit the effect mechanic. Keep the pack free and time-limited so it reads as a gift, not a paywall.
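A sketch of the campaign gate, assuming Firebase Remote Config; the flag and end-date keys are illustrative names. The mini case later in this article unlocked the Oreo pack the same way:

```swift
import FirebaseRemoteConfig
import Foundation

// Campaign-gate sketch, assuming Firebase Remote Config. The pack ships dark in
// the binary; the flag and end-date keys below are illustrative names.
func refreshBrandPackFlag(completion: @escaping (Bool) -> Void) {
    let config = RemoteConfig.remoteConfig()
    config.fetchAndActivate { _, _ in
        let live = config["oreo_pack_live"].boolValue
        let endsAt = config["oreo_pack_ends_at"].numberValue.doubleValue // epoch seconds
        completion(live && Date().timeIntervalSince1970 < endsAt)
    }
}
```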

iOS and Android framework choices for a modern build

If we were starting SuperPower FX today, the framework shortlist is concrete. For iOS: AVFoundation for capture, Metal for rendering, Vision for tracking, Core ML for any custom model, and Apple Intelligence / Foundation Models for features like prompt-to-effect suggestion. For Android: CameraX, Vulkan (OpenGL ES 3.1 fallback), ML Kit Pose/Face, TensorFlow Lite or Gemma 3n via Google AI Edge for custom models.

Languages: Swift and Kotlin on their native platforms. With Swift officially landing on Android in 2025, shared business logic in Swift is now feasible, and it is the direction we are experimenting with on internal prototypes — see our Swift 6 iOS development article for the underlying language work. For UI, we stick with native: SwiftUI on iOS, Jetpack Compose on Android. The rendering and capture layers are in C++ with a thin Swift/Kotlin shell.

Cost math — what a SuperPower FX-class build looks like in 2026

Rough scope for a v1 launch of a real-time mobile VFX app (iOS + Android), from zero to published on both stores:

1. Discovery & design (3–4 weeks). Product definition, UX flows, effect catalogue scope, monetization model, platform priority. Output: a tight SOW and a v1 cut-line.

2. Engine build (6–10 weeks). Capture → GPU composite → encode pipeline, tracker integration (Vision/ML Kit), effect SDK, asset pipeline, build and signing. Largest single bucket.

3. First effect pack (3–4 weeks, parallel). 10–15 effects authored, tuned on the device matrix, and wired into the app UI. This can run in parallel with the engine build once the effect SDK is stable.

4. App UI and store assets (2–3 weeks). SwiftUI + Compose shell, onboarding, sharing, IAP, store listing and screenshots.

5. QA, device-matrix testing, store submission (2–3 weeks). Real-device testing with automation where possible (Reflect Mobile, Autosana), manual QA on the slow tier of the device matrix, App Store and Google Play submission loops.

Total. A v1 with a single effect pack typically lands in 14–20 weeks of elapsed time with a small senior team. We quote conservatively; with Agent Engineering in the loop we regularly beat the low end of that range. For honest estimates on your scope, book a 30-minute scoping call.

Mini case — the Oreo brand pack that turned a two-week campaign into a retention lift

Situation. Oreo wanted a branded superhero effect tied to a two-week promotional campaign. The creative agency owned the visual brief; we owned delivery inside SuperPower FX. Deadline: three weeks from brief to live in app, across iOS and Android.

Three-week plan. Week 1: intake, creative lock, sprite authoring by our VFX artist. Week 2: engine integration, device-matrix QA, internal TestFlight/internal-track Play release. Week 3: App Store and Play release, pack unlocked via remote config and featured inside the app for the campaign window. Post-launch, we handled analytics, brand-report generation, and a controlled sunset of the pack.

Outcome. Daily installs spiked during the campaign window, and the 30-day retention of campaign-window installs beat the app’s rolling baseline. Users installed for the free Oreo effect and stayed for the core library — exactly the outcome the brand partner wanted and the reason we still recommend co-branded packs as a go-to acquisition channel for narrative-IP clients.

A decision framework — pick your mobile VFX approach in five questions

1. Is real-time preview essential? If users will record multiple short reactive clips, yes. If the use case is editing a single longer clip after the fact, a post-production model (like a desktop NLE) is simpler and cheaper.

2. Do effects need to follow the user? If yes, plan for on-device pose/face tracking (Vision + ML Kit). If effects are frame-locked, a 2D sprite compositor is enough.

3. What is your minimum device spec? Pick the slowest device you want to support, put it on the engineer’s desk, and test against it every sprint. Performance is a policy, not a hope.

4. Is the monetization narrative (effect packs, brand tie-ins) or utility (pro features, subscriptions)? Narrative rewards investment in effects and artists. Utility rewards investment in editing depth.

5. What is the App Store and Play Store positioning? Consumer reach requires short install time, obvious first-session value, and a strong screenshot reel. B2B or pro positioning tolerates larger installs and a learning curve.

Pitfalls to avoid

1. CPU compositing in the hot path. Every one of our performance-crisis rewrites traced back to a CPU blend someone added “just for this effect.” Audit the render graph every release; if a pixel leaves the GPU and comes back, fix it.

2. Targeting only current-generation devices. The install funnel in most consumer markets is anchored by two- to five-year-old devices. If your app drops frames there, your review score never recovers.

3. Install size creep. Bundling every effect pack in the binary looks convenient and kills your install conversion. Use on-demand resources and Play Asset Delivery from day one.

4. Custom tracking when Vision/ML Kit will do. In 2014 we wrote our own. In 2026 you should not — unless your use case is obviously outside the first-party capability. Reinvest the saved time into more effects.

5. Shipping without a long-term maintenance plan. Mobile platforms change every year. Plan a quarterly maintenance sprint covering OS updates, deprecated APIs, and new device profiles — or budget for a long-term maintenance retainer with a partner who can own it.

KPIs — what to measure on a live VFX app

Quality KPIs. Frames dropped per session (target <1%), cold-start time by device tier, crash-free sessions (≥99.5%), and thermal throttle events per 10-minute session.
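As an illustration of the first metric, dropped frames can be counted against the display refresh with CADisplayLink; the 1.5× interval threshold is an assumption to tune per device tier:

```swift
import QuartzCore

// Dropped-frame counter sketch: a frame counts as dropped when more than one
// display interval elapses between CADisplayLink callbacks.
final class FrameDropCounter {
    private var link: CADisplayLink?
    private var lastTimestamp: CFTimeInterval = 0
    private(set) var dropped = 0
    private(set) var total = 0

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick))
        link.add(to: .main, forMode: .common)
        self.link = link
    }

    @objc private func tick(_ link: CADisplayLink) {
        defer { lastTimestamp = link.timestamp }
        guard lastTimestamp > 0 else { return }
        total += 1
        let interval = link.targetTimestamp - link.timestamp   // one display frame
        if link.timestamp - lastTimestamp > interval * 1.5 {   // missed at least one
            dropped += 1
        }
    }

    var dropRate: Double { total > 0 ? Double(dropped) / Double(total) : 0 }
}
```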

Business KPIs. Recordings per session (retention leading indicator), effects tried per session, first-purchase rate by effect pack, 30-day retention by acquisition source (organic vs. co-branded campaigns).

Reliability KPIs. Background-save success rate (saving the clip after Stop), share-sheet success rate (export to TikTok/Instagram/Reels), ANR/hang rate on Android, and install size per store variant.

When not to build a real-time VFX app

Not every video product should ship real-time effects. Skip the pattern if your core use case is multi-clip editing with precise trims (a desktop-style NLE model is better), your audience is on very old devices that cannot sustain 30 fps at 720p, or your differentiation is in audio rather than vision. In those cases the engineering cost of the real-time pipeline buys you less than investing the same effort in editing depth, audio tooling, or creator workflows.

Equally, if your business model is enterprise SaaS rather than consumer narrative IP, effect packs are the wrong monetization surface. Talk to us about custom video/audio processing software instead, and we will scope the right architecture for your deal flow.

Ready to scope your VFX app with engineers who already shipped one?

We have run SuperPower FX and AnimePower FX in production for 11+ years across 20+ iOS/Android versions. Hand us your concept and device targets — we will come back with a phased plan and a tight, defendable estimate.

Book a 30-min call → WhatsApp → Email us →

FAQ

Is SuperPower FX still live in 2026?

Yes. SuperPower FX is available on the App Store and Google Play in 2026, with over 500,000 downloads and 20,000+ positive reviews. Fora Soft continues to maintain it alongside the sister product AnimePower FX.

How long does a SuperPower FX-class build take in 2026?

A v1 with a single effect pack on iOS and Android typically lands in 14–20 weeks of elapsed time with a focused senior team. With Agent Engineering in the loop we routinely beat the low end of that range on well-scoped projects.

Do we need ARKit / ARCore for a real-time VFX app?

Usually no. Vision on iOS and ML Kit on Android cover face, body, and hand tracking on-device for most narrative VFX use cases. ARKit/ARCore are required only when effects anchor to real-world surfaces (plane detection, depth, or world-space tracking).

How do we keep the install under 50 MB with 50+ effects?

Ship only a starter pack in the binary (5–10 effects). Move the rest behind App Store on-demand resources or Play Asset Delivery. Use compressed texture formats (ASTC/KTX2) and avoid bundling uncompressed PNG animations.

Does Flutter or React Native work for this kind of app?

Not for the rendering core. Flutter and React Native are fine for the UI shell, but you still need a native C++/Metal/Vulkan module for the GPU pipeline. Most teams end up with a native core plus a cross-platform shell, or go fully native on both sides.

How does Fora Soft handle brand collaborations like the Oreo pack?

We treat a brand pack as a three-week mini-release: creative intake week 1, engine integration and QA week 2, store release and campaign launch week 3. Packs ship as free, time-limited effects unlocked by remote config, which lets the brand partner run the campaign cleanly and the app retain installs after the campaign ends.

What is a realistic monetization mix for a mobile VFX app?

For narrative-IP apps: free shell, paid effect packs at a few dollars each, occasional co-branded campaigns, and an optional subscription unlocking the full catalogue. Avoid paywalls on the first session — users pay only after they have made a clip they want to share.

Can we use Fora Soft’s VFX engine for our own app?

We build custom engines per project rather than licensing a shared one — the architecture patterns described here transfer directly, but IP is separated cleanly. If you need white-label VFX, talk to us about a dedicated development team engagement where our engineers build inside your repo from day one.

Related reading

• Swift 6 iOS Development (iOS): the language-level foundation underneath a modern SuperPower FX-style iOS build.

• SwiftUI Video Conferencing vs UIKit (SwiftUI): a performance guide to how we choose SwiftUI or UIKit for video-heavy iOS apps.

• Video Streaming App Development (Streaming): when you need to stream the effect-laden output in real time, not just save it.

• Fora Soft & AI in Software Products (AI): how we integrate on-device AI features, tracking models, and ML copilots into production apps.

• ChillChat: From 2D Pixel-Art Chat to NFT Marketplace (case study): another Fora Soft product journey, from consumer app to full platform.

Ready to turn a VFX concept into a live mobile app?

SuperPower FX proved the model: real-time, subject-locked effects on a consumer mobile app, monetized through effect packs and brand collaborations, maintained across 13+ years of App Store and Google Play cycles. The 2026 toolkit — Metal/Vulkan, Vision/ML Kit, on-demand assets, Agent Engineering — makes the same product cheaper and faster to build today than it was in 2014.

If you are scoping a mobile video editing app with real-time effects, the playbook above is the one we use. When you want a staffed plan and a tight estimate from engineers who have shipped two of these products, we are a 30-minute call away.

Want the SuperPower FX playbook applied to your product?

Tell us your concept, target devices, and monetization model. We’ll come back with a phased plan, a defendable estimate, and the two architecture calls you should make first.

Book a 30-min call → WhatsApp → Email us →
