AVFoundation Tutorial: How to apply an effect to a video | Fora Soft Blog
Have you ever thought about how videos are processed? What about applying effects? In this AVFoundation tutorial, I’ll try to explain video processing on iOS in simple terms. This topic is quite complicated yet interesting. You can find a short guide on how to apply the effects down below.
Core Image is a framework by Apple for high-performance image processing and analysis. Its main building blocks are the CIImage, CIFilter, and CIContext classes.
With Core Image, you can chain different filters (CIFilter) together to create custom effects. You can also run effects on the GPU (graphics processor), which moves some load off the CPU (central processor) and thus speeds up the app.
AVFoundation is a framework for working with media on iOS, macOS, watchOS, and tvOS. Using AVFoundation, you can easily play, create, and edit QuickTime movies and MPEG-4 (MP4) files, stream HLS (more about HLS can be found here), and create your own features for working with video and audio. For example, with AVFoundation, you can develop audio and video players, audio and video editors, and essentially anything related to audio and video processing.
Adding an effect
Let’s say you need to add an explosion effect to your video. What do you do?
First, you’ll need to prepare three videos: the main one where you’ll apply the effect, the effect video with an alpha channel, and the effect video without an alpha channel.
An alpha channel is an additional channel that can be integrated into an image. It contains information about the image’s transparency and can provide different transparency levels, depending on the alpha type.
We need an alpha channel so that the effect video doesn't completely cover the main one. Here is an example of a picture with and without an alpha channel:
The whiter the color, the less transparent the pixel: black is fully transparent, whereas white is not transparent at all.
After applying the effect, we'll only see the explosion itself (the white part of the image on the right), and the rest will stay transparent, letting the main video show through.
Then, we need to read all three videos at the same time and combine their frames using CIFilter.
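A minimal sketch of reading the clips in lockstep might look like this. It is an assumption on my part that the clips are local files read with AVAssetReader; the helper name `makeVideoReader` and the `mainURL`/`alphaURL`/`effectURL` variables are illustrative, not from the original article:

```swift
import AVFoundation

// Illustrative helper: creates an AVAssetReader that vends decoded video
// frames (as CMSampleBuffers) for the first video track of a file.
func makeVideoReader(for url: URL) throws -> (AVAssetReader, AVAssetReaderTrackOutput) {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else {
        throw NSError(domain: "VideoEffect", code: -1, userInfo: nil)
    }
    let reader = try AVAssetReader(asset: asset)
    // Decode to BGRA so each frame is easy to wrap in a CIImage later.
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                            kCVPixelFormatType_32BGRA]
    )
    reader.add(output)
    reader.startReading()
    return (reader, output)
}

// Read the main clip, the alpha clip, and the effect clip frame by frame.
let (_, mainOutput) = try makeVideoReader(for: mainURL)
let (_, alphaOutput) = try makeVideoReader(for: alphaURL)
let (_, effectOutput) = try makeVideoReader(for: effectURL)

while let mainFrame = mainOutput.copyNextSampleBuffer(),
      let alphaFrame = alphaOutput.copyNextSampleBuffer(),
      let effectFrame = effectOutput.copyNextSampleBuffer() {
    // Combine the three matching frames with CIFilter here.
}
```

This assumes all three clips have the same duration and frame rate, so matching frames arrive together; in a real app you'd compare presentation timestamps instead.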
First, we get a reference to the CVImageBuffer via the CMSampleBuffer. We need it to work with the raw image data. CVPixelBuffer, which we'll need later, is a specific type of CVImageBuffer. From the CVImageBuffer we create a CIImage. It looks something like this in the code:
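A sketch of this step (the function names are mine, and the use of the built-in CIBlendWithMask filter is an assumption; any compositing filter chain would work):

```swift
import AVFoundation
import CoreImage

// Get a CIImage out of a decoded video frame.
func ciImage(from sampleBuffer: CMSampleBuffer) -> CIImage? {
    // CMSampleBufferGetImageBuffer returns the frame's CVImageBuffer.
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return nil
    }
    return CIImage(cvImageBuffer: imageBuffer)
}

// Combine the three frames: the effect is drawn over the main video, and
// the grayscale alpha clip decides where it shows through
// (white = effect visible, black = main video visible).
func composite(main: CIImage, effect: CIImage, mask: CIImage) -> CIImage? {
    let filter = CIFilter(name: "CIBlendWithMask")
    filter?.setValue(effect, forKey: kCIInputImageKey)
    filter?.setValue(main, forKey: kCIInputBackgroundImageKey)
    filter?.setValue(mask, forKey: kCIInputMaskImageKey)
    return filter?.outputImage
}
```

CIBlendWithMask matches the alpha-channel behavior described above: where the mask is white, the effect (input image) is shown; where it is black, the main video (background image) shows through.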
Once again we’ve received a CIImage, but this time it is composed of the three CIImages we got before. Now, we render the new CIImage into a CVPixelBufferRef using CIContext. The code will look roughly like this:
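Roughly like this, that is (a sketch; the `render(_:into:)` wrapper is mine, and the pixel buffer is assumed to come from the output writer's pool):

```swift
import CoreImage
import CoreVideo

// Create the context once and reuse it; by default it renders on the GPU.
let context = CIContext()

// Render the combined CIImage back into a pixel buffer so it can be
// appended to the output video (e.g. via AVAssetWriterInputPixelBufferAdaptor).
func render(_ image: CIImage, into pixelBuffer: CVPixelBuffer) {
    context.render(image, to: pixelBuffer)
}
```

Creating a CIContext is expensive, so it should be built once up front rather than per frame.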
At this point, the effect is successfully added to the video. Moreover, the work was done on the GPU, which takes load off the CPU and thereby speeds up the app.
This is cool, right? Actually, there are 9 simple ways to make an iOS app even cooler. How? Check out here!
Adding effects to videos on iOS is quite a complicated task, but it can be done if you know how to use the basic frameworks for working with media on iOS. If you want to learn more about it, feel free to get in touch with us via the Contact us form!