We’re processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export).
For older devices, we want to limit the resolution at which the video frames are processed, for performance and memory reasons. Ideally, we’d tell AVFoundation to hand video frames with a defined maximum size into our composition. We thought setting the renderSize property of the composition to the desired size would do this.
However, this only changes the size of the output frames, not the size of the source frames that come into the composition’s handler block. For example:
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let input = request.sourceImage // <- this still has the video's original size
    // ...
})
composition.renderSize = CGSize(width: 1280, height: 720) // for example
So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and, in particular, a lot of memory. It would be far better if AVFoundation could decode the video frames at the desired size before passing them into the composition handler.
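For reference, this is roughly what the in-pipeline downscaling workaround looks like (a minimal sketch; the target size, the placeholder CIPhotoEffectMono filter, and the videoURL constant are only for illustration, not our actual code):

import AVFoundation
import CoreImage

let videoURL = URL(fileURLWithPath: "/path/to/video.mov") // placeholder
let asset = AVURLAsset(url: videoURL)
let targetSize = CGSize(width: 1280, height: 720)

let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let source = request.sourceImage // still decoded at full resolution, e.g. 4K
    // Downscale before running the filters. The full-size frame has already
    // been decoded and allocated at this point, which is exactly the cost
    // we would like to avoid.
    let scale = min(targetSize.width / source.extent.width,
                    targetSize.height / source.extent.height)
    let scaled = source.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    let filtered = scaled.applyingFilter("CIPhotoEffectMono") // placeholder for our filter chain
    request.finish(with: filtered, context: nil)
})
composition.renderSize = targetSize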
Is there a way to tell AVFoundation to load smaller video frames?
