Setup:
- Devices: iPad Pro 11″ M4, iPad Air 11″ M3, iPad Pro 11″ Gen 2/3/4
- Language: Swift
- Framework: AVFoundation
- Front camera: Ultra Wide (M4/M3), TrueDepth (Gen 2–4)
- Video gravity: `.resizeAspectFill`
Background

I'm setting an exposure point of interest using coordinates defined in captured-image pixel space.
- Input point: (1170, 1370)
Image sizes:
- Gen 2/3/4: 2316 × 3088
- M3/M4: 3024 × 4032
Preview sizes:
- Gen 2/3/4: 834 × 1194
- M4: 834 × 1210
- M3: 820 × 1180
What I do

First, I convert image pixel coordinates to preview layer coordinates, then use `captureDevicePointConverted(fromLayerPoint:)`:

```swift
let devicePoint = previewLayer.captureDevicePointConverted(fromLayerPoint: layerPoint)
```
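For context, here is a minimal sketch of the geometry I believe is involved in that first step: under `.resizeAspectFill` the preview scales the image to fill the layer and crops the overflow, so the pixel → layer conversion has to apply that scale and crop. `layerPoint(forImagePoint:imageSize:layerSize:)` is a hypothetical helper, not AVFoundation API, and it ignores rotation/mirroring, which the real conversion also handles.

```swift
import Foundation

/// Hypothetical helper: map a point in captured-image pixel space to
/// preview-layer coordinates, assuming .resizeAspectFill and no rotation.
func layerPoint(forImagePoint p: CGPoint,
                imageSize: CGSize,
                layerSize: CGSize) -> CGPoint {
    // Aspect-fill scale: the larger of the two ratios fills the layer.
    let scale = max(layerSize.width / imageSize.width,
                    layerSize.height / imageSize.height)
    // The scaled image overflows the layer; the overflow is cropped
    // symmetrically, so shift by half of it on each axis.
    let cropX = (imageSize.width * scale - layerSize.width) / 2
    let cropY = (imageSize.height * scale - layerSize.height) / 2
    return CGPoint(x: p.x * scale - cropX,
                   y: p.y * scale - cropY)
}

// Example with the M4 numbers above: image 3024 × 4032, preview 834 × 1210.
// scale = max(834/3024, 1210/4032) ≈ 0.3001; horizontal crop ≈ 36.75 pt per side.
let lp = layerPoint(forImagePoint: CGPoint(x: 1170, y: 1370),
                    imageSize: CGSize(width: 3024, height: 4032),
                    layerSize: CGSize(width: 834, height: 1210))
print(lp)
```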
Reading back the exposure point

After capture, I convert back:

```swift
let layerPoint = previewLayer.layerPointConverted(fromCaptureDevicePoint: exposurePoint)
```

This results in:
Observation

It appears that `captureDevicePointConverted(fromLayerPoint:)` does not perform a linear mapping when using `.resizeAspectFill`.

My understanding is that:
Questions

- Does `captureDevicePointConverted(fromLayerPoint:)` account for `.resizeAspectFill` cropping, making it unsuitable for direct pixel mapping?
- Is it correct to compute exposure points directly using normalized coordinates (pixel / image size) instead of using preview layer conversion?
- Is `exposurePointOfInterest` always expressed in full-sensor normalized coordinates (0–1), independent of preview settings?
- Does this behavior differ between the Ultra Wide (M3/M4) and TrueDepth cameras?
- Is there official documentation describing the correct coordinate mapping for this scenario?
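To make the second question concrete, this is the direct computation I have in mind: a plain normalization of the pixel coordinate by the image size, bypassing the preview layer entirely. This is a sketch under the assumption that `exposurePointOfInterest` takes 0–1 coordinates over the full captured frame; it does not account for the sensor's native (landscape) orientation, which might require swapping or flipping axes.

```swift
import Foundation

/// Hypothetical direct mapping: captured-image pixel → normalized (0–1)
/// point, with no preview-layer conversion involved.
func normalizedPoint(forImagePoint p: CGPoint, imageSize: CGSize) -> CGPoint {
    CGPoint(x: p.x / imageSize.width, y: p.y / imageSize.height)
}

// With the M3/M4 image size (3024 × 4032) and the input point (1170, 1370):
// 1170 / 3024 ≈ 0.3869, 1370 / 4032 ≈ 0.3398
let n = normalizedPoint(forImagePoint: CGPoint(x: 1170, y: 1370),
                        imageSize: CGSize(width: 3024, height: 4032))
print(n)
```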
