Hey @nonam4, I'm creating this issue to follow up on the discussion about returned coordinates/bounds being incorrect.
A Camera streams in its native resolution, and sometimes in landscape orientation, since that's just how sensors work.
If I run any ML model on such a raw native frame, I expect the coordinates to be relative to exactly that frame's dimensions and orientation, not to the preview/screen size or orientation.
This is especially relevant when I want to draw back to the Frame (with Skia Frame Processors), because then the coordinate system will be the frame's .width and .height.
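For context, here's a minimal sketch of that drawing path in frame coordinates. It assumes a `detectFaces` worklet (standing in for however this plugin exposes detection inside the frame processor) that returns bounds already in frame space; the names are illustrative, not the plugin's exact API:

```ts
import { useSkiaFrameProcessor } from 'react-native-vision-camera'
import { Skia, ClipOp, TileMode } from '@shopify/react-native-skia'

// Blur paint, created once outside the worklet
const blurFilter = Skia.ImageFilter.MakeBlur(25, 25, TileMode.Repeat, null)
const paint = Skia.Paint()
paint.setImageFilter(blurFilter)

const frameProcessor = useSkiaFrameProcessor((frame) => {
  'worklet'
  frame.render()

  // Assumption: detectFaces returns face.bounds in frame coordinates,
  // i.e. x in 0..frame.width and y in 0..frame.height
  const faces = detectFaces(frame)

  for (const face of faces) {
    const rect = Skia.XYWHRect(
      face.bounds.x,
      face.bounds.y,
      face.bounds.width,
      face.bounds.height
    )
    frame.save()
    frame.clipRect(rect, ClipOp.Intersect, true)
    // Re-render only the clipped face region with the blur paint
    frame.render(paint)
    frame.restore()
  }
}, [paint])
```

If the bounds come back pre-scaled to the screen size instead, those rects land in the wrong place on the frame canvas, which is exactly what the first video below shows.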
I am talking about these two lines: https://github.com/nonam4/react-native-vision-camera-face-detector/blob/22c87fe4af1fd44fdff7bf729dd9bebe41da176e/ios/VisionCameraFaceDetector.swift#L245-L246
Currently, when drawing a blur mask over the face using Skia Frame Processors, it looks like this:
before.mp4
After changing scaleX and scaleY to 1.0, it looks correct:
RPReplay_Final1713799605.mp4
Can you remove scaleX/scaleY from your code, and maybe provide such converters on the JS side using the Dimensions.get('window') API from React Native? I think that makes more sense, especially now that people are using Skia Frame Processors.
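For example, such a converter could look roughly like this. It's only a sketch under the assumption of a 'cover'-style preview that fills the window (scale to fill, center-crop the overflow); rotation handling for landscape sensor frames is omitted, and the helper name and Rect type are hypothetical:

```ts
import { Dimensions } from 'react-native'

interface Rect {
  x: number
  y: number
  width: number
  height: number
}

// Hypothetical helper: map a rect from frame coordinates to window
// coordinates, assuming the preview fills the window 'cover'-style.
export function frameRectToScreen(
  rect: Rect,
  frameWidth: number,
  frameHeight: number
): Rect {
  const { width: screenWidth, height: screenHeight } = Dimensions.get('window')

  // 'cover' uses the larger scale factor so the frame fills the screen;
  // the scaled frame is then centered and the overflow is cropped
  const scale = Math.max(screenWidth / frameWidth, screenHeight / frameHeight)
  const offsetX = (frameWidth * scale - screenWidth) / 2
  const offsetY = (frameHeight * scale - screenHeight) / 2

  return {
    x: rect.x * scale - offsetX,
    y: rect.y * scale - offsetY,
    width: rect.width * scale,
    height: rect.height * scale,
  }
}
```

That way the native side always returns raw frame-space bounds: callers who render overlays in React views can opt into the conversion, while Skia Frame Processor users consume the bounds directly.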