Pose estimation [solvePnP]

Hi! I’m attempting to automatically position a camera around a 3D model in Blender given an image of the model, and I found OpenCV’s solvePnPRansac(), which looks perfect for this application (solvePnP example here).

However, even though I hard-coded the 3D and 2D coordinates of these points and know all of the camera’s intrinsic parameters, the function always returns garbage values for the position of the camera. I suspect it has to do with how I’m saving the image (bpy.ops.render.render(write_still=True)).
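For context, here’s roughly what my render step looks like, plus how I’d sanity-check a point’s pixel position with bpy_extras (the object name and output path are placeholders):

```python
import bpy
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
cam = scene.camera

# Render the still to disk (placeholder path)
scene.render.filepath = "/tmp/render.png"
bpy.ops.render.render(write_still=True)

# The effective output size is resolution_x/y scaled by
# resolution_percentage -- e.g. 1920x1080 at 50% renders 960x540
scale = scene.render.resolution_percentage / 100.0
width = int(scene.render.resolution_x * scale)
height = int(scene.render.resolution_y * scale)

# Sanity check: project one mesh vertex into the camera view.
# world_to_camera_view returns normalized coordinates with the origin
# at the bottom-left, so y is flipped relative to image rows.
obj = bpy.data.objects["Model"]  # placeholder object name
co_world = obj.matrix_world @ obj.data.vertices[0].co
co_ndc = world_to_camera_view(scene, cam, co_world)
px = co_ndc.x * width
py = (1.0 - co_ndc.y) * height
print(px, py)
```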

I’m certain the 10 points on the 3D mesh have exactly correct world positions, and I’m getting the 2D image coordinates as raw pixel positions from Blender’s rendered camera view. I noticed the output resolution is always 960x540 regardless of focal length and sensor size, and I’m not sure whether that matters. My camera matrix is calculated from the focal length f and the sensor dimensions sx and sy as:

fx = width * f / sx
fy = height * f / sy
cx = width / 2
cy = height / 2

which I think is correct, but I can’t figure out why solvePnP is failing; a stripped-down version of my call is sketched below. Is there anything Blender-specific I need to take into account before running solvePnP?
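Here’s a minimal sketch of the OpenCV side. I’ve swapped my ten hard-coded correspondences for a synthetic round-trip (project points with a known pose, then recover it), just to show the shapes and conventions I’m assuming; the intrinsic values are placeholders:

```python
import numpy as np
import cv2

# Intrinsics from the Blender camera (placeholder values; f, sx, sy
# are focal length and sensor size in mm, and width/height must match
# the actual rendered resolution)
width, height = 960, 540
f, sx, sy = 50.0, 36.0, 24.0
fx = width * f / sx
fy = height * f / sy
cx = width / 2
cy = height / 2
K = np.array([[fx, 0, cx],
              [0, fy, cy],
              [0,  0,  1]], dtype=np.float64)
dist = np.zeros(5)  # rendered images have no lens distortion

# Synthetic round-trip standing in for my ten hard-coded points:
# project known 3D points with a known pose, then try to recover it
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.0, 0.0, 10.0])
points_3d = np.random.uniform(-1, 1, (10, 3))
points_2d, _ = cv2.projectPoints(points_3d, rvec_true, tvec_true, K, dist)
points_2d = points_2d.reshape(-1, 2)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(points_3d, points_2d, K, dist)
print(ok, rvec.ravel(), tvec.ravel())

# solvePnP returns the world->camera transform, so I invert it to get
# the camera position in world coordinates
R, _ = cv2.Rodrigues(rvec)
cam_pos = -R.T @ tvec.reshape(3)
print(cam_pos)
```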

Thank you very much!