Calibrate Blender Camera Using an Image and 3D Model of Objects in Image

Hello,

I’m looking for a way to get my camera in Blender to match a real-world camera, and I was hoping someone could point me in the right direction. I’ve found some stuff built into Blender that looks promising, but nothing that matches my exact use case. Here’s the situation…

I have a scene in the real world that I’ve taken a picture of. To make this example a bit more concrete, let’s say it is a basketball court. I also have a Blender model of this basketball court…the physical dimensions of the Blender model very accurately match those of the actual real-world court. For example, the length and width of the court, the positions of the baskets, and the markings on the court (free-throw lines, three-point lines, etc.) are all accurately modeled in Blender.

Is there a way to calibrate / optimize my Blender camera so that it best matches the real-world camera that snapped the actual image of the scene? In general, I think the situation is that I want to click N points on the image and, for each point I click, associate a known XYZ position with that point. For example, I would click the corner of the basketball court in the image, and then associate that with the XYZ position of that corner from the Blender model. It seems like, after I had enough points spread throughout the volume of the scene, it would be possible to “solve” for the camera’s position, orientation, focal length, etc.
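Concretely, I imagine ending up with data something like this (the pixel values are made up, and the model coordinates are just plausible court dimensions in metres):

```python
# Made-up example of the correspondences I have in mind: each pixel
# I click in the photo, paired with the matching XYZ point from the
# Blender model (metres).
correspondences = [
    # (pixel_x, pixel_y)      (x, y, z) in the model
    ((512.0,  934.0),         (0.0,    0.0,   0.0)),   # near-left court corner
    ((1730.0, 902.0),         (28.65,  0.0,   0.0)),   # near-right court corner
    ((1395.0, 411.0),         (28.65, 15.24,  0.0)),   # far-right court corner
    ((745.0,  430.0),         (0.0,   15.24,  0.0)),   # far-left court corner
    ((1068.0, 520.0),         (14.325, 7.62,  0.0)),   # centre of the court
    ((905.0,  355.0),         (1.575,  7.62,  3.05)),  # rim of the near basket
]
```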

Can this sort of thing somehow be done in Blender? And, if not, can it be done outside of Blender in some other tool that produces data that I could use to position / orient my Blender camera?

Try http://stuffmatic.com/blam-blender-camera-calibration-toolkit/

Have you looked into BLAM?

Hi Richard / SterlingRoth,

Thanks for the help. I’ve actually tried BLAM. It seems like a great tool, but I don’t think it was able to do exactly what I need (although I could certainly be wrong).

I don’t necessarily need a nicely packaged, GUI-based tool for this. Fundamentally, I think I’m just trying to solve a math problem: given N pairs of 2D image coordinates and corresponding 3D real-world XYZ coordinates, find the camera properties (position, orientation, focal length, etc.) that minimize the reprojection error, i.e. the distance between each clicked point in the real image and where the solved camera would project its 3D counterpart. Even a little Python script or something (that I could manually feed the data into) that can solve the appropriate equations would work for me.
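From a bit of searching since my first post, I gather this might be what computer-vision people call “camera resectioning” (closely related to the Perspective-n-Point problem), and that OpenCV has solvers for it. Here is a rough, untested sketch of the kind of script I mean; all the point data is made up, and I’m assuming OpenCV’s calibrateCamera can recover the focal length and pose from a single view if it’s given an initial guess and the distortion terms are locked down:

```python
# Sketch: solve for camera pose + focal length from 2D-3D point
# correspondences with OpenCV (pip install opencv-python).
import numpy as np
import cv2

# 3D points taken from the Blender model (metres).
object_points = np.array([
    [0.0,     0.0,   0.0],   # near-left court corner
    [28.65,   0.0,   0.0],   # near-right court corner
    [28.65,  15.24,  0.0],   # far-right court corner
    [0.0,    15.24,  0.0],   # far-left court corner
    [14.325,  7.62,  0.0],   # centre of the court
    [1.575,   7.62,  3.05],  # rim of the near basket
], dtype=np.float32)

# Matching pixel coordinates clicked in the photo.
image_points = np.array([
    [512.0,  934.0],
    [1730.0, 902.0],
    [1395.0, 411.0],
    [745.0,  430.0],
    [1068.0, 520.0],
    [905.0,  355.0],
], dtype=np.float32).reshape(-1, 1, 2)

w, h = 1920, 1080  # photo resolution

# Initial guess: focal length roughly the image width, principal
# point at the image centre.
K_guess = np.array([[w, 0, w / 2.0],
                    [0, w, h / 2.0],
                    [0, 0, 1.0]], dtype=np.float64)

# Solve for focal length and pose together. Lock the principal
# point to the centre and the lens distortion to zero so the
# problem stays well-posed with only a handful of points.
flags = (cv2.CALIB_USE_INTRINSIC_GUESS
         | cv2.CALIB_FIX_ASPECT_RATIO
         | cv2.CALIB_FIX_PRINCIPAL_POINT
         | cv2.CALIB_ZERO_TANGENT_DIST
         | cv2.CALIB_FIX_K1 | cv2.CALIB_FIX_K2 | cv2.CALIB_FIX_K3)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [object_points], [image_points], (w, h), K_guess, np.zeros(5),
    flags=flags)

print("RMS reprojection error (pixels):", rms)
print("focal length fx (pixels):", K[0, 0])
print("rotation vector:", rvecs[0].ravel())
print("translation vector:", tvecs[0].ravel())
```

My (possibly naive) understanding is that if the reported RMS error comes out small, say a pixel or two, the solve is probably trustworthy, and that more points spread around the volume of the scene should only help.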

The math itself is probably a little bit over my head…that’s why I’m hoping a tool or script for doing what I want to do already exists somewhere. Unfortunately, this is pretty far from my comfort zone, so I’m not even sure I’m searching for the right terms!
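That said, if the OpenCV route above works, my understanding is that the remaining step would be mapping the solved pose back onto a Blender camera, roughly like this (again untested; I’m assuming the usual axis flip between OpenCV’s camera convention and Blender’s, and that the camera object is named “Camera”):

```python
# Sketch: apply a solved OpenCV pose to a Blender camera. Run inside
# Blender's Python console, pasting in R, t, and fx from the OpenCV
# script above (R = cv2.Rodrigues(rvec)[0], t = tvec).
import bpy
import numpy as np
from mathutils import Matrix


def apply_opencv_pose(cam, R, t, fx_pixels, image_width_px):
    """R, t: world-to-camera rotation matrix and translation vector
    in OpenCV's convention (+X right, +Y down, +Z forward)."""
    R = np.asarray(R, dtype=np.float64)
    t = np.asarray(t, dtype=np.float64).reshape(3)

    # Camera centre in world coordinates: C = -R^T t.
    C = -R.T @ t

    # Camera-to-world rotation, flipped into Blender's camera axes
    # (+X right, +Y up, -Z forward).
    R_world = R.T @ np.diag([1.0, -1.0, -1.0])

    M = np.eye(4)
    M[:3, :3] = R_world
    M[:3, 3] = C
    cam.matrix_world = Matrix(M.tolist())

    # Convert the focal length from pixels to millimetres.
    sensor_mm = cam.data.sensor_width  # 36 mm by default
    cam.data.lens = fx_pixels * sensor_mm / image_width_px


# e.g. apply_opencv_pose(bpy.data.objects["Camera"], R, t, fx, 1920)
```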

I really appreciate the help, though. Any other ideas?