I am new to Blender. I found that a camera intrinsic matrix can be calculated from Blender camera parameters (see "3x4 camera matrix from blender camera"). However, I am not sure how to do the reverse: configure a Blender camera from a given custom intrinsic matrix, e.g. the intrinsic matrix of a real camera, which is straightforward in OpenGL renderers.

Assume the camera intrinsic matrix is

`K = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]]`

where `fx ≠ fy`, and `(u0, v0)` is not the exact image center (for example, for a `(height, width) = (480, 640)` image, `(u0, v0)` has some shift from the center `(320, 240)`).
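For concreteness, a hypothetical `K` of this shape, with slightly different focal lengths and an off-center principal point (all numbers below are made-up example values, not from a real calibration), could look like:

```python
import numpy as np

# Example intrinsics for a (height, width) = (480, 640) image.
# fx != fy, and (u0, v0) is shifted from the image center (320, 240).
fx, fy = 600.0, 598.0   # focal lengths in pixels (made-up values)
u0, v0 = 330.0, 235.0   # principal point in pixels (made-up values)

K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

# Offset of the principal point from the image center, in pixels
du = u0 - 640 / 2   # 10.0
dv = v0 - 480 / 2   # -5.0
```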

My default Blender camera's parameters are:

`lens=35mm, shift_x=0, shift_y=0, sensor_height=18mm, sensor_width=32mm, sensor_fit=AUTO`

The following is what I have tried:

```
import bpy

# K is the 3x3 intrinsic matrix defined above (here a numpy array,
# so it can be indexed as K[row, col])
width, height = 640, 480
sensor_width = 32.0  # mm, matches camera.data.sensor_width
u0, v0 = K[0, 2], K[1, 2]

camera = bpy.data.objects['Camera']
# This is the part I am confused about: I want to get different fx and fy
# but failed, so I just set the lens from their average: f = (fx + fy) / 2
camera.data.lens = (K[0, 0] + K[1, 1]) / 2 * sensor_width / width
camera.data.shift_x = (width / 2 - u0) / width
camera.data.shift_y = (v0 - height / 2) / width  # both shifts are relative to width
```
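As a sanity check of my own formula (pure arithmetic, no `bpy` needed, with made-up example focal lengths), converting the lens value back to pixels recovers only the average of `fx` and `fy`, which is why the two focal lengths collapse into one:

```python
# Round-trip check: lens in mm -> focal length in pixels.
fx, fy = 600.0, 598.0           # example pixel focal lengths (made-up)
sensor_width, width = 32.0, 640

lens = (fx + fy) / 2 * sensor_width / width   # what my script sets, in mm
f_pixels = lens * width / sensor_width        # focal length Blender implies
print(f_pixels)  # ~599.0, i.e. (fx + fy) / 2, not fx or fy individually
```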

In this way, I can get an approximate intrinsic matrix `K1` close to the real camera's. The Blender render using this approximate `K1` is very close to the OpenGL render using the true `K`, but comparing the binary masks of the two renders still shows a few differing pixels.
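The mask comparison I am doing is roughly the following (the two masks below are synthetic placeholders standing in for the Blender and OpenGL silhouettes; in my actual test they are loaded from the rendered images):

```python
import numpy as np

# Placeholder boolean masks for a (480, 640) render; in practice these
# come from thresholding the Blender and OpenGL output images.
blender_mask = np.zeros((480, 640), dtype=bool)
opengl_mask = np.zeros((480, 640), dtype=bool)
blender_mask[100:200, 100:200] = True
opengl_mask[100:200, 102:202] = True  # simulate a 2-pixel horizontal offset

# Count pixels where the two binary masks disagree
diff = np.logical_xor(blender_mask, opengl_mask)
print(diff.sum())  # number of differing pixels
```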

Does anyone know how to set the Blender camera parameters via the Python API to exactly match a real camera's intrinsic matrix, especially in the case where `fx` and `fy` are slightly different?

PS: I am using Blender 2.79b.