Image resolution on EEVEE


what is the difference between: Sensor Fit, focal length and X / Y resolution?

why does the resolution only adjust one axis (for example X), while Sensor Fit adjusts the other (for example Y)?


Can anyone help me with the questions above?

First of all, this is not specific to EEVEE; it’s camera-related and applies to Cycles or any other renderer as well. Secondly, the Blender manual explains exactly what all these settings mean.

Anyway, here goes:

Sensor Fit / Sensor Size = the real-world camera sensor you are matching. The default 36mm width corresponds to a standard still-camera “full frame” sensor; if you want to match a different camera, look up the sensor dimensions in the manufacturer’s spec sheet and enter them here. The Fit setting controls which axis of the frame the sensor size is applied to; Auto lets Blender pick the more reasonable interpretation (typically the horizontal size).

Focal Length = the real-world equivalent of the lens that would be attached to your sensor. For instance, a “standard” focal length is 50mm (supposed to roughly match the human field of view); if you want to go more wide-angle you decrease that number, and if you want to go more telephoto you increase it. I would suggest you familiarize yourself with real-world lens sizes and aperture ranges, as they will yield more reasonable and realistic images. For instance, a 17mm lens with an aperture of f/0.9 basically doesn’t exist in the real world.
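To make the focal length / field of view relationship concrete, here’s a quick sketch using the standard rectilinear-lens formula FOV = 2·atan(sensor / (2·focal)). `horizontal_fov` is just an illustrative helper, not a Blender API call:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=36.0):
    # Angle of view across the sensor width for a simple rectilinear lens model.
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(round(horizontal_fov(50), 1))   # ~39.6 degrees: the "standard" look
print(round(horizontal_fov(17), 1))   # wide angle: much larger angle of view
print(round(horizontal_fov(200), 1))  # telephoto: narrow angle of view
```

Shorter focal length, wider view; longer focal length, narrower view, just as with a real lens.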

X / Y Resolution = the render setting for your output frame size. So if you’re exporting to HD you would choose 1920x1080; if you’re doing 4K DCI you’d choose 4096x2160, and so on.

I’m not sure I understand what you mean by the resolution only adjusts one axis. Can you be more specific, maybe take a screenshot of the section you’re talking about?

I made a tutorial video on this exact topic that you can watch here; it’s for Houdini, but the information applies across the board:

Also…consider buying a cheap DSLR camera and shooting stuff in the real world with different lenses. It will elevate your CG skills considerably!

There was a related thread recently I’ll link to below. Basically you should just set your desired output size using resolution and then frame up your camera.

Here’s an image I posted in that thread. If you start with A and increase the Resolution Width you get B. To get the result shown in C, Blender would have to move the camera (or change the focal length), which would be undesirable for many reasons.

In Resolution, both the X and Y settings change only the vertical.

the focal length changes the depth / perspective of the image, right?
does Sensor Fit do that too?

No, they don’t. The X changes the output width and the Y changes the output height.

Try sliding the X resolution down and see what happens when it reaches and then goes below the value in the Y resolution field.
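For what it’s worth, the Auto sensor-fit rule described in the Blender manual can be sketched like this. `auto_sensor_fit` is a hypothetical helper mimicking that documented behavior, not an actual bpy call:

```python
def auto_sensor_fit(res_x, res_y, pixel_aspect_x=1.0, pixel_aspect_y=1.0):
    # With Sensor Fit set to Auto, the sensor size is applied to whichever
    # side of the output frame is longer (after pixel aspect correction).
    if res_x * pixel_aspect_x >= res_y * pixel_aspect_y:
        return "HORIZONTAL"
    return "VERTICAL"

print(auto_sensor_fit(1920, 1080))  # landscape output: sensor fits the width
print(auto_sensor_fit(800, 819))    # X now below Y: sensor fits the height
```

That flip is exactly what you should see when the X value drops below the Y value.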

Did you not read a single thing I typed? I explained it in my response.

Yes, I read it,

but this question was not clear:

the focal length changes the depth / perspective of the image, right?
does Sensor Fit do that too?

Yes or no?

thank you very much, sorry for the inconvenience

Sensor width and Sensor Fit are for matching the camera in Blender to real-world cameras. Useful if you’re compositing CG elements onto filmed camera footage or doing motion tracking. Altering the sensor width does affect the field of view, much like changing the focal length, but why mess with it when you can just alter the focal length directly?

I have a Sony camera. It has an APS-C sensor which means 23.5 x 15.6 mm. If I wanted to match my Blender camera with that I would enter those values for sensor size and put in the exact focal length of my lens (my 50mm prime lens for example).

Alternatively, I could leave the sensor size in Blender at the default full-frame size and adjust the focal length to the full-frame equivalent, which for my 50mm lens would be approximately 75mm.
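The arithmetic behind that equivalence is just the ratio of sensor widths (a rough sketch; the exact math gives closer to 76.6mm, and the often-quoted ~75mm figure comes from the rounded 1.5x APS-C crop factor):

```python
def full_frame_equivalent(focal_mm, sensor_width_mm, full_frame_width_mm=36.0):
    # Focal length that would give the same horizontal field of view
    # on a 36mm-wide "full frame" sensor.
    crop_factor = full_frame_width_mm / sensor_width_mm
    return focal_mm * crop_factor

print(round(full_frame_equivalent(50, 23.5), 1))  # 50mm on APS-C: roughly 76.6
```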

Familiarising yourself with how real cameras work, as @Midphase suggested, is great for helping to create stuff in Blender.

With a real camera, if you want to fill the frame with the subject you don’t alter the output dimensions of the image. You either move the camera nearer or further, or you zoom in and out (change the focal length).

I suppose that’s a way to put it. More correctly, it changes the field of view of what the sensor sees, but with that come other side effects: more distortion of the image at wide-angle settings, and an exaggerated sense of perspective with objects that are close to the camera. Telephoto has the effect of compressing the distance between objects.

The Sensor Fit/size does affect the effective field of view by cropping the image that the lens presents. So for instance, on a 36mm sensor a 24mm focal length will feel very wide, but on a 16mm sensor it will feel more like a 50mm lens. Do some googling and look on YouTube for plenty of videos that will show you how all these things are interconnected.
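To put rough numbers on that 24mm example: scaling the focal length by the ratio of sensor widths gives the full-frame-equivalent look (a purely illustrative helper, not a Blender API):

```python
def equivalent_on_full_frame(focal_mm, sensor_width_mm):
    # Scale by the sensor-width ratio to get the focal length that would
    # frame the same view on a 36mm-wide sensor.
    return focal_mm * 36.0 / sensor_width_mm

print(equivalent_on_full_frame(24, 36))  # 24.0: on full frame it stays a 24mm
print(equivalent_on_full_frame(24, 16))  # 54.0: on a 16mm sensor it frames like a ~50mm
```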

As John suggested, you really shouldn’t need to change the Sensor Fit value unless you’re specifically trying to match a real-world camera. Even then, you might want to use something like fSpy which will give you more accurate values.


You said:
I’m not sure I understand what you mean by the resolution only adjusts one axis. Can you be more specific, maybe take a screenshot of the section you’re talking about?

How do I record a video of my Windows 10 / Blender screen to show what happens here?

Image 1: Full HD
Image 2: X axis increased
Image 3: Y axis decreased

Both X and Y change the height of the frame.

They’re not; it just looks that way because the aspect ratio of 2560x1080 is similar to that of 1920x819. For instance, try a value like 800 for X and 819 for Y, and you’ll see that now the width changes and not the height.
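You can check the ratios quickly (these are the resolution values from the screenshots in this thread):

```python
# Width-to-height ratios for the frame sizes discussed above.
for w, h in [(2560, 1080), (1920, 819), (800, 819)]:
    print(f"{w}x{h} -> aspect {w / h:.3f}")
```

The first two ratios are nearly identical, which is why the change looked vertical-only; the last frame is taller than it is wide, so the Auto sensor fit flips to the vertical axis and the width changes instead.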