Are objects not seen from camera angle rendered in an image?

Trying to save render time.

So here’s my situation: I’ve made an aircraft model that has interior cockpit details for when I render images for the inside.

If I render an image of the aircraft from the outside (i.e. you can’t see the cockpit), will those details inside be rendered, since they are not visible from the camera view?

And I can’t move all these details to a new layer or hide them, because of things like the seat having to be visible through the canopy glass. I’m not sure if only the part you can see through the glass will be rendered, or whether it would still use the entire seat object, most of which can’t be seen from the camera.

(And yes, I’m still new to this. I do understand that a render is a generated image, but since this is not an animation I don’t know if it will use all the vertices that are hidden behind objects, facing away from the camera, etc.)

Thanks!

I’m not sure if they are completely discarded or not, but I have noticed that the further away certain things are from the camera, the faster they render, and if they are off screen the scene also renders faster. Though I do think you still lose time during the initial calculation of the scene, since it still has to account for the presence of the object.

If you are referring to Cycles (or any raytracer, really), the answer is “sorta”.

In one sense, yes, they are rendered, since the raytracing calculation traces against all objects in the scene. There is no way to know whether an object is visible unless you actually try to trace against it (or unless the user has marked it invisible with something like the ray visibility flags). The entire work of rendering is figuring out what is visible, so you can’t just eliminate “invisible” objects to save time.

In another sense though, it doesn’t “render” them per se. Raytracers like Cycles do not render objects directly; they render screen pixels and check what is visible to those pixels. While this does involve looping over scene objects, acceleration structures mean they never truly touch the vast majority of polygons on any particular lookup. This is responsible for the effect davidnaylas talks about above, where obscured objects have little effect on render time: rays never come close to hitting them, so Cycles never even has to touch them. This is also why the relationship between polycount and speed in a raytracer isn’t as simple as with a rasterization-based renderer.
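Why obscured geometry costs so little can be sketched with a toy bounding-box test (illustrative only; this is not Cycles’ actual BVH, and all the scene data here is made up). A ray first tests an object’s cheap bounding box, and the expensive per-triangle tests only run if the box is hit:

```python
# Toy illustration: a ray tests an object's bounding box first, and the
# (expensive) per-triangle tests run only if the box is hit.

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does the ray enter the axis-aligned box?"""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:              # ray parallel to this slab
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0.0

triangle_tests = 0

def trace(origin, direction, objects):
    global triangle_tests
    for box_min, box_max, triangles in objects:
        if not ray_hits_aabb(origin, direction, box_min, box_max):
            continue                    # whole object skipped: zero triangle work
        for tri in triangles:
            triangle_tests += 1         # stand-in for a real ray/triangle test

# One object straight ahead of the ray, one "cockpit" object far off-axis.
visible  = ((-1, -1, 5), (1, 1, 6), ["tri"] * 1000)
off_axis = ((50, 50, 5), (52, 52, 6), ["tri"] * 100000)

trace((0, 0, 0), (0, 0, 1), [visible, off_axis])
print(triangle_tests)   # only the visible object's 1000 triangles were tested
```

The off-axis object contributes 100,000 triangles to the scene but zero triangle tests to this ray, which is why polycount alone is a poor predictor of path-tracer render time.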

It depends on your scene. Taking Cycles as an example: an object that is completely hidden from the camera can still affect the scene via emitted light, bounced light, and reflections. Removing or hiding an object just because it can’t be seen directly by the camera could very much affect the scene you are rendering.

Take a look at this scene. It contains 3 diffuse cubes: a white one, a green one, and a red one. The red and green are completely hidden from the camera (the red is off to the left in front of the white cube, and the green is hidden behind it). But look at the effect these objects have on the scene, simply because of the light that is bouncing off them.
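The color bleed in a scene like that is easy to estimate by hand. A toy one-bounce calculation (made-up albedos and light values, nothing like Cycles’ actual integrator) shows how a hidden red object tints what the camera sees:

```python
# Toy one-bounce color-bleed estimate (illustrative numbers only).
# A white cube face receives direct light plus light that first bounced
# off a hidden red cube; the red bounce tints what the camera sees.

direct_light = (1.0, 1.0, 1.0)    # white light reaching both cubes
red_albedo   = (0.8, 0.1, 0.1)    # hidden red cube
white_albedo = (0.8, 0.8, 0.8)    # visible white cube
bounce_share = 0.25               # fraction of light arriving via the red cube

def mul(a, b):
    return tuple(x * y for x, y in zip(a, b))

# Light that bounced off the red cube before reaching the white one:
bounced = tuple(bounce_share * c for c in mul(direct_light, red_albedo))

# What the camera sees on the white face: direct + red-tinted bounce.
seen = mul(white_albedo, tuple(d + b for d, b in zip(direct_light, bounced)))
print(seen)   # red channel ends up brighter than green/blue
```

Delete the red cube and `bounced` drops to zero, so the white face loses its tint; that is exactly the change you would see in the render.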


I guess it’s down to you to decide whether the objects in question have the potential to affect your scene or not.

You definitely want to exclude from the renderer’s consideration any geometry that you know will not contribute to the final visual.

Ideally, you would not link those objects to the shot-file at all. Or, organize the objects into layers such that you can use layer-masks to omit from consideration everything that cannot be in view or in any other way contribute to the shot. Don’t give the renderer a decision to make (because it will waste time making it…) if you already know what the answer should be.

Be aware of indirect effects: things that block the light, things off-camera that still have an effect, and so on. However, in these cases you have three plausible choices:

  • “Go for by-gawd physical correctness.”
  • “Ingrid, fake it.” Notice what a plausible effect will be, but find another computationally-cheaper way to approximate it.
  • Purposely omit and ignore it, on the assumption (probably true) that no one but the folks at BlenderArtists will actually notice or care.

“Is it really important to the plot? Will it affect the quality and the believability of the show?” You should decide on a shot-by-shot(!) basis just how much “cheating” you can reasonably get away with, and then “cheat” in every way that you can. As long as the resulting show looks good and you beat the deadline, “you win.”

Not much sense in manually turning things on and off in a path tracer like that. Polygon count has VERY little effect on path-tracer render time with a good BVH in place, and Cycles definitely has a good BVH. There are some things I would like to see eventually (don’t subdivide until the first ray hit, for example), but most of them require subdivision to happen at render time rather than before export, so they aren’t possible just yet.