Modelling very large scenes

I am putting together a still image that will include objects ranging in size from a few millimetres to several kilometres. There will be a distant scene set out as a backdrop and some much smaller objects in the foreground. Here is a first attempt at the backdrop:

I am using Blender 2.79 and have been trying to use adaptive subdivision (especially on the sea), but it doesn't always appear to work (sometimes it renders black). I am wondering whether the sheer spatial range of the scene is causing problems with the numbers (rounding errors etc.). I can also imagine there being similar inefficiencies in rendering things at this scale.

Of course, there is no reason why I should not shrink the backdrop to the point where the difference in scales is less extreme, but I have found there can be problems with that approach: foreground lighting, say, can 'bleed' into the backdrop in a way that is unrealistic.

I would be grateful for any words of advice from people who have experience of working on similar projects.

Examples of other things I would like to understand:
How do you manage textures on objects that extend far into the distance (like sea or a plain)?
Are there ways of breaking the scene down into layers in such a way that the lighting will appear seamless?
Is adaptive subdivision reliable in 2.79?

I will move to Blender 2.8 eventually but right now I can’t afford to take the hit required to make the switch.

All insight and advice welcome.

David Wilson (omnivorist)

Adaptive subdiv and normal mapping do not co-exist in the same material.

Thanks … but I’m not doing normal mapping anyway. Problems seem to happen when I have two different adaptively subdivided objects in the same scene.

For now I have backed off and am doing it the old way.

Spatial dynamic range can definitely be an issue. Blender uses single-precision floating-point numbers, so depending on your scene scale there are only a limited number of positions along a given linear dimension where you can place vertices.
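
You can see the scale of the effect with a few lines of Python outside Blender (assuming NumPy is available):

```python
import numpy as np

# Gap between adjacent representable float32 values near a coordinate.
# With ~24 bits of mantissa, absolute precision shrinks as coordinates
# grow, so detail far from the origin starts to snap and jitter.
for coord in (1.0, 1000.0, 100000.0, 10000000.0):
    gap = np.spacing(np.float32(coord))
    print("near %12.0f units, positions snap to ~%g" % (coord, gap))
```

At 10 km from the origin (in metre units), adjacent representable positions are already about a millimetre apart, which is exactly the range you describe.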

Some more on this at: Cycles sun cast shadow position is WRONG!

Adaptive subdivision is significantly improved in 2.80, but it's also rather buggy at the moment if you try to use it in a rendered-mode 3D view (likely to crash or go into a loop allocating memory when switching view modes or changing subdiv parameters). It seems to work fine for F12 renders.
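
For reference, this is roughly how adaptive subdivision is wired up through Python in 2.79; a sketch that assumes Cycles is the active render engine and that the object (your sea plane, say) is active:

```python
import bpy

scene = bpy.context.scene
obj = bpy.context.object  # e.g. the sea plane

# Adaptive subdivision only exists in Cycles' experimental feature set.
scene.cycles.feature_set = 'EXPERIMENTAL'

# It drives a Subsurf modifier, dicing polygons at render time based on
# their size on screen.
if 'Subsurf' not in obj.modifiers:
    obj.modifiers.new('Subsurf', 'SUBSURF')
obj.cycles.use_adaptive_subdivision = True
obj.cycles.dicing_rate = 1.0  # lower = finer tessellation, more memory
```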

If the scope of this is large enough, this is where you might want to break the single scene up into a folder of linked files, come up with an asset-management scheme for all the linked models, and split foreground and background into separate render passes which are then composited back together. (Although I haven't gotten into such a project myself, I've heard that a large enough spread in the scale of things may bring some issues with camera clipping.)
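
The linking part can be scripted too; here is a rough sketch against 2.79's API (the path and the one-.blend-per-set-piece layout are just placeholder assumptions):

```python
import bpy

# Hypothetical layout: each major set piece lives in its own .blend.
backdrop_path = "//assets/backdrop.blend"

# Link (rather than append) so that edits in the source file propagate
# to every scene that uses it.
with bpy.data.libraries.load(backdrop_path, link=True) as (data_from, data_to):
    data_to.objects = data_from.objects

# Linked datablocks still need to be placed into the scene.
for obj in data_to.objects:
    if obj is not None:
        bpy.context.scene.objects.link(obj)
```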

Thanks for this. I have some familiarity with the limitations of numerical precision but I had no idea what type of numbers Blender uses. I appear to have arrived at a compromise where my background is at approx one tenth scale (and placed closer to the foreground in consequence).

Also useful to hear that adaptive subdivision is buggy. But I am not aware of the difference between a 3D view in render mode and what you call an F12 render. I normally do a preview render in the view and a full render from the side panel.

If you have the time, maybe you could explain - but thanks for your help in any case.

Yes - I plan to look into a multi-pass approach.
Right now the mist setting appears to affect the foreground regardless of where I specify the mist to start.
I imagine I might choose to render the foreground separately with no mist.
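
For reference, the mist controls in Cycles live on the world and the render layer; a small sketch assuming 2.79 and the default layer name:

```python
import bpy

scene = bpy.context.scene

# In Cycles, mist arrives as a separate render pass rather than being
# baked into the image, so the compositor decides what it touches.
scene.render.layers["RenderLayer"].use_pass_mist = True  # default name

# The falloff comes from the world's mist settings (placeholder values,
# in scene units).
scene.world.mist_settings.start = 500.0
scene.world.mist_settings.depth = 5000.0
scene.world.mist_settings.falloff = 'QUADRATIC'
```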

And yes, camera clipping is a real pain. I find I have to keep setting it higher every time I switch into edit mode. (There doesn't appear to be a way to set default values.)
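
One workaround that seems to exist: the viewport clip range can be set by script and then saved into the startup file (File -> Save Startup File). A sketch for 2.79; the values are just guesses for a scene like this:

```python
import bpy

# Apply a generous far clip to every 3D view in the current screen.
# Note: the camera object has its own clip_start/clip_end on its data.
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        space = area.spaces.active
        space.clip_start = 0.1       # too small a near clip plus a huge
        space.clip_end = 100000.0    # far clip can cause z-fighting
```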

Thanks for your reply

In the 3D view you can change the display mode to "Rendered", which, for Cycles, causes it to do a progressive-refinement render of the current view up to the number of viewport samples configured. This has to be smart enough to restart the render whenever you move the view or make any other change, and this is what is buggy in 2.80 at the moment: adaptive subdivision does not get properly re-tessellated every time the view or a parameter changes, and as a result it frequently behaves strangely or crashes/locks up.

When you hit F12 (or menu Render -> Render Image) to perform a final render, everything is passed to Cycles once, and it does not have to deal with restarting the render when you make changes, so it is not subject to the problem. The non-interactive renders, which by default calculate the image in tiles or buckets, are significantly faster than the progressive mode used in the viewport.
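
The two modes also have separate sample counts, which is easy to miss; the relevant 2.79 properties look like this (the values are arbitrary):

```python
import bpy

scene = bpy.context.scene

# The viewport's "Rendered" mode refines progressively up to this count.
scene.cycles.preview_samples = 32

# F12 / Render Image uses this count instead, rendering in tiles.
scene.cycles.samples = 200
scene.render.tile_x = 256  # tile size mainly matters for GPU vs CPU
scene.render.tile_y = 256
```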

Thanks - that’s really clear. I am familiar with the non-interactive (final) render but hadn’t realised you can kick it off from F12.
Interestingly, I often use progressive refine for the final render, as opposed to tiles. I have timed it on a number of occasions and don’t find it any slower. I imagine that this may not be true for very complex scenes.
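
For reference, the progressive-refine toggle is also exposed to Python (2.79 property names; the sample count is arbitrary):

```python
import bpy

scene = bpy.context.scene

# Refine the whole frame progressively instead of finishing tile by
# tile; the sample count still sets the stopping point.
scene.cycles.use_progressive_refine = True
scene.cycles.samples = 200
```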

A lot of people like the progressive-refinement way of working, but the devs have said it's not a priority for them and that they consider it inferior to the regular render mode. There are also some issues: for example, if you set up an unlimited progressive render, you have to “cancel” the render when you get to where you want to stop, and Blender will then not run any compositing process you have set up, which is kind of annoying. I put in a suggestion on Right Click Select a while back that there should be a way to say you want to run the post-process stuff even if you only have a “partial”, cancelled render.

Good point. I always tend to specify my required sample count in any case. And although it is necessary to let it run to the end to be able to use the compositor, it is useful to be able to interrupt a render run if there is something not right about the overall ‘look’ as it emerges.