Workflow for big sci-fi scene - texture baking or render layers?

I’m working on a sci-fi scene showing something like the interior of a giant spaceship, or maybe a giant building. I have foreground elements with a face count of 500,000, which is about as much as my computer can handle. I still need to include background elements and I’m not sure which way to go. There will be buildings (big cylinders) and bridges fading off into the dark/mist. No ground, no sky. I’m using some UV-sculpting techniques, as well as arrays and mirrors, to build everything. Should I

  1. model the complete background and put it on a different layer and a different render layer?

  2. model the background, do texture baking and build low-poly buildings to go with the rest of the scene? I don’t yet know how to do this step, but there are tutorials.

I think both methods would work, since the background and foreground are separated pretty well. Modelling everything would actually be faster than baking things and creating low-poly versions. So which way would you recommend, if it’s only a matter of setting up the scene so that my system doesn’t get stuck on the face count?


I am currently working on a large scene and I find low-poly versions invaluable to my work, especially if you want to animate. If you are just producing a still, it may not make that much difference.

My workflow is like this: I have a scene for every object, in my case spaceships. Each scene has a full-quality and a low-quality version of the ship, and I have added each ship to its own group, named like so: ship-1_proxy and ship-1_render. Then I have a master scene where I use Empties to dupligroup the ships into the scene, with an Empty in the same location for each version of the ship. I set the proxy version to be visible in the 3D viewport and invisible in the render, and the render version to be invisible in the 3D viewport and visible in the render. The net result is a fairly interactive viewport and high-quality renders. This requires some setup, but once you are up and running Blender seems to work fine. Also, if you just need to crank out a quick render of the scene, you can turn on the proxy for rendering and turn off the hero mesh in the outliner.
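The visibility flipping above can be scripted. Here is a minimal sketch assuming Blender's Python API (`bpy`, 2.7x-era `hide`/`hide_render` object flags) and the `ship-1_proxy` / `ship-1_render` naming convention from the post; the ship names and the idea of driving it from a script are my own illustration, not something the poster described.

```python
# Sketch: set up proxy/render visibility pairs, following the
# ship-1_proxy / ship-1_render naming convention described above.

def group_names(ship):
    """Return the (viewport proxy, render-quality) names for a ship."""
    return ship + "_proxy", ship + "_render"

def configure_visibility(proxy_obj, render_obj):
    """Proxy: visible in the viewport, skipped at render time.
    Render version: hidden in the viewport, used at render time."""
    proxy_obj.hide = False          # show in 3D viewport
    proxy_obj.hide_render = True    # skip when rendering
    render_obj.hide = True          # hide in 3D viewport
    render_obj.hide_render = False  # include when rendering

try:
    import bpy  # only available inside Blender
    for ship in ("ship-1", "ship-2"):  # hypothetical ship names
        proxy_name, render_name = group_names(ship)
        proxy = bpy.data.objects.get(proxy_name)
        render = bpy.data.objects.get(render_name)
        if proxy and render:
            configure_visibility(proxy, render)
except ImportError:
    pass  # running outside Blender; the helpers above still work
```

To render with the proxies instead (the "quick render" case), you would simply swap the two `hide_render` assignments.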

I am sure there are other workflows. For me external linking is such a hassle I don’t bother with it.

You have another similar technique available: model and texture your background, render it, and put that image on a background plane. Or paint it, or both. This is a common technique: image planes in the background, low-poly meshes or planes in the midground, and high-poly, high-res foreground elements as needed. It depends on the shot, and you have to plan it out well. Cheat everywhere you can!
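One practical detail when placing such a background plane: how big it must be to fill the camera view. For a camera with horizontal field of view θ, a plane at distance d needs a width of 2·d·tan(θ/2). A small hypothetical helper (not from the thread) to do that arithmetic:

```python
import math

def backdrop_size(distance, fov_deg, aspect=16 / 9):
    """Width and height a camera-facing plane needs at `distance`
    to fill a camera with horizontal field of view `fov_deg`."""
    width = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    return width, width / aspect

# e.g. a backdrop 100 units from the camera, with a roughly
# 49-degree horizontal FOV (about what Blender's default
# 35 mm camera gives), for a 16:9 render:
w, h = backdrop_size(100.0, 49.0)
```

At d = 1 and a 90° FOV the width comes out to exactly 2, which is a handy sanity check.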

@Atom Hey that’s a great technique. In-file linking. I never think about using the powerful Scenes feature in Blender. In my toolbox of tricks it goes. Thanks.



I really like the proxy technique; I just tried it out and it’s working quite well.

At the moment I’m facing another problem. It appears that my overall poly count is fine for Blender Internal to handle, but in Cycles I’m having a hard time previewing materials and lighting. I turned light paths all the way down to 1 bounce, but my scene still renders unbearably slowly. No GPU rendering possible. :frowning:

I’m using emission materials as light sources. Some of them need to be visible, but others I could replace with standard lamps. Would that render faster?

You may be facing the memory limit of your video card. That is why GPU rendering will not replace CPU rendering…just yet. Blender Internal does support Indirect Lighting, so you can get glowing surfaces to light other surfaces; they just won’t cast shadows. Blender Internal may render faster and without the noise issues you have with Cycles.

Well, I have an ATI GPU, which means I don’t have any GPU acceleration in the first place.

I switched my scene to Internal and now I’ll see how far I can get with it. It’s easier to preview, since I can turn things like shadows and mirrors on and off, and it gives me more ways to influence quality than Cycles offers at the moment.

Do you think that applying arrays and mirrors helps to cut down render time? The modelling on my background scene is almost done and I can navigate the viewport easily even with over a million vertices, but getting the lighting and materials right is a pain, because I can’t make out anything below 200 rendered samples, which takes at least half an hour on my CPU.

I’ve separated some more objects and applied the Decimate modifier, but that didn’t help much, since my detailed displacements with a high face count are fairly close to the camera.