'Nostromo' environment study

These look really amazing, NiZu!
On the short animation test, how did you get the smoky/hazy effect? Sprites?
For Direct Light, this stuff looks very impressive. How many samples did you usually do while rendering?
Anyway, I love it, and I love your approach to animation render testing. Being a testmonkey is always fun!
Great stuff all the way around.

Hi, thanks!

The smoke/haze is made as a separate pass, with the smoke simulation rendered in BI (so in render time: to the 3-4 mins per frame, add 30 mins for baking the sim and 30-60 secs per frame for the smoke render).
The smoke doesn’t ‘interact’ with the geometry (as a simulation). It’s then rendered with the full scene geometry present, but with a sky/0-alpha material, to mix the two elements in depth… so geometry occludes the smoke even though they don’t interact as motion. Not the cheapest or most accurate way, but it seems to work well.
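
The final mix is basically just an alpha-over in the compositor; roughly something like this (a minimal sketch, the file names are only placeholders):

```python
import bpy

# Minimal compositor sketch: put the BI smoke pass over the Cycles beauty render.
# 'beauty.png' / 'smoke_pass.png' are placeholder names for the two rendered passes.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

beauty = tree.nodes.new('CompositorNodeImage')
beauty.image = bpy.data.images.load('//beauty.png')

smoke = tree.nodes.new('CompositorNodeImage')
smoke.image = bpy.data.images.load('//smoke_pass.png')  # geometry rendered with the 0-alpha material, so its alpha already carries the occlusion

over = tree.nodes.new('CompositorNodeAlphaOver')
out = tree.nodes.new('CompositorNodeComposite')

tree.links.new(beauty.outputs['Image'], over.inputs[1])  # background: beauty render
tree.links.new(smoke.outputs['Image'], over.inputs[2])   # foreground: smoke pass
tree.links.new(over.outputs['Image'], out.inputs['Image'])
```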

I still want to revise the flow speed of the smoke: it’s still a bit too fast (I need to play with the temp. diff. parameters for that).

About samples: it’s all non-progressive (some details in previous posts), so sampling is not a single parameter. The base is 4-8 AA samples everywhere, then you multiply that by 6-8 for AO sampling, and by 1-4 for lamps (most lamps have size 0 so they need 1 sample; just a few have a 1-10 cm size so they require 2-4 samples).

So each ‘aspect’ of a pixel (light, AO, textures) gets sampled around 6-60 times… which is a great way to speed up these very simple lighting scenarios, which can be solved with fewer than 100 samples. It doesn’t seem to work well when things get more complex, though: with bounces and soft lamps, sampling everything ‘uniformly’ (progressive mode) performs better…
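
As a back-of-envelope check of those numbers (nothing exact, just the ranges from above):

```python
# Rough per-pixel sample counts for the non-progressive setup described above
aa_samples = (4, 8)    # AA samples per pixel
ao_mult    = (6, 8)    # AO samples per AA sample
lamp_mult  = (1, 4)    # samples per lamp per AA sample

print("AO samples per pixel:  ", aa_samples[0] * ao_mult[0], "-", aa_samples[1] * ao_mult[1])     # 24 - 64
print("lamp samples per pixel:", aa_samples[0] * lamp_mult[0], "-", aa_samples[1] * lamp_mult[1])  # 4 - 32
```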

Hi NiZu,

Your work is just amazing. It is really a great inspiration, and very detailed work. Thank you for sharing your know-how; it is appreciated.

Besides, what is the size of your texture maps for the sci-fi corridor? 4k?

Hi, thanks!

No 4k textures here; I always use 2k for my tileables library (sometimes I save a 4k for special cases).

Then I blend multiple layers using dirtmaps.

There are around 5-6 dirtmaps in this scene, all 1k×1k. The remaining 90% of objects get their dirtmap from vertex color.

The ‘tileables’ photo textures number around 30 (all 2k maps: some grunge patterns, some proper materials with dif/spec/bump maps).

Which is a lot of images in memory!… It’s not optimized (I could have picked 15 tileables carefully and it would look the same), and the beauty is that on a much larger scene I wouldn’t need many more tileables, only more low-res dirtmaps.

Doing some rough math… if I had baked everything to flat textures (of course much slower, manual work), I guess it would mean:

3 × 4k maps for all objects (which would give comparable resolution) × 3-4 channels (dif/spec/norm, maybe glossiness) =
equivalent, in total size, to 30-50 2k maps.

…Which makes sense; I guess this is about the size of scene where dirtmaps + blending layers take about the same memory as baking everything down.
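
A quick sanity check on that arithmetic (pure back-of-envelope, pixel counts only):

```python
# One 4k map holds 4x the pixels of a 2k map (4096^2 vs 2048^2)
px_2k = 2048 ** 2
px_4k = 4096 ** 2

baked = 3 * px_4k * 4                    # 3 x 4k maps, ~4 channels (dif/spec/norm/gloss)
print(baked / px_2k)                     # 48.0 -> roughly the "30-50 2k maps" above

library = 30 * px_2k + 6 * 1024 ** 2     # ~30 2k tileables + ~6 1k dirtmaps in this scene
print(library / px_2k)                   # ~31.5 -> about the same ballpark
```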

Hi NiZu,

Thank you so much for your detailed explanation. I think I will aim for 2k, to be on the safe side.
I mainly do portrait work ( http://www.mduque.com ) and your sci-fi corridor really gave me the push to change my art direction and create sci-fi corridors/environments. Big thank you.

I have already subscribed to your blog and, if you don’t mind, I will be asking for your feedback.
Looking forward to seeing more work from you.

Hi,
Thanks! And don’t hesitate to PM me if you want feedback on a thread of yours.

A few more things about texture sizes… to clarify/generalize, the cases I consider are:
A) Unwrapping a model for ‘custom’, ‘specific’ texturing: then I start with at least one 4k texture (for a face, a vehicle…) or a few 2k maps (one each for the 3-4 main pieces of a room).
B) Making a library of tileables for general re-use: then I stick to 2k. I do have an 8k generic metal somewhere (made by stitching several 16 MP photos) and 4k versions of the scanned grunge and scratch maps for Mango.
It would also be cool to have multiple resolutions (512 to 4k) of each re-usable texture to load in different projects… but it’d be a mess to manage, and 90% of the time it works perfectly to load the 2Ks… so I keep it simple…

Back to the project… I’m currently a bit busy and don’t have any updates, but I have an old test here:

The next piece of the puzzle (next thing to test) would be some matte painting: a shot of a corridor showing something out of a window.
I’ve never done matte painting before, so I started rendering these to a plane and thinking about cards and perspective…


But I’m still undecided between the original idea (these towers + a photo of space or Mars) and a city/spaceport: nothing too complicated though, just a concrete ‘landing pit’ and some buildings in the background… The 2nd idea is much less of a matte paint, but I can always use some practice on mid/background buildings.

One interesting thing about these is the design… or lack of it! :smiley:
It’s just a messy mix of simple models with a generic hull-plates texture on top… And that’s why it works: a cheap trick that delivers (relative to the time and effort put into it, of course!).

And there’s a bit of backstory here: the components for these towers are models I made for TOS that, I think, were never used (like the rest of the spaceport assets Ian and others worked on). Maybe there’s some trace of them in the backgrounds of the city, but during the work-in-progress the idea was to have a much more complex and definite ‘spaceport’ in the cityscape.

The towers and spaceships in the storyboard were a fascinating challenge, but they got progressively left aside as things progressed; I guess it was too much to design and detail, and it wasn’t a strictly vital ingredient.

That’s definitely the case for a smaller part of it: there was an even earlier model of these towers which I modelled, and it was horrible (please don’t dig it out of the blog :smiley:). Luckily, I realised it wasn’t going anywhere and gave priority to other stuff.

What was wrong with it? Well, it was too good a challenge: mixing the concept silhouettes with details from real-world Amsterdam architecture! Not just the typical sci-fi stuff, but a mix of old and new that looked typical of the city…
Great! And also easily a month of work for a background card :frowning: … not doable unless you have years and millions for a movie.

So a very small detail, but with a good story: I guess creatives must really develop a sense for when it’s right to take the risk of thinking, designing and refining, and when it’s better to use dirty tricks and proven quick solutions.

The render you see here was done with this quick-and-dirty philosophy, with no pretension of original design, and took 1/100 of what it would take to do the same ‘properly’ with a more original design.

(…And to be honest, don’t discount a year of practice since then :slight_smile: especially in learning Cycles shading and textures.)

Still, in any case, I have here a book with the drawings of Michel de Klerk containing a water tower design that begs to be turned into some crazy sci-fi control tower, and I have to do that someday!

Hi NiZu,

Wow, that was a heck of a lot of detail. Great to know how you work and what inspires you.
Work has been taking all my spare time lately; I will post something as soon as I have something to show.
I will definitely PM you :wink:

What I miss from the movie, and don’t see in Cobb’s art, are the lighting fixtures. They used rotating yellow hazard-type lights that raked the smoke, backlit grilles along the kick space near the floor, and backlit computer-type racks along the passageways. Many set photos online seem to be taken with a flash, which kills the atmosphere. Frame grabs from the movie would be more representative.

NiZu, I love your threads and video tutorials, and I really appreciate you showing us how to use Cycles in a new way to get the faster renders. I keep trying to get my head around your 3 Stencil setup from just looking at the videos, and I still fail at trying to replicate the effect. I downloaded the texture atlas script to get AO for dirtmaps, and I tried the batchbaker script as well. All cool stuff, but I’m so far behind :slight_smile: Very fun stuff, and I hope to get some stuff textured with similar real world feel.

This thread is very interesting, and I’ll be doing many of my own experiments around these results. I have looked all over, including your blog, but can’t find a download for the node group material setups, or the stencil node group (especially V3). I’ll trawl the videos to try and find stills for the setups that you have so helpfully explained. I did find the autobake script, and I’ll be trying that today.

Typical. After searching for hours last night and more today, then posting to say I couldn’t find a link to the stencils… one click later and I’ve found a link to the stencils. Fab. Still searching for the material ‘shader’ preset nodegroups, etc., though.

zzero101, mind sharing the link? I haven’t seen anything but the messy noodles in the simple quadbot that I downloaded earlier - but from the videos and the screenshots, NiZu has a much cleaner grouping. I’m working on understanding how to use color ramps to determine the blend amount, but I’m stuck.

I found the link through the author’s YouTube channel, Nicolò Zubbini. Some really interesting videos. Here is the link to the autobake script https://github.com/nizu/BatchBake/arc… and this is a blend containing the dirtmap stencil node groups: http://www.pasteall.org/blend/18165.

I’m having trouble implementing it all together, though. The autobake works, and I can see the AO correctly in textured view, but when it’s plugged into a shader via an image node, the UV faces are strangely rotated or something; all the faces are wrong. I can’t seem to work it out. It doesn’t seem to be the stencil nodes.

Any ideas?

Let me try it out and get back to you - I can’t think off the bat why it would rotate UVs. I would suggest double-checking first that the normals aren’t inverted. Thank you for the links, I will dive in ASAP.

I can’t reproduce anything that looks like rotated faces, but I do know that it is important to bump up the blending on the box mapping for the textures. It works pretty well on the 3X Stencil v2, though I basically only plugged the three box-mapped textures into the black, grey, and white inputs and plugged my UV-mapped dirt map into the first slot. I need to watch the tutorials again to figure out what to do with the inputs for the black and white noise, though just bumping the values on the slider smoothed the transition between the box-mapped textures. Very cool stuff, very very cool stuff. I want to go out with my camera now to get new textures!

Thanks so much for the help, and thank you NiZu for this asset :smiley:

Hi, thanks everyone!
Lots of posts in a few days that I haven’t replied to.

3PointEdit: I totally agree about the lighting, I’ll keep that in mind. In fact, for the 2nd shot I wanted exactly all that stuff: light changing over time, computer screens, interaction with smoke… I got sidetracked, but I should do a serious ‘animation’ pass on that scene’s lighting.
It would also be cool to match Alien’s intro more closely in the modelling, but for this test I want to keep the freedom to model and test random sci-fi stuff, and the speed of staying more generic…

Craig Jones and zzero101: I really need to prepare that ‘resources’ page on my blog! I know it’s a mess to put together the stuff I posted around.

About the glitches (rotated faces): do you have a link to an image or a thread? Point me to it and I can try to see what the issue is. It doesn’t seem strictly related to the script; have you tried baking with the ‘normal’ AO baking in Blender, and do you get the same errors?

Or, more likely: do you have multiple UV channels on your object? If yes, you might need to specify the right one for the dirtmaps (see the little sketch after these steps):
Add an ‘Attribute’ node,
type the name of the dirtmap UV channel into it,
plug its Vector output into the vector (UV) input of the dirtmap.
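
In Python that hookup looks roughly like this (just a sketch; the material and node names are made-up examples):

```python
import bpy

# Point the dirtmap image texture at a specific UV channel via an Attribute node.
mat = bpy.data.materials['MyMaterial']      # hypothetical material name
nodes = mat.node_tree.nodes
links = mat.node_tree.links

attr = nodes.new('ShaderNodeAttribute')
attr.attribute_name = 'DirtmapUV'           # must match the UV channel name on the mesh

dirt = nodes['Dirtmap']                     # hypothetical: the dirtmap image texture node
links.new(attr.outputs['Vector'], dirt.inputs['Vector'])
```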

Sorry, I have been busy and didn’t manage to gather all the materials onto one page (and I still have to release some minor updates to the stencil nodegroup!).
I’ll do that as soon as I can; post if you have more questions!

I just realized this - if you plug everything together in Cycles first, including a blank 50% grey image for the dirt map, you can then select that node, go to texture paint mode, paint in the black and white areas, smudge things around, save the image and check the Cycles render view to see the result. Flipping sweet!
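
For reference, that blank 50% grey image can also be made in a couple of lines of Python (the name and size here are arbitrary):

```python
import bpy

# Create a generated image filled with 50% grey to paint the dirt map into
img = bpy.data.images.new("dirtmap_paint", width=1024, height=1024)
img.generated_color = (0.5, 0.5, 0.5, 1.0)
```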

NiZu - The rotated UV faces problem I had (each face was rotated differently) has not recurred. In the end, I started a new project and began again from scratch, which was also what I’d done on the problem project. I thought I’d done everything the same, so I expected the problem to happen again, but to date it has not. I’d been using the script on a single object, to save the time of using multiple objects. I was experimenting with getting the inv AO to give nice edges, so I kept re-baking. I checked, and there weren’t multiple UV channels, and to be sure I added an attribute node, but it made no difference.

I have had other issues since, though. The main one is that the various AO images that are produced can quite quickly ‘disappear’ (from the compositor?), so that when viewing in rendered mode the output is the pink that says the image is missing. You can still view it in the UV editor and in 3D texture mode. Because the images are generated, you can’t reload them. Saving the image to file stops this problem, but when experimenting with the options that is annoying, and easy to forget.

The reason I had to keep experimenting was that I’ve had real problems trying to get the inv AO to give sharp-ish edges. Any tips on that?

I was being lazy earlier; the ‘shader’ node groups etc. are easy to set up, although it would be handy to have the script, shaders and blend node groups all together in one download.

I’ve experimented lots with the shaders and blend groups, and when I saw the videos describing these techniques, I came away thinking that I would switch immediately and use this setup with all materials from now on. I’m not sure that’s really the best way to go, though. The stencils work well in the kind of situations you were describing, but can be tricky with other objects and looks. The image textures for the 3 channels need to be quite carefully chosen so as to produce a good result. This is not a criticism, just an observation. Could you explain more where you think these techniques should and shouldn’t be employed? And maybe what criteria you use in selecting the various noise textures and stencils.

My last question: I am constantly experimenting with the multitude of options to balance quality and render speed. I really enjoy the experimental process, but there are just so many variables. My question is why you want to use these nodegroup setups, in the ways that you have explained, at all? I’m only talking about processing time now; I understand the re-usable presets and the quality of the results. Is each node tree recalculated per pixel, per AA sample, per material, per object, or per render? Presumably each time it needs to be recalculated it takes time and power, but equally, the fewer times this has to be done, the less it matters. So how does this all fit into your way of keeping render times down? Could you explain your overall process for this, please? I’m sure this will provide invaluable information to me and the Blender community, coming from someone of your experience.

One tiny last thing. Cheeky, I know. Could you spare 2 mins to critique some of my latest posts, please? Please please?

Hi,

About the images disappearing: all I can think is that they are not saved. The script generates AO, INVAO and AO2 images that are only intermediate steps to get the final dirtmap (_DIRT); those generally just clutter the image list, so it’s intended that you save everything manually (including the dirtmap).
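
If it helps, saving them all in one go only takes a few lines of Python (the output folder and the name filter here are just an example):

```python
import bpy, os

# Save the generated bake images to disk so they survive a reload of the .blend
out_dir = bpy.path.abspath('//bakes/')
os.makedirs(out_dir, exist_ok=True)

for img in bpy.data.images:
    if any(tag in img.name for tag in ('AO', 'INVAO', 'AO2', '_DIRT')):
        img.filepath_raw = os.path.join(out_dir, img.name + '.png')
        img.file_format = 'PNG'
        img.save()
```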

About INVAO (a.k.a. edge highlight) being hard to set up right: absolutely. I explain that in the tutorials: you need a small radius to avoid excessive white areas, but then the effect is not strong enough.
One answer is practice, and getting used to finding the exact right radius.

The other is improving the scripts: I plan to add 2 passes of inv-AO, one white with a tiny radius and one 75% grey with a bigger radius… that often helps.
There are also other methods which could be included (also mentioned in the tutorials). The best solutions would require a real coder to write some ad-hoc edge-detection code that takes the size of the face into account, to give the edges the right amount of highlight… maybe one day in OSL :slight_smile:

How do you pick the right grungemaps, and when do you use this technique rather than others? Huge question; I think it would take a full project tutorial on a DVD to explore the subject better than I could in the videos I’ve already made.

And also, yes, it’s not a method to use always. In fact in recent months, being sort of comfortable with how this method works… I got much more into taking good photos, doing good work in Gimp and making clean, useful textures for ‘flatter’ materials (= using only 1 image per channel and far fewer node blending operations).

In a way, those methods are for “synthesizing” from scratch, and they work for building stuff from nothing, like in sci-fi pieces, while using more photos works better for re-creating stuff that already exists in reality (contemporary buildings, etc.).

But the real bottom line, I think, is… you have to learn both… then it’s easy to pick the right one for the job!
(…So I’m doing more camera/Gimp practice now, and I hope to have a tutorial on that on YT sometime soon.)

About the technical question of speed: first, plain and simple, they’re (often) slower to render but faster to create! Make a good one and adapt it to 100 similar objects… without needing to unwrap and restart from scratch for each piece.
It’s about saving some human work and recycling it for new assets, more than about technical efficiency :smiley:

Second: I think how much slower these methods are really depends a lot on the render engine; they use simple blending ops (simple math), not raytracing dirtmaps every frame or other crazy stuff…

So they might be quite slow for a renderer like progressive Cycles, where 1000 samples mean the material is recalculated fully 1000 times, including the color channel. But do you really need to recalculate that color information that much? You need 1000 samples for GI or glossy, but the color channel is clear in 10 passes…
We’ll see how things progress; the non-progressive mode in Cycles allows just that, but it’s bad at solving GI or glossy (…I heard it got better recently, so let’s see…).

Thanks for the info, NiZu.

I did look at the script; I might try adding the 2nd inv-AO you suggested, it’s a good idea. I also thought about modifying the way the contrast works.

Interesting comment about the colour channel. Are you trying to hint? I already use multilayer when needed, but I’m going to try it.

Hi,
Hinting? I’m not hinting at a technique to bypass these issues.
Multilayer renders and passes won’t help much with this, IMHO. Do you mean separate color and light passes with different settings? Even if that speeds things up in some way, it’s very convoluted and not very practical.

The only practical multilayer workflow I can think of is rendering characters with full GI and progressive sampling, and backgrounds with direct + AO, spec only, non-progressive (as with the environment in this thread).

I did tests on denoising light passes, but the results weren’t any good (you need clean edges in your render anyway or the bilateral blur won’t work well; Cycles samples everything (big planes and edges) equally, and that’s no good. With V-Ray I did get some great results).

What I meant is that it’s the integrator that matters. Direct-light-only renders with non-progressive sampling are fast but limited, and I can’t be sure how much faster they could be with other integrators: only a raytracing coder can say for sure.
The above makes sense in theory, but I fear it conflicts with the nature of Cycles (being brute force and interactive) and the need to make it work on GPUs… Making Cycles faster for large node trees might not be a wise priority (even if I’d personally love that!).