Brecht's easter egg surprise: Modernizing shading and rendering

Hi, set your DISTANCE to like 3m or 4m!

Indeed, with such an empty scene, you have to set a very high value. The other thing is the world with uniform lighting, which of course will make it look fake. To test a feature for a real use case, you have to use it in a real use case. Most of the time, a real use case will have an HDRI and many more details/assets all over the place.
There is an example with the Scandinavian interior from Chocofur that I made for my funding: https://blenderartists.org/forum/showthread.php?436581-Cycles-1-5-to-2x-faster-interior-rendering-for-GPU-and-CPU . I could have done it better, but it already shows some of the feature's potential. In this case, the AO approximation was set to 2 bounces and cut the render time in half on a Vega 64.
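For anyone who wants to try this setup from Python instead of the UI, here's a minimal sketch. The property names are taken from the 2.79-era API (Simplify panel plus world settings), so double-check them on your build; the 4 m distance and 2 bounces are just the values discussed above:

```python
import bpy

scene = bpy.context.scene

# AO distance: rays longer than this count as unoccluded
# (the DISTANCE setting mentioned above), in scene units/meters.
scene.world.light_settings.distance = 4.0

# The AO bounces approximation lives in the Simplify panel: after this
# many bounces, GI is replaced by the much cheaper AO approximation.
scene.render.use_simplify = True
scene.cycles.ao_bounces_render = 2
```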

facepalm

@zeauro… the scene with Suzanne was a simple example file because @bliblubli asked for a test file… obviously I tried with fully detailed scenes and still got these results, and since I do interiors my sky will always be strong and very bright most of the time.

@bekic.bojan @bliblubli I work in real-world scale units; setting 3 or 4 meters makes no sense, since everything would be full black.

Anyway, as Brecht explained, it's not the feature I thought it was; it has another goal, a heavy approximation to be used without GI. That's not my case, so it's OK, I got it. I'll keep using a subtle world AO for my work.

Thanks

I've now changed it so the AO factor can be used to adjust the intensity of the AO bounces approximation. That should help in this test scene, and maybe in more complex ones too.
https://developer.blender.org/rB171c4e982
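For reference, the AO factor being referred to should be the one in the world's light settings; a quick sketch (2.79-era property names, so check your build):

```python
import bpy

world = bpy.context.scene.world
# Scales the intensity of the AO bounces approximation:
# values below 1.0 darken it, values above 1.0 brighten it.
world.light_settings.ao_factor = 0.5
```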

Booom. I love open source software. Thanks Brecht.

That sounds great, many thanks Brecht.

By the way Marco, as I understood it, this feature very likely resembles your old AO trick.

Hello Brecht, how difficult would it be to make a node with functionality similar to “Object Info > Random”, but randomizing per loose part instead of per object?

This is very useful when making procedural wood textures or the like. It even removes the need to have every mesh in a different object.

I don’t know if I made myself clear… Anyways, this is one of the few features I miss from modo :wink:

Where does one make feature suggestions?

@charbelnicolas
You can do this by using vertex colors as a factor: random vertex colors per loose part can be generated with a Python script, something like the sketch below.
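A rough, untested sketch of that script, written against the 2.79 bmesh API (in 2.80+ vertex colors take a fourth alpha component; run it in Object Mode on a mesh object; the layer name is made up):

```python
import random
import bpy
import bmesh

obj = bpy.context.active_object  # the mesh to tag, in Object Mode
bm = bmesh.new()
bm.from_mesh(obj.data)

# One vertex color layer to hold the per-loose-part random value.
color_layer = bm.loops.layers.color.new("LoosePartColor")

visited = set()
for seed in bm.verts:
    if seed.index in visited:
        continue
    # Flood-fill one loose part over vertex/edge connectivity.
    island = [seed]
    stack = [seed]
    visited.add(seed.index)
    while stack:
        vert = stack.pop()
        for edge in vert.link_edges:
            other = edge.other_vert(vert)
            if other.index not in visited:
                visited.add(other.index)
                stack.append(other)
                island.append(other)
    # Same random gray for every loop of this loose part.
    value = random.random()
    for vert in island:
        for loop in vert.link_loops:
            loop[color_layer] = (value, value, value)  # add a 1.0 alpha in 2.80+

bm.to_mesh(obj.data)
bm.free()
```

In the shader you'd then read that layer with an Attribute node ("LoosePartColor") and plug it into whatever you would otherwise drive with Object Info > Random.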

Possible new features soon coming to Cycles.

Direct and Indirect passes for volumetric scattering
https://developer.blender.org/D2903

Alpha transparency for parts of backgrounds behind glass
https://developer.blender.org/D2904

In my opinion, the first one fills a glaring hole in the compositing system that was there for years, so good on Lukas for getting it done. The second patch should also be useful for compositing.

Lol. I just saw the “Developers ask us anything” question about the glass transparency,
and Lukas & Brecht already shat out a patch for it.
Gotta love 'em.

Though it seems to be without the mentioned refraction pass for now.

Edit:
Ok, this is some dedication I’d say…
The “Developers ask us anything” ended at around 15:00 local time (video description).
…and Lukas created the revision at 16:47.

This is almost comical.

oh, how nice!
this finally ends the era of having to work around glass's alpha transparency

It looks like Cryptomatte support could soon be in Master, there is a staging branch on the developer site.
https://developer.blender.org/diffusion/B/browse/temp_cryptomatte/;12076595992d706e05341fe99df3e298fe940f6a

It contains a lot of new commits from Stefan Werner, with one more from Lukas Stockner. I know there's been a lot of noise over the potential power it will bring to compositing, so it can only be a good thing.

A new paper that fixes the black edges that appear when using normal mapping:
http://cg.ivd.kit.edu/publications/2017/normalmaps/normalmap.pdf
It would be really nice to have this in Cycles.

^ thanks :wink:

Looked like an interesting approach, until I got to figures 16 and 17.

The scenes that used their algorithm just had no contrast at all compared to the existing methods already in use (in the areas of actual normal map detail). Those older methods get rid of black areas too; I wonder if the researchers ever thought of looking up how an artist expects a normal map to look.

A normal map is expected to look exactly like the same surface would if it were modelled with actual fine geometry - I don't see much room for artistic interpretation there. That is the comparison this paper should have made: a high-res reference render against renders using normal maps generated from that high-res mesh.

+1

If you need more contrast, you exaggerate the normal map.

Fig. 11 does that comparison. Their technique isn't perfect, but (at least in the edge cases, and in my opinion) it's way better than standard normal mapping.

I took a look at the code Lukas is working on at github.com/lukasstockner/blender/tree/theory-public
He's working for a studio on a version that supports:

  • a Cycles network renderer (besides local CPU and GPU, you can add a remote render device based on an IP address)
  • an adaptive sampler as well, but with no info on how to set it up (it would be nice if this came back)

It's a work in progress, and it got my attention because I needed to render something larger than usual.
Well, it doesn't work on my side, and from the BCon 2017 talks on YouTube I hear the problems with Cycles netrender are that it quickly consumes a lot of network bandwidth and that denoising isn't working.

It seems the reason it is being developed is to have different servers working together at the tile level, i.e. multiple machines can work together on one frame. Sure, it sounds great, but I've also seen a few Python scripts that can divide an image into multiple smaller ones (used for extremely large renderings when you want to print at A2 size or larger); such a script could also divide normal images, use common Blender distributed rendering (like https://www.youtube.com/watch?v=y3EcpkwLCFI ) and then recombine the images… and then maybe only denoising would remain to be solved. (Splitting up an animation instead of a still might require a bit of Python scripting.) A rough sketch of that tile-splitting idea is below.
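To illustrate, a hypothetical sketch of the splitting part using Blender's own border render; the grid size and file names are made up for the example:

```python
import bpy

GRID = 2  # 2x2 grid = 4 tiles; hand each tile to a different machine
scene = bpy.context.scene
render = scene.render
render.use_border = True
render.use_crop_to_border = True  # save only the tile, not the full frame

for row in range(GRID):
    for col in range(GRID):
        # Border coordinates are fractions of the frame, 0.0 to 1.0.
        render.border_min_x = col / GRID
        render.border_max_x = (col + 1) / GRID
        render.border_min_y = row / GRID
        render.border_max_y = (row + 1) / GRID
        render.filepath = "//tile_r%d_c%d" % (row, col)
        bpy.ops.render.render(write_still=True)
```

The tiles could then be stitched back together in the compositor or with an external tool; denoising would still be the tricky part, since each tile is denoised without seeing its neighbors.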

…well, I kind of wonder whether there would be benefits to Cycles doing network rendering itself, compared to some scripts plus the YouTube tutorial,
as there seem to be several methods now to do network rendering (there are also some Blender render farms, and advanced tools like SheepIt).