Actually, you don't need to enable any passes manually. When you render, the required passes are automatically enabled before rendering, and then automatically reverted to your settings afterwards.
I'm unable to replicate a situation where the automatic enabling of the necessary passes doesn't work, so I think your resolution may have been something else. If you notice this behaviour again, please send the scene to the support email and I'll take a look.
I just finished a project where I used various denoising techniques, including motion-channel-based ones. I have to say that it sometimes works perfectly, but it can also fail. I tried the OptiX denoiser, but it was far worse than the classic compositing tools.
If someone could only bring the fantastic 3D outputs and the compositing tools together, it would be optimal. But honestly, Blender needs to get a lot better at pre-denoising in order to do animations.
Yes, there are various motion-channel-based techniques, but all have limitations unfortunately. I'm still working on adding a temporal stabilizer, so once that's done you shouldn't need to employ your own techniques. It will still have unavoidable limitations of course, but with foresight most of these can be bypassed (for example, putting moving parts on their own view layers so you can give only small parts of the scene more samples and dramatically lower the samples for the static elements).
The interesting part would be having that in compositing: either before saving out the EXR channels, or afterwards (which is more logical).
I'd even be interested in seeing that in other software like Fusion or Nuke.
Yes, that's the bit that's holding me up. I can temporally denoise the render layer cache for all passes, which is extremely slow, but means that if ultra denoising mode is selected, each pass can be temporally stabilized before going out to other software. Alternatively, I could just do it for the end result (whatever goes into the composite node), which would mean it could be used for any render engine's results (Redshift, Octane, FStorm, etc.), but would also mean it would need to be re-calculated every time the tree is re-published.
I might just go with a standalone operator which works on any image sequence or movie file initially, otherwise I'm going to end up going down the rabbit hole of also having to stabilize the standard caches to ensure they don't add any flicker when publishing in fast mode. This way you'd render, do the compositing either in Blender or some other software, and then load in the movie file or image sequence to temporally stabilize it afterwards.
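I don't know the add-on's internals, but the core idea of a standalone temporal stabilizer for an image sequence can be sketched as a per-pixel median over a sliding window of frames, which suppresses single-frame flicker while leaving stable pixels untouched. This is a minimal numpy sketch; the function name and parameters are hypothetical, not Turbo Tools' actual API.

```python
import numpy as np

def temporal_stabilize(frames, radius=1):
    """Suppress frame-to-frame flicker by taking a per-pixel temporal
    median over a sliding window of 2*radius + 1 frames.

    frames: (N, H, W, C) float array representing an image sequence.
    Returns a stabilized array of the same shape.
    """
    frames = np.asarray(frames, dtype=np.float64)
    n = frames.shape[0]
    out = np.empty_like(frames)
    for i in range(n):
        # Clamp the window at the sequence boundaries.
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out[i] = np.median(frames[lo:hi], axis=0)
    return out
```

A real implementation would also need motion compensation (via vector passes or optical flow) so that moving content isn't smeared, which is presumably where most of the difficulty lies.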
Denoise the Diffuse total, then multiply it with the Diffuse color.
Denoise Transparent total output.
Denoise Reflection/Glossy total output.
Comp these.
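The per-pass recombination described above can be sketched in numpy. The denoiser here is a simple box blur standing in for whatever denoiser you'd actually use (OIDN, OptiX, Neat Video, etc.), and all function names are illustrative, not part of any tool mentioned in this thread.

```python
import numpy as np

def box_denoise(img, k=3):
    # Placeholder denoiser: a k-by-k box blur over the spatial axes.
    # Stands in for a real denoiser such as OIDN or OptiX.
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def comp_passes(diffuse_total, diffuse_color, transparent, glossy):
    """Denoise each pass separately, then recombine:
    denoised diffuse total * diffuse color + transparent + glossy."""
    return (box_denoise(diffuse_total) * diffuse_color
            + box_denoise(transparent)
            + box_denoise(glossy))
```

Denoising each pass before recombining is the point: the denoiser can be tuned per pass to the distinct noise character of GI, transparency, and glossy reflections.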
Indeed it takes time, but you really need that control, because the noise is always different in each of the three types. GI noise is fairly flat, Transparent noise is complex, and Reflection often has fireflies from specular highlights.
So I need to take care of each one individually.
I have a workstation with a fast RTX card where my comp work is rendered, but that can take a few minutes per second of footage, depending on the complexity of the node tree.
Yes, Turbo Render already does that depending on the denoising mode chosen, as well as a load of other stuff. It's a question of whether to temporally stabilize the passes individually after denoising, or to just offer it as a process that works on the single-pass end result (image sequence/movie file). It will probably have to be the latter initially, but I'm still trying out solutions to make the former fast.
Hi, hope you're all well. Just wanted to let you know that version 2.1.3 of Turbo Tools is now available for download from your library (link on your receipt).
This update has several performance improvements under the hood, and also fixes the recently introduced bug where cache files were kept for every rendered frame even when the Turbo Comp 'animation' option was disabled. Now, if you disable the 'animation' option, only the most recently rendered frame's cache files will be kept on disk. This is useful for ensuring you don't use disk space unnecessarily when rendering still images, and also lets you save disk space when rendering animations if you don't need the full animation's cache files in Blender's compositor or a 3rd-party one.
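The cache-retention behaviour described here amounts to pruning every frame's cache files except the most recent one. A minimal sketch of that logic, assuming a per-frame file naming scheme like `cache_0001.exr` (the actual names and layout used by the add-on are not documented here, so everything below is hypothetical):

```python
import os

def prune_frame_caches(cache_dir, keep_frame, prefix="cache_", ext=".exr"):
    """Delete cache files for all frames except the most recently
    rendered one. The 'cache_NNNN.exr' naming scheme is an assumption,
    not Turbo Tools' actual on-disk layout.
    """
    keep_name = f"{prefix}{keep_frame:04d}{ext}"
    for name in os.listdir(cache_dir):
        if name.startswith(prefix) and name.endswith(ext) and name != keep_name:
            os.remove(os.path.join(cache_dir, name))
```

Running this after each frame keeps disk usage flat regardless of animation length.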
This reduction in hard drive space requirements also means you can use a smaller external storage device to reduce SSD reads/writes if you don't have a mechanical drive available.
Decided I needed to do something a bit cooler than a kitchen for the 3D World magazine Turbo Render tutorial. My first robot! The full scene will be available with the magazine.
Extremely excited. Temporal stabilization is almost complete, and the results are way better than I expected. It can also temporally stabilize individual passes, which means it'll even work with complex compositor set-ups.
Here's a test with @RobertLe's incredibly difficult to render train scene. Originally the scene needed 1024 samples to get decent results with Neat Video. Below is 64 samples with Turbo Render and the Temporal Stabilizer enabled (coming soon).
Hi Michael, I'm curious about your tool, since you claim it works with features already in Blender. I guess the core of it is denoising single passes, which of course is great. But how does it work with reflections?
I mean, I can have great results with the technique, but as soon as I look at some diffuse surface reflected in a mirror (which has no glossy albedo detail to be multiplied), all the bells and whistles are gone.
The temporal phase is part of the publishing operation, so after rendering, the farm would need to call the publishing operation in the same way as above. The result will then be saved out to whatever you have set in Blender's output options (jpg, mpg, etc).
The farm would just need to set the cache directory prior to rendering.