Sure, the comparison shows a good improvement, but those were two renders with the same sample count. What I understood is that you are proposing to compare a 1200-sample still (+denoise) against a 400-sample one (+animation-aware denoise), since the other 400+400 samples would in theory be computed for frames that would not exist, except for that denoising algorithm.
The samples would simply be divided among the buffers. If you need more samples, you just set the number to a higher value. In a sense, you could also argue that animation denoising is making use of
`sample count * N(frames)` samples, as opposed to
`sample count / N(buffers)`.
That’s not to mention that I’m not talking about multiple frames, but just a single frame (since what I propose is a solution for still images).
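As a toy illustration of that sample accounting (the buffer and frame counts below are made up for the example, not taken from any real render):

```python
# Toy sample-budget accounting, with illustrative numbers.
total_samples = 1200
n_buffers = 3                       # hypothetical number of denoising buffers

# Still-image denoising: the sample budget is split among the buffers.
samples_per_buffer = total_samples // n_buffers   # 400 per buffer

# Animation-aware denoising: neighbouring frames also contribute, so the
# information available per frame is roughly sample_count * n_frames.
sample_count = 400
n_frames = 3
effective_samples = sample_count * n_frames       # 1200

print(samples_per_buffer, effective_samples)
```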
Nope, sorry, that is exactly what it does. To be precise, there’s some math for the variance buffer, but it’s not exactly advanced either.
Also, yes, stacking images is technically inferior to rendering with the full count, but if you rendered at 256spp and realized that it's still too noisy, rendering an extra 512spp and merging to get 768spp will look better than deleting the old render and re-rendering at 512spp…
Also, it’s a bit hidden at the moment, but there is a command line option for starting the render at a sample offset, which means that stacking the original and an offset version with the same seed gives exactly the same result as rendering at the higher count in one go.
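Mathematically, such a merge is just an average weighted by each image's sample count. A minimal sketch (NumPy arrays stand in for real EXR pixel data, and `merge_renders` is my own illustrative name, not a function from any tool):

```python
import numpy as np

def merge_renders(img_a, n_a, img_b, n_b):
    """Average two renders, weighting each by its sample count."""
    return (img_a * n_a + img_b * n_b) / (n_a + n_b)

# Toy 2x2 grayscale "renders" standing in for the 256spp and 512spp passes.
first  = np.array([[0.2, 0.4], [0.6, 0.8]])   # 256 samples
second = np.array([[0.5, 0.4], [0.6, 0.5]])   # 512 samples

# The result behaves like a single 768spp render of the same scene/seed.
merged = merge_renders(first, 256, second, 512)
```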
Wow, I didn’t know that. Could that be used for resumable rendering? If yes, then it should be exposed in the UI.
It’s not really usable for resumable rendering yet because it won’t resume from a previous image - instead, you get one image for e.g. samples 1-100 and one for samples 101-200, and you have to merge them separately.
An option for true resumable rendering is on my ToDo list, but there are some usability questions (if you modify the scene in between, the result will be weird - how do you prevent that, or do you just tell users not to do it?)
I’d favor just telling users to not do that. If the “weird” result looks interesting, then some users might want to do that sort of thing on purpose.
I agree. Pausing a render without changing anything can be very useful. Sometimes during a long render (several hours or even days) you might want to pause it while working on something else and then, when finished, let the render process continue.
It would be even better if we could pause a single GPU. If you are rendering a time-consuming image on two GPUs and want to start working on something else, you just pause one GPU, freeing resources for the other work. Then, when you have finished, you let that GPU continue working.
What I don’t get is that you can pause a viewport render, but not a full render.
If you set the number of samples to something high like 1000 and partway through the render you want to pause it, you can simply change the number of samples to one that is below the current sample count. When you are ready to resume, simply put the number of samples back up again and the render continues where you left off (as long as you haven’t changed anything).
So the question is - why can the viewport render do this - but the full render not?
On the viewport render you can even add samples on the fly. If you render with, say, 100 and get to the end of the render and aren't happy with the noise, you can simply add another 50 and it continues to render up to 150 samples from where it left off.
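The pause/resume-by-sample-target trick can be modelled with a toy progressive loop (pure Python, no Blender API involved; the class is purely illustrative):

```python
class ProgressiveRender:
    """Minimal model of a progressive renderer with a mutable sample target."""

    def __init__(self, target):
        self.target = target
        self.current = 0            # samples rendered so far

    def step(self):
        """Render one sample; return False once the target is reached."""
        if self.current < self.target:
            self.current += 1
            return True
        return False                # effectively paused

r = ProgressiveRender(target=100)
while r.step():
    pass                            # renders up to 100 samples

r.target = 50                       # "pause": target is below the current count
assert r.step() is False            # nothing more renders

r.target = 150                      # "resume": raise the target again
while r.step():
    pass
print(r.current)                    # continues from 100 up to 150
```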
What about CPU rendering? It might be useful to be able to free up half the threads from time to time so you can do other work (for those on 8 cores or more, that would allow full-fledged work in pretty much any other software, including a separate instance of Blender).
Yes, fully agree. This would be immensely useful.
If you’re on Windows you can use Process Explorer to reduce Blender’s priority (that’s what I use on my 2-core CPU - I just let Blender render in the background while I’m working in another instance). You have the option to pause the process from there too and resume it later if you like.
On Linux you can simply use the SIGSTOP and SIGCONT signals.
The drawback is that the memory is not freed.
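For example (a `sleep` process stands in for Blender here; for a real render you would use Blender's PID instead):

```shell
# Freeze and resume a process with job-control signals.
sleep 60 &
pid=$!

kill -STOP "$pid"                          # pause: process state becomes 'T'
paused=$(ps -o stat= -p "$pid" | cut -c1)
echo "after STOP: $paused"

kill -CONT "$pid"                          # resume: process runs again
resumed=$(ps -o stat= -p "$pid" | cut -c1)
echo "after CONT: $resumed"

kill "$pid"                                # clean up the stand-in process
```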
Can it be used for a LuxCore EXR sequence?
Or can the Cycles denoiser be used on a LuxCore render via node magic, for a single frame?
I find it interesting that so many are asking for Cycles denoising tech in LuxCore.
Are people disappointed with the results from LuxCore’s new denoiser by any chance (because it should also be noted that it is a WIP)?
I wonder how the OptiX denoiser would compare to the Cycles denoiser. I believe its license has been changed so it could now be shipped with Blender. If its results are better, then I don’t see the point of having another denoiser.
That would be pretty sad for everyone without an Nvidia graphics card, because they wouldn’t have a denoiser anymore. The OptiX denoiser is also not suited for animations, so that’s another reason to keep the current denoiser.
It all depends on how well it scales with higher sample counts and how it handles very small or otherwise subtle details (as I recall, early videos of it in action killed smaller details and gave the impression it is optimized for small sample counts).
Besides that, the current denoiser has a bit of room for improvement as seen with Lukas’ latest work (animation denoising is going to be a big deal for those needing a higher level of realism than provided by Eevee).
In addition, methods based on passes and general algorithms (as opposed to neural networks and machine learning) still have the advantage of being able to infer a result for literally any scene you throw at them (because there’s no risk of encountering something poorly covered by the training data).
It understands noise from path tracing and how to remove it. The only issue you can run into is when you alter the noise (running some postprocess before denoising, like chromatic aberration) or if you use bidirectional path tracing (in LuxCore, for example). For detail preservation it can use normal and albedo passes, but it can do a pretty good job even with only the beauty pass (I use the standalone version with LuxCore because BCD is useless for low-sample renders, and that’s got nothing to do with it being a WIP). The Cycles denoiser is actually pretty good compared to OptiX, but OptiX is so fast that we could use it in viewport rendering in real time (I guess)…
Let’s hope Nvidia is next.