Viewport CPU denoising while the GPU is calculating?
We will see. Intel will hopefully soon release a version with depth support; I'll have a look at the performance then. Patience! In its current state, as you can see in the videos above, the image converges in a few seconds anyway, so the March update coming today will already make fast previews possible.
You keep saying that the image converges in seconds, but it just goes from noisy to slightly less noisy. I think most users would consider a converged image to be quite clean, with just a bit of residual noise.
Well, the point of a preview is to get fast feedback on the overall mood: whether you should adjust colors, move objects a bit, etc. For that, it's good enough.
I developed that feature in less than a week and just wanted to discuss it at first. As users liked it, I agreed to release it quickly, because I think users know better than I do what is good for them.
The forum is limited to 5 MB files, so to compress the videos the way I wanted, I made them short. If you have a video upload service that doesn't recompress, I can upload a longer video that reaches a noise-free image, although for that an F12 render is better.
In the videos above, master needs twice the time to reach the same noise level. If you want noiseless images, you can simply wait longer; with E-Cycles you'll wait half as long.
Man, I can already imagine, a couple of years from now, the possibility of serious real-time rendering in this kind of render engine…
It would be revolutionary…
It will change everything: the movie industry, video games, augmented reality, new gadgets mixing photos or video with virtual reality without a visible distinction… I can already imagine it…
wow, just wow
The new March feature update of E-Cycles is available for download. Both final render and viewport are now up to 2x faster.
- E-Cycles now cleans up noise about 2x faster than master in the viewport.
- The quality of each sample is much higher: only about a fourth of the samples are required to reach the same quality as master, while each sample takes about twice the computation power. This gives very good feedback on the scene during camera moves while staying fluid.
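The back-of-envelope arithmetic behind that 2x claim can be sketched like this (the numbers are illustrative, not measurements):

```python
# Illustrative speedup arithmetic: 1/4 the samples at 2x the cost per
# sample still halves the total render time for the same noise level.
master_spp = 64           # samples master needs for a target noise level
cost_per_sample = 1.0     # arbitrary time unit per master sample

ecycles_spp = master_spp / 4        # ~1/4 the samples for the same quality
ecycles_cost = 2 * cost_per_sample  # but each sample costs ~2x

master_time = master_spp * cost_per_sample
ecycles_time = ecycles_spp * ecycles_cost
print(master_time / ecycles_time)   # → 2.0, i.e. "up to 2x faster"
```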
Note that this is a very young feature. It is primarily focused on visualization of interiors using regular path tracing and CUDA (branched path tracing is not supported yet).
Here is an example in full-screen Full HD on a single 1080 Ti:
You can buy E-Cycles on Gumroad now, based on Blender 2.8x or 2.7x, to get all the features coming in 2019. There is also a very affordable monthly option for 2.8x, currently at the reduced price of 9.99€/month until the 4th of March (people joining now will benefit from the reduced price for the following month).
Some more examples
Here is a comparison after 2 seconds of rendering with master (noisy, 64 spp) and E-Cycles (much cleaner, 7 spp):
And E-Cycles live viewport renders of different Evermotion scenes:
It also works with lots of refraction bounces through water, glass, subsurface scattering, etc.
The new denoiser
A 3-second render of the BMW scene
7 seconds at 100% (4 times the pixels)
You can also learn how to build your own version of Blender. It covers the modifications made in the February version of E-Cycles (including the new denoiser), plus new modifiers, how to streamline the UI, and how to add patches available in branches and the patch tracker.
The new version's boost has worked well with my RTX 2060 in the render preview: from 7:11 down to 5:08. Note that currently Cycles doesn't work very well with RTX cards.
I can imagine that on a GTX card, rendering with E-Cycles feels like using Octane. Mathieu, you did wonderful work!
Great that it made your RTX card work better!
bliblubli, where is the new denoiser?
In the compositor: Add node → Filter → Denoise.
I’ll make a tutorial soon on how to use E-Cycles to get the best performance and clean renders with the AI denoiser.
I have it, but the denoiser effect in the compositor is extremely strong…
You have to connect all the inputs to get quality results. In the passes tab, activate “Denoising Data”, then feed the Noisy Image output to the Image input, Denoising Normal to the Normal input, and Denoising Albedo to the Albedo input.
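If you prefer scripting, that wiring can be sketched with Blender's Python API (bpy). This is a minimal sketch assuming a recent build that has the compositor Denoise node and Cycles' denoising-data passes; the socket names match those passes. Run it from Blender's text editor (bpy only exists inside Blender):

```python
# Sketch: wire Cycles' denoising-data passes into the compositor Denoise
# node, matching the manual setup described above. Assumes a Blender build
# with the Denoise node; socket names are the denoising-pass outputs.

def wire_denoise(tree, render_layers, denoise, composite):
    """Connect the denoising passes of a Render Layers node to a Denoise node."""
    links = tree.links
    links.new(render_layers.outputs["Noisy Image"], denoise.inputs["Image"])
    links.new(render_layers.outputs["Denoising Normal"], denoise.inputs["Normal"])
    links.new(render_layers.outputs["Denoising Albedo"], denoise.inputs["Albedo"])
    links.new(denoise.outputs["Image"], composite.inputs["Image"])

try:
    import bpy  # only available inside Blender
    scene = bpy.context.scene
    scene.use_nodes = True
    # Same as ticking "Denoising Data" in the passes tab:
    scene.view_layers[0].cycles.denoising_store_passes = True
    tree = scene.node_tree
    rl = tree.nodes.new("CompositorNodeRLayers")
    dn = tree.nodes.new("CompositorNodeDenoise")
    comp = tree.nodes.new("CompositorNodeComposite")
    wire_denoise(tree, rl, dn, comp)
except ImportError:
    pass  # running outside Blender, nothing to wire
```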
Make a tutorial, please
So here is a quick tutorial to get maximum quality; it is derived from the technique used by Theorie Animation. It renders in 4 sec/frame as you can see, including preprocessing and denoising:
I’ll upload the nodegroups later.
lol, 4sec… the first time I rendered that one, it took over 10 minutes back then…
The addon to get AI denoising setup in one click is nearly done.
One day left to benefit from the reduced price. It will be 14€/month starting tomorrow at midday, Berlin time. The full 2.8x 2019 updates are also reduced.
The addon is up. Activate it, then at the bottom of the render panel choose a quality level (1 for fast, up to 3 for very high quality) and hit “Create node tree”.
I used quality level 3 for the 4-second renders of the classroom scene, so it's still fast but requires more memory.
Edit: a quick tutorial to use the addon is also up now. Some hours left to benefit from the reduced price.