A novel attempt to speed up rendering using AI technology

http://www.airenderer.com/

I’m not sure how far they’ve gotten, though, as the examples still look quite splotchy in spots and some elements in the video examples blow out.

Still, it’s free to try (and in the browser at that), but I wouldn’t expect miracles.

Truly intelligent, or intelligent-sounding hype?

This is probably a convolutional network trained to be a denoiser, so not a renderer at all.
But neural networks (let’s not call them AI all the time) are a great way to build well-performing image filters.
If all the denoising data is available from Blender Cycles, then such a network can be trained relatively easily by feeding it lots of pairs of undersampled and fully converged renders.
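
To make that concrete, here is a rough sketch of how such pairs could be produced from Cycles. It runs inside Blender; the paths and sample counts are just illustrative assumptions, not anything this site has published:

```python
# Hypothetical sketch: render the currently open scene twice, once
# undersampled (noisy) and once near-converged (reference), to produce
# one training pair. Run inside Blender's Python console or as a script.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

for samples, name in [(16, "noisy"), (4096, "reference")]:
    scene.cycles.samples = samples
    # "//" makes the path relative to the .blend file
    scene.render.filepath = f"//training_pairs/{name}_0001.png"
    bpy.ops.render.render(write_still=True)
```

Repeat that over many scenes, lighting setups, and materials and you have a training set.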

In this case, the denoiser only takes final images as input, so it doesn’t have what it needs to work properly.

@Ace Dragon
The developer has already started a thread here, with a few tips:
https://blenderartists.org/forum/showthread.php?420803-A-new-rendering-engine-that-uses-AI

SunBurn: So it seems, just that Blender Tests happens to be a very unusual place to announce a new product (I usually don’t check there as a news source).

@Ace Dragon

SunBurn: So it seems, just that Blender Tests happens to be a very unusual place to announce a new product (I usually don’t check there as a news source).

Me too. I found it accidentally, but I thought I should link to it just in case…

Looks like nothing fancier than a bilateral blur to me.
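
For anyone unfamiliar: a bilateral blur averages nearby pixels but down-weights pixels whose colors differ a lot, so it smooths noise while keeping edges, which is why denoised renders can have that look. A minimal example using OpenCV (the file names and parameter values are made up for illustration):

```python
import cv2

noisy = cv2.imread("render.png")  # hypothetical noisy render
# d = neighborhood diameter; sigmaColor/sigmaSpace control how strongly
# similar colors and nearby pixels are averaged. Edges survive because
# pixels with very different colors get tiny weights.
smoothed = cv2.bilateralFilter(noisy, d=9, sigmaColor=75, sigmaSpace=75)
cv2.imwrite("render_bilateral.png", smoothed)
```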

I am pretty sure that tremendous results could be achieved with this kind of technique. It would take thousands to millions of examples covering all possible styles to produce amazing results, which I believe should be doable with the necessary resources.

If that is how things work here, then I wonder if it will ever be able to handle the huge variety of weird and wonderful science-fiction and fantasy worlds that people will create (most of which have no analogue in the real world, for starters).

Different paper, but machine learning is indeed a very promising approach for denoising:
http://cvc.ucsb.edu/graphics/Papers/SIGGRAPH2017_KPCN/
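
The core trick in that paper, kernel prediction, fits in a few lines: instead of outputting colors directly, the network predicts a normalized filter kernel per pixel and applies it to the noisy input. A heavily simplified sketch below; the real model is far deeper and also takes auxiliary feature passes like albedo and normals, which this toy version omits:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 21  # per-pixel kernel width (the paper uses 21x21)

class KernelPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # toy backbone; the paper's network is much deeper
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, K * K, 5, padding=2),  # one K*K kernel per pixel
        )

    def forward(self, noisy):
        b, c, h, w = noisy.shape
        # softmax makes each per-pixel kernel a valid weighted average
        kernels = torch.softmax(self.net(noisy), dim=1)      # (b, K*K, h, w)
        patches = F.unfold(noisy, K, padding=K // 2)         # (b, c*K*K, h*w)
        patches = patches.view(b, c, K * K, h, w)
        return (patches * kernels.unsqueeze(1)).sum(dim=2)   # filtered image

out = KernelPredictor()(torch.rand(1, 3, 64, 64))  # -> (1, 3, 64, 64)
```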

That’s the way neural networks are trained. They get both the input and the output picture, meaning the noisy and the noise-free version. From a huge number of those pairs, the network learns patterns in an abstract way, so that it can handle inputs it hasn’t seen yet. However, it needs a lot of variation in the training data to achieve that. If it has never seen water, or semi-transparent objects in general, I am pretty certain it is going to fail on them.
The training itself is not really the difficulty in this case. It takes time and certainly needs some tuning, but it is a fairly standard use case for a (convolutional) neural network. The tricky part is getting enough good training data.
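
For the curious, the whole supervised setup is small enough to sketch. Random tensors stand in for real (noisy, converged) render pairs here, and the network size and learning rate are placeholder choices:

```python
import torch
import torch.nn as nn

# tiny convolutional denoiser: noisy RGB in, denoised RGB out
denoiser = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    # stand-ins for a batch of undersampled renders and their converged versions
    noisy = torch.rand(8, 3, 64, 64)
    clean = torch.rand(8, 3, 64, 64)
    loss = loss_fn(denoiser(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```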

My first impression was that this looked like useful technology that could be borrowed to improve the new denoiser in Cycles; then I got to the training requirement.

I don’t think many people here would have the time to render hundreds of noise-free images just so they can start using its denoising capability (and having the BF render them all and bundle them into Blender would hugely increase the download size). In fact, that requirement seems like what might keep neural-network-based solutions off individual users’ PCs, and out of apps like Blender, for the near future: the sheer amount of data needed, the heavy processing required, and the robust internet connection that would be needed if the first two points were worked around by running it server-side.


You should read more and write less, because what you wrote is nonsense. While training is heavy, deployment is light with a neural network. You train the network once, then distribute the already-trained network, as a compositor node for example. It shouldn’t be more than a few megabytes. For example, what you download to your phone when you make voice recognition available offline is actually a high-end neural network.
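
To put numbers on that: only the trained weights need to ship, and for a small convolutional denoiser they are tiny. A sketch reusing the toy network from the training example above (left untrained here, since only the file size is the point):

```python
import os
import torch
import torch.nn as nn

# same tiny convolutional denoiser as in the training sketch earlier
denoiser = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)

torch.save(denoiser.state_dict(), "denoiser.pt")
print(os.path.getsize("denoiser.pt") / 1e6, "MB")  # well under 1 MB for this toy net

# a user only needs the weights file and this forward pass, no training code
denoiser.load_state_dict(torch.load("denoiser.pt"))
denoiser.eval()
with torch.no_grad():
    denoised = denoiser(torch.rand(1, 3, 256, 256))  # stand-in for a noisy render
```

A real production network would be bigger, but still in the megabytes, not the gigabytes the training data occupies.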

But… isn’t this just a denoiser? (And based on the results I see on the site, I think it’s worse than the existing implementation.)

Cheers.

AI applied to Monte Carlo ray-tracing algorithms should aim to predict the true value at a point in the recursion without calculating the recursion itself. AI should fight the diminishing returns of Monte Carlo ray tracing.
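
For context, the diminishing returns in question: a Monte Carlo estimate’s error only shrinks like 1/sqrt(N), so halving the noise costs four times the samples. A tiny demonstration, estimating the integral of x^2 over [0, 1] (true value 1/3):

```python
import random

for n in [100, 400, 1600, 6400]:
    # basic Monte Carlo estimate of the integral of x^2 on [0, 1]
    estimate = sum(random.random() ** 2 for _ in range(n)) / n
    print(n, abs(estimate - 1 / 3))  # error roughly halves as n quadruples
```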

The quality of the output depends on the quality and quantity of the training data. I agree that these results look worse than those of other denoisers, but I believe they are simply not using enough, or good enough, training data.