Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks

New paper from Disney on Deep Scattering supported by neural networks:

http://www.drz.disneyresearch.com/~jnovak/publications/DeepScattering/

Stunning results, reducing render times from hours to seconds or minutes.

Two Minute Papers on this topic:

Very interesting. It makes one wonder how many other parts might benefit from neural networks, as light calculations seem to be the most difficult part of rendering.

Pretty cool

As cool as predictive algorithms for rendering and image manipulation are, I can pretty much guarantee that we’re not going to see any FNN, CNN, or PNN code in Blender any time in the near future.

Just out of curiosity, why do you think so?

Because NNs require either:

A.) Lots of training

or

B.) Lots of cached training data

Not to mention coders who are familiar with writing NNs, which aren’t exactly a dime a dozen.

Not to mention that the training data shown in the video appears to mainly cover cumulus clouds, so I’m left to wonder if you will need many times more images to create accurate depictions of the dozens of other cloud types that are out there (especially if you add the clouds seen on other planets).

Also worth noting: the learning algorithm is designed for one specific material (it would get a bit trickier to expand it to general volumetric rendering).

My question was meant in a more general way. For this particular topic, I would be surprised if a generally applicable solution would be found in the near future, something that works for all sorts of clouds and other volumetrics like fog. I would not believe it until someone rendered Cosmos Laundromat successfully with it :slight_smile: .

If it is trained for cumulus clouds, it could just as well be trained for other clouds.
While training takes some time, using a NN doesn’t take much time. And it’s not that training takes months for a NN; in the cases where I’ve trained NNs it’s more something like hours or days, which is not an unusual training time if I compare it to animation rendering.
Oh, and those GPUs are also excellent for NNs.

Now, is fog a cloud? Basically it is. What happens when the camera enters the cloud? Visualization thus depends on POV (proximity).
I haven’t read the paper, but does it consider that?

In this particular case, creating the training data is the bottleneck. Getting coverage for volumetrics in general would likely take a lot more.

Please read the paper; the NN just predicts the radiance for various shading configurations, it has nothing to do with the image or camera setup.
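
For anyone curious what “predicting radiance” roughly looks like in code, here is a minimal sketch in Python/NumPy: a small MLP takes some descriptor of the local density around a shading point plus a light direction and spits out a radiance value. To be clear, the descriptor size, layer widths and random weights are made up for illustration; this is not the paper’s actual architecture or feature encoding.

```python
import numpy as np

# Toy "radiance-predicting" network: maps a descriptor of the local density
# around a shading point plus a light direction to a single radiance value.
# Descriptor size, layer widths and the random weights are placeholders for
# illustration -- they are NOT the ones used in the Deep Scattering paper.

rng = np.random.default_rng(0)

DESCRIPTOR_SIZE = 64                  # e.g. density samples gathered around the point
INPUT_SIZE = DESCRIPTOR_SIZE + 3      # descriptor + light direction (x, y, z)
HIDDEN = 128

W1 = rng.normal(0.0, 0.1, (INPUT_SIZE, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, 1))
b2 = np.zeros(1)

def predict_radiance(density_descriptor, light_dir):
    """One cheap forward pass per shading point -- this is why inference is
    fast compared to brute-forcing multiple scattering."""
    x = np.concatenate([density_descriptor, light_dir])
    h = np.maximum(W1.T @ x + b1, 0.0)           # ReLU hidden layer
    out = W2.T @ h + b2                          # shape (1,)
    return max(out.item(), 0.0)                  # clamp to non-negative radiance

# Example call with dummy inputs.
descriptor = rng.random(DESCRIPTOR_SIZE)         # fake local density samples
light = np.array([0.0, 0.0, 1.0])                # light coming from above
print(predict_radiance(descriptor, light))
```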

Aha… “Monkey see, monkey do.”
:slight_smile:
Thank you.

From my limited exposure to Blender’s particle system, it is an absolute pig on resources and render times.

Rendering of Blender particle systems in Cycles is often done with the point density texture node.
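
For reference, this is roughly what that setup looks like scripted. It is only a sketch: “Emitter” and “Domain” are placeholder object names, and the node and socket names are from memory, so they may differ slightly between Blender versions.

```python
import bpy

# Rough sketch of the usual setup: sample a particle system through a Point
# Density texture and feed its Density output into a volume shader.
# "Emitter" and "Domain" are placeholder object names; node and socket names
# are from memory and may differ slightly between Blender versions.

emitter = bpy.data.objects["Emitter"]   # object carrying the particle system
domain = bpy.data.objects["Domain"]     # mesh (e.g. a cube) enclosing the particles

mat = bpy.data.materials.new("ParticleVolume")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

output = nodes.new("ShaderNodeOutputMaterial")
scatter = nodes.new("ShaderNodeVolumeScatter")
pdens = nodes.new("ShaderNodeTexPointDensity")

pdens.point_source = 'PARTICLE_SYSTEM'
pdens.object = emitter
pdens.particle_system = emitter.particle_systems[0]
pdens.resolution = 128                  # voxel resolution of the density grid
pdens.radius = 0.2                      # influence radius of each particle

# Density drives the scatter density; the scatter shader plugs into the Volume slot.
links.new(pdens.outputs["Density"], scatter.inputs["Density"])
links.new(scatter.outputs["Volume"], output.inputs["Volume"])

domain.data.materials.append(mat)       # assign to the enclosing domain mesh
```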

The reason renders take so long is that Cycles has no way of varying the sample count based on which areas are of interest (which would be possible if Cycles got native OpenVDB rendering support). The solution to the render time issue is known; we just need someone to implement it.
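
To make the “varying sample counts” idea concrete, here is a toy illustration in plain Python (nothing to do with Cycles internals; the fake scene, noise model and budget numbers are all invented): spend a small uniform budget first, then hand the remaining samples to the areas whose estimates are still noisy.

```python
import numpy as np

# Toy adaptive sampling: every pixel gets a small base budget, then the extra
# samples go only where the estimate is still noisy. The "scene", the noise
# model and the budget numbers are all invented to illustrate the idea -- this
# is not how Cycles is implemented.

rng = np.random.default_rng(1)

def sample_pixel(i, n):
    """Stand-in for path tracing: n noisy estimates of some ground truth."""
    truth = np.sin(i * 0.1) ** 2
    return truth + rng.normal(0.0, 0.5, n)

num_pixels, base, extra_budget = 100, 8, 2000
means = np.zeros(num_pixels)
variances = np.zeros(num_pixels)

# Pass 1: uniform base sampling, used to estimate per-pixel noise.
for i in range(num_pixels):
    s = sample_pixel(i, base)
    means[i], variances[i] = s.mean(), s.var()

# Pass 2: hand out the extra budget proportionally to the estimated variance,
# i.e. the noisy "areas of interest" get most of the samples.
extra = np.floor(variances / variances.sum() * extra_budget).astype(int)
for i in range(num_pixels):
    if extra[i]:
        s = sample_pixel(i, extra[i])
        means[i] = (means[i] * base + s.sum()) / (base + extra[i])

print("most extra samples given to a single pixel:", extra.max())
```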

Holy crap, it’s Disney and Pixar. In three years, this complaining will be a thing of the past.