Cloud rendering in Cycles study (using Horizon Zero Dawn's volume rendering techniques)

Hello everyone

Recently, I wanted to give rendering clouds in Cycles another try with all the new tools available (Mesh to Volume, volume displacement and Point Density). However, I haven’t found any “realistic” cloud render examples in Cycles, so my goal is to try different techniques and find the best solution for “photorealistic” clouds.


image

Part 1: Research

The basis for modelling a cloud in all the works I looked at comes down to two simple steps:

  • Get a basic cloud shape (doesn’t have to be extremely detailed, even a sphere could work)

  • Distort the shape with noise (or something similar)

There are many ways to distort the shape with noise and they can vary according to the software, technique or render engine. For Cycles, I used two techniques:

  • Volume displacement modifier

  • Using a Noise texture (mixed with a Voronoi texture) to distort the vector coordinates of a Point Density texture
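Roughly, what that second technique does per sample point is the following (a sketch only; the exact blend of Noise and Voronoi is a matter of taste, and a, f and m here are just illustrative names for the distortion strength, noise scale and mix factor):

P' = P + a \left( \mathrm{mix}\!\left(\mathrm{noise}(fP),\, \mathrm{voronoi}(fP),\, m\right) - 0.5 \right)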

That’s it. These techniques are similar to what you can find in a lot of Blender tutorials, but the result still doesn’t look quite right… it looks like smoke, but not quite like a cloud; there’s something missing…

Here’s where Horizon Zero Dawn comes into the picture.

Part 1.1: Rendering research

The guys at Guerrilla Games figured out what was off with cloud rendering (or at least with their render engine) and tweaked the math involved a little. You can find the original presentation here: http://advances.realtimerendering.com/s2015/The%20Real-time%20Volumetric%20Cloudscapes%20of%20Horizon%20-%20Zero%20Dawn%20-%20ARTR.pdf

In short, the energy transmittance of a volume is usually calculated using Beer’s law, with the Henyey-Greenstein model to reproduce the anisotropy of the volume.
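For reference, the standard forms of those two ingredients are below: Beer’s law gives the transmittance over a distance t through a medium with extinction coefficient sigma, and Henyey-Greenstein is the phase function controlled by the anisotropy parameter g.

T(t) = e^{-\sigma t}

p_{HG}(\theta) = \frac{1}{4\pi} \cdot \frac{1 - g^2}{\left(1 + g^2 - 2g\cos\theta\right)^{3/2}}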


However, this model doesn’t deal very well with an effect that the guys at Guerrilla found: the dark edges around the clouds.

It’s important to mention that this effect is achievable with traditional Beer’s law and Henyey-Greenstein by using multiple volume bounces and more samples (as they mention in the presentation), but that wasn’t viable for a real-time engine, and in my experiments in Cycles even increasing the volume samples to 10 didn’t do much.

So, they came up with a technique: the Beer’s Powder equation.
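The idea is to multiply Beer’s law by an extra “powder” term that darkens the thin parts of the volume facing the light. Matching the code modification later in this post, the term can be written as:

E_{powder}(t) = 1 - e^{-2\sigma t}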


The Beer’s Powder equation allows volumes to have more pronounced edges (remember this, it will be important later). But is there a way to implement the Beer’s Powder equation in Cycles?

Part 2: Building Blender with Beer’s Powder for volumes

So, given the equations above, we can write the combined Beer’s Powder equation (the form the code below implements) as:
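E(t) = e^{-\sigma t} \cdot \left(1 - e^{-2\sigma t}\right)

That is, plain Beer’s law attenuation multiplied by the powder term.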

Is there a Blender source code file that we can modify to write the Beer’s Powder equation? Sure there is: https://github.com/blender/blender/blob/master/intern/cycles/kernel/closure/volume.h
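For context, in the version of volume.h I grabbed, the function we are about to change looks roughly like this:

ccl_device float3 volume_color_transmittance(float3 sigma, float t)
{
  return exp(-sigma * t);
}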

Here, around line 169, does it look familiar? That’s the traditional Beer’s law, so we just have to rewrite it as:

/* Beer’s law transmittance scaled by the “powder” term (1 - e^(-2 * sigma * t)). */
float3 init_energy = make_float3(1.0f, 1.0f, 1.0f);

return exp(-sigma * t) * (init_energy - exp(-sigma * t * 2.0f));

That’s it, we have the Beer’s Powder equation in Cycles!

In Part 2.1 I will show you some tests running the Beer’s Powder build of Cycles (I made the build with CPU-only rendering, and I found some bugs in the implementation).

In Part 3 I will show you a less invasive and less technical way to achieve similar results using a regular version of Blender; it’s the same technique used for the first image of this post.

12 Likes

Part 2.1: Beer’s Powder Blender Renders

First, let’s take a look at the cloud using the Point Density technique

Blender Vanilla Render:
image

Blender Beer’s Powder Render:
image

Not much of a difference at first sight; it looks almost the same on the lit side. On the dark side, however, we can see that Beer’s Powder has more energy transmittance, and if we look closer (in the lower right corner) we can see the pronounced edges of the Powder effect!

Both renders were made with the same density multiplier (10x) on the Point Density input (which will be important later on).

However, how does the Beer’s Powder technique look with other implementations, especially Mesh to Volume?

Blender Vanilla Render:

Blender Beer’s Powder Render:
image

Ok, that looks… terrible, but what can we learn from it? It seems like what the Beer’s Powder technique does is accentuate the edges of the volume, so that the edges reflect more light than they absorb, or in general, the edges reflect more light than the rest of the volume. Can we try something similar in vanilla Blender?

More info coming next in Part 3

7 Likes

This is extremely interesting, I’m looking forward to part 3 :slight_smile:

Part 3: Using Material Nodes in Cycles

The idea is simple: use a Color Ramp to isolate the edges of the volume, then increase the density of the edges and add it back to the density of the rest of the volume.
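In other words, the node graph ends up computing something like this per shading point (a hypothetical sketch of the idea, not the exact node values; powder_fake_density and edge_gain are just illustrative names):

#include <math.h>

/* Hypothetical sketch of the Color Ramp trick: treat the low-density band near
 * the surface of the volume as the "edge", boost it, and add it back on top of
 * the base density. The dense core barely changes, the thin edges get thicker. */
static float powder_fake_density(float density, float edge_gain)
{
  float edge_mask = 1.0f - fminf(density, 1.0f);    /* like a reversed Color Ramp */
  return density + edge_mask * density * edge_gain; /* boosted edges, same core */
}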

Node setup for Point Density: (first, distort the Point Density Texture)


Edge isolation:

Optional: add an Emission shader to fake more transmittance

Node setup for Mesh to Volume:


Point Density with just Principled Volume:
image
With Beer’s Powder effect:
image

It seems like the node setup doesn’t make much of a difference, but there is also something weird with the Point Density texture.

Density at x10:
image

Density at x1000:
image

Now, Mesh to Volume with Principled Volume:

Mesh to Volume with Beer’s Powder effect (with edge density at x1.5 of the general density)

x2.0 of the general density:

x4.0 of the general density:

x6.0 of the general density:

I will update the post soon with a denoised render version of the Mesh to Volume Beer’s Powder technique for better comparison

3 Likes

I like these tests, it would be nice to get the equation running in the code without flaws.
Isn’t the powder effect a kind of inverse Fresnel look on the volume (I know it’s not the same equation), where the cloud edges look darker depending on view and light direction?
And is the HG scatter direction used in the code as well?

1 Like

Yes, it would be great; however, I don’t have much knowledge of how to implement it better, nor of how the Cycles render engine works, nor of C :face_with_diagonal_mouth:. That implementation is as far as I can go. The purpose of this little research is to get people to look at it (and also to make people less scared of trying new things, modifying code, breaking some stuff and building their own Blender versions), and maybe someone with more knowledge can improve the technique (or at least explain what went wrong with this approach).

Yes, the HG scatter direction function remains untouched, and its values are still open to tweaking (it’s the classic Anisotropy slider in the volume material).

About the inverse Fresnel look: in theory it can be interpreted that way, but how do you calculate the Fresnel values of a volume? Remember that Fresnel is calculated from the normals of a surface. There is, however, an explanation of that effect in this video and its comments (again, I don’t understand everything said there, but it seems like a good explanation): https://www.youtube.com/watch?v=8OrvIQUFptA

1 Like

x4.0 of the general density: (128 samples, denoised)

2 Likes

It looks very interesting, but wasn’t Beer’s-Powder devised for real-time rendering? I’ve been thinking for a long while that a modification in Eevee to render like that would pull droves of users into rendering clouds, as it is easier to use than Unreal Engine and would probably allow for greater detail, depending on hardware.

I’ve been trying to reproduce this with the volume samples at max and the general samples at max. The closest approximation to the dark edges comes at an anisotropy above 0.7/0.75: light exit shows at the belly and the base of the hat, but the cloud lacks density. If I increase the density, both the dark edges and the deep scattering effect disappear, so anisotropy seems to play a role in how far the ray travels (probably for the sake of speed). Seems like a trade-off… To exemplify this, I’ve been looking at every all-purpose offline renderer around, and in my opinion, out of the box, the best solution to this problem comes from the guys at V-Ray.


But even then, they run into the same issues: modifying the density through the cloud while maintaining the silver edges is not possible, and neither is getting consistent behaviour depending on whether the light comes from the front or the back.

Both of these issues are explained in the deep scattering papers from Disney:

“As shown in Figure 3, the intricate shape of the Lorenz-Mie phase function produces characteristic visual effects such as silverlining, fogbow, and glory, which are lost or synthesized inaccurately when using an isotropic or Henyey-Greenstein approximation. At the same time, however, using the high-frequency, multi-modal Lorenz-Mie distribution makes it difficult to sample the high-energy, multi-scattered transport efficiently; too difficult to render our training data within reasonable time. We follow the suggestion of Bouthors et al. [2008] to “chop” the diffraction peak and lower the cloud density according to the fraction of scattered light contained in the peak. The explicit simulation of near-perfect forward scattering—the main source of sampling difficulties—is thus removed and accounted for implicitly by reducing the optical thickness; see the supplementary material for details”

image

“Since the silverlining stems from low-order scattering, we opt to use the full Lorenz-Mie phase function for the first bounce and then switch to the chopped version and its corresponding optically thinner volume. This preserves the effect of silverlining”

1 Like

Years ago some Houdini wizards tackled this issue, and one of them succeeded in identifying the problem and solving it, with a solution very similar to Disney’s:

"Here’s a simple but costly setup. Density is used to modulate the phase function so that it’s highly forward scattering (values over 0.9) where the cloud volume is thin and slightly more diffuse (values closer to 0.7) where the cloud is more dense. Scattering lobe changing it’s shape as the ray travels inside the cloud was one of the main observations done by Bouthors. My solution is a very simple mimic of the phenomenon but it already does a lot.

It has 64 orders of multiple scattering which might be more than enough. It also uses photon map to accelerate the light scattering. No Mie scattering LUT is used. Render time 3.5 hours.

It’s not identical to the Hyperion image but certainly has some nice features emerging. Some parts look even better IMO. I’ve also tone mapped the image with an ACES LUT and tweaked the exposure a bit to preserve the whites."
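A minimal sketch of that density-driven phase function (a hypothetical helper, assuming the density is normalized to the 0–1 range; Cycles doesn’t expose this directly, but the same mapping can be built with a Map Range or Color Ramp feeding the Anisotropy input of the Principled Volume):

/* Hypothetical helper sketching the idea above: thin parts of the cloud scatter
 * strongly forward (g near 0.9), dense parts are more diffuse (g near 0.7). */
static float cloud_anisotropy(float density)
{
  float t = density < 0.0f ? 0.0f : (density > 1.0f ? 1.0f : density); /* clamp to [0, 1] */
  return 0.9f * (1.0f - t) + 0.7f * t; /* blend from forward scattering to diffuse */
}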


I wonder if scatter depth could be added to the light path node in a future release. Cycles must know how many times a ray has scattered within a volume.

If we had that, you could key the anisotropy off scatter depth rather than density, since lower densities at the edges of clouds would naturally scatter fewer times.

1 Like

I guess volume depth could be added to the light path node to hack some things; options are always nice anyway. There was a conversation a while ago regarding this very subject, with some people also finding out about the limitations of the ray bounce limit. The number of ray bounces can’t be compared to that of Mitsuba, but maybe in the future with path guiding… who knows. Of course, for a renderer that aims to be fast and usable these are minor issues… Maybe for real-time Eevee there’s a potential conversation about trying to reproduce the hack from Horizon Zero Dawn, but for Cycles I don’t think this is even a question in the developers’ minds. Would be cool though…