Cycles Development Updates

And here’s the performance comparison.

AO Samples

It looks like the number of AO samples doesn’t modify the noise level so much as the accuracy of the effect. And it appears that 16 samples is indeed the best speed/quality ratio: lower values significantly reduce accuracy, while higher values increase render time without adding any significant accuracy.

What’s great is that even in such a synthetic scenario, where the scene is focused solely on the AO map effect (so it’s the most expensive shading effect in the entire scene), the 16-sample AO mask adds only 50% to the render time :slight_smile: In a complex scene, this fraction would be even smaller :slight_smile:

9 Likes

Well that’s that. Lukas, you should definitely make more stuff “just to end arguments”. :smile:

6 Likes

AO - cool stuff. Thanks Lukas.
It would be great to add something like VRayDistanceTex or CoronaDistanceTex.

1 Like

Distance map is a very different, and likely significantly more difficult, kind of map to implement. It works on quite a different principle. So I’d rather see the AO map finished into a usable form first, before we overload Lukas with tons of new requests :slight_smile:

In fact, the AO map can do most (though not all) of what a distance map can do. A distance map is just sometimes a bit faster at doing it.

1 Like

Small test. Used AO as a dust mask + limited edge damage (curvature) based on the Bevel node:

As artists, we are always trained to tell a story with our assets (e.g. wear from use, weathering, etc.). As far as I’m concerned, we pretty much have a full solution now. I just hope it makes it to master soon.

Thank you Lukas and Brecht for your amazing work, it’s very appreciated.

6 Likes

I also second Lukas making more things to end arguments! This node is pretty much perfect. One more vote to polish it up and get it into official Blender. (Also, can we have a proper curvature node too? A dot product with the Bevel node works, but doesn’t distinguish convex from concave edges.)
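To illustrate why the dot-product trick can’t tell convex from concave (a hypothetical numeric sketch, not Cycles code): the Bevel node tilts the sampled normal toward the edge by some angle, but the dot product only measures the size of that angle, not its sign, so both edge types give the same value.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def tilted_normal(theta):
    """Unit normal tilted by theta radians in the XZ plane."""
    return (math.sin(theta), 0.0, math.cos(theta))

n = (0.0, 0.0, 1.0)            # true shading normal
convex  = tilted_normal(0.3)   # bevel normal tilts one way at a convex edge
concave = tilted_normal(-0.3)  # ...and the opposite way at a concave edge

# cos(theta) == cos(-theta), so the dot product is identical:
print(dot(n, convex))   # ~0.955
print(dot(n, concave))  # ~0.955 -> indistinguishable
```

A node that exposed the signed angle (or the signed curvature directly) would separate the two cases.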

So pointiness is not the curvature?

Is this Real Life?

Simple Test Monkey:

00:11.59 = No AO Node
00:13.68 = 1 AO Sample (already useful result)
00:30.67 = 16 AO Samples
01:33.50 = 64 AO Samples

~18% increase in render time for an already useful result for masking purposes.
(On this very simple test case.)
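For reference, the overheads implied by the timings above (plain arithmetic, nothing Cycles-specific):

```python
base = 11.59  # no AO node, in seconds

# 01:33.50 = 93.50 s
timings = {1: 13.68, 16: 30.67, 64: 93.50}  # seconds per AO sample count

for samples, t in sorted(timings.items()):
    overhead = (t / base - 1.0) * 100.0
    print(f"{samples:2d} AO samples: +{overhead:.0f}% render time")
# -> +18%, +165%, +707%
```

So the +18% figure for 1 sample checks out, and 64 samples costs roughly 8x the base render time on this scene.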

The results seem to be quite “faceted”, though. Is this a side effect of the algorithm? (The model is smoothed.)
Ironically, it gets a bit blurrier the lower the samples are, so the lowest quality seems to be the best right now. Ha!

I also had an “illegal CUDA call” while fiddling around with the AO Node.
Shall I try to reproduce it, or is it still too early at this point?

Thank you so much Lukas! I’m baffled.

Yes, the artifacts are in the nature of the AO effect, but if you use the AO pass for procedural effects, you will almost never actually see them.

1 Like

The AO node is amazing and very fast! :heart_eyes: If there were an option for convex surfaces and tracing against other objects, this would already be production-ready from an artist’s standpoint. I had no crashes or anything.

Thanks to Lukas, Brecht and all other Developers for their work!!!

3 Likes

It actually does, but you’re mapping it non-linearly, so the noise can’t converge out properly anymore. If you keep it linear, more samples will accumulate to less noise just like regular AO.

Being able to map it non-linearly is why you need extra samples in the first place and it’s what you need for a sharp(er) division.

I can’t get a sharp division at any number of samples, but it should be possible with falloff/length parameters like the ones other renderers have.
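The convergence point above can be shown with a toy Monte Carlo experiment (generic Python, not Cycles internals, and the numbers are made up): if a sharp threshold is applied to each noisy N-sample AO estimate, the result converges to the *expected value of the thresholded estimate*, which stays soft no matter how many path samples accumulate; only thresholding the fully converged value would be sharp.

```python
import random

random.seed(0)
TRUE_AO = 0.5    # assumed true unoccluded fraction at this shading point
THRESHOLD = 0.4  # sharp non-linear mapping: white if estimate > 0.4

def ao_estimate(n):
    """n-sample AO estimate: fraction of random rays that are unoccluded."""
    return sum(random.random() < TRUE_AO for _ in range(n)) / n

def mapped_average(n_ao, paths):
    """Average the *thresholded* estimate over many path samples."""
    return sum(ao_estimate(n_ao) > THRESHOLD for _ in range(paths)) / paths

sharp = float(TRUE_AO > THRESHOLD)  # thresholding the true value gives 1.0
print(mapped_average(4, 10_000))    # ~0.69, not 1.0: the edge stays soft
print(mapped_average(4, 100_000))   # more path samples reduce noise, not bias
```

With 4 AO samples the estimate can only take the values 0, 0.25, 0.5, 0.75, 1, so the thresholded average settles at P(estimate > 0.4) = 11/16 ≈ 0.69; a true falloff/length parameter has to act inside the AO loop, before averaging, to stay both sharp and convergent.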

You can cut down the overhead by about 50% by only tracing it on primary rays, so your estimate of 25% overhead was pretty much spot on.

More polygons will definitely make this effect relatively more expensive.

2 Likes

That’s not acceptable. If you use this to procedurally shade a material, you want that material to have the same appearance in a mirror, for example.

Polycount doesn’t affect this too much. It usually comes down to the convexity/concavity of the surface. If you put such a shader on a completely flat plane, it will render equally fast regardless of whether the plane has one polygon or one million of them.

Well, you can add “Glossy” in there if you want, I think Cycles has some roughness threshold after which “Glossy” becomes “Diffuse” (I may be wrong about this).

I now understand where you’re coming from though: you would rather have 50% overhead on a material that “works everywhere” than an optimized material with only 25% overhead, rather than spending 10 minutes on a crappy unwrap and an AO texture bake with 0% overhead and no noise.

And… that’s fine. I mean, it makes my heart bleed a bit over the collective man-centuries spent on getting another single-digit percentage of performance improvement out of 3D renderers… but it’s fine (yells at cloud).

Polycount affects the cost of BVH traversal, which affects the cost of any ray cast. Concavity increases path length and therefore the number of secondary AO evaluations, but unless it’s a mirrored reflection, those are pointless and you shouldn’t do them anyway.

If you don’t believe me, you can just test this with a subdivided Suzanne, which is just as concave at subdivision level 5 as it is at level 0.

The patch is there; just test the AO color shader with a Suzanne at subsurf level 0 and then the same at subsurf level 5 to see who is correct (correcting for the additional time needed for the preparation phase).

Also, as with all shaders, the usefulness is limited if you can’t use it inside scenes with reflective materials and/or glass (which would be the case if the shading were only visible to primary rays). You would easily notice effects missing from parts of the scene; such arbitrary limitations were one of the key reasons to get rid of BI.

Limiting the effect to primary rays is just as valid as the filter glossy option, or any other optimization that adds some bias to the output.

These limits could be applied with a node group, but having it built into the shader itself would probably be faster, and would save the headache of setting up the ray depth + math nodes necessary to make that happen.

I am worried about this. We don’t want a ridiculously complicated AO shader for every possible use case, because then it will become a pain to use and will likely discourage Lukas from ever finishing it. I think we just need the bare minimum at the moment. Tomorrow, I will write up a concise post of what it should do, along with GIF examples.

The somewhat blurred caustics via filter glossy can be seen in mirrors or behind a piece of glass, low values can even produce a biased version of effects that otherwise wouldn’t be seen at all.

That is a little different from a sizable shading effect missing completely in some views. That, and the fact that Brecht is generally against adding a big pile of hardcoded toggles like you see in engines such as V-Ray and the now-defunct Mental Ray.

The reason behind this is that scenes often consist of more than one object, and these crappy unwraps add up. You frequently have scenes with thousands or even tens of thousands of objects, and unwrapping all of them is rather impossible.

Texture bakes are also really impractical for certain things. In animations, for example, you can’t really use them. Or at least you’d have to use animated AO bakes, which is impractical in its own right, as you would have to upload very large files to your render farm. If the customer requests a change, you have to bake again. And customers always request changes. If the customer suddenly decides they want a close-up, the resolution might not be large enough. Baking adds an extra step, which adds a source of errors: you might forget to bake or update the AO texture and waste an enormous amount of time rendering with the wrong AO map.

All in all, a more or less one-click solution is often the best solution, even if it adds a significant amount of render time.

1 Like

Hi, glad to hear that it works.

  • AO against the entire scene (not just the object) is possible, I’ll add a checkbox
  • AO towards the interior is possible, I’ll add a checkbox
  • The result is faceted because the code currently samples the hemisphere around the geometric normal, not the shading (smoothed) normal. Changing this would mean that some rays hit the object itself, so you’d get non-white AO even on e.g. a sphere, which is clearly not desired. One workaround for this might be to abuse the VNDF sampling algorithm that’s used for GGX BSDF sampling: this way the code could sample a distribution that is centered around the shading normal but clipped by the actual geometry. Since the probability density is irrelevant here as long as the sampling method is perfect (which VNDF sampling is), this should have no measurable slowdown and is quite simple to implement.
  • I’m not a fan of automatically disabling it for some rays. Since this will be used a lot to blend to a different texture near corners, completely missing this color change in secondary rays would result in very noticeable absence of color bleeding even for diffuse rays. If you really want to disable it for some rays for performance reasons, you can always mix it with a simpler version based on ray type, the code is smart enough to skip evaluation of subtrees with weight zero.
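The “skip evaluation of subtrees with weight zero” behavior can be sketched with a toy evaluator (hypothetical Python, not the actual Cycles kernel code): when a mix factor is exactly 0 or 1, the unused branch is never evaluated, so an expensive AO lookup mixed in by ray type costs nothing on rays where its weight is zero.

```python
calls = {"ao": 0}

def expensive_ao():
    """Stand-in for the costly AO evaluation; counts how often it runs."""
    calls["ao"] += 1
    return 0.8

def cheap_constant():
    return 0.2

def mix(fac, a, b):
    """Weighted mix that skips subtrees whose weight is zero."""
    if fac == 0.0:
        return a()
    if fac == 1.0:
        return b()
    return (1.0 - fac) * a() + fac * b()

# Secondary ray: AO branch has weight 0 -> AO is never traced.
mix(0.0, cheap_constant, expensive_ao)
print(calls["ao"])  # 0

# Primary ray: AO branch has weight 1 -> AO evaluated once.
mix(1.0, cheap_constant, expensive_ao)
print(calls["ao"])  # 1
```

So wiring a Light Path “Is Camera Ray” output into the mix factor already gives the primary-ray-only optimization for free.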

Expect a new version in ~3h I guess :slight_smile:

12 Likes

We should crowdfund some nodes for Cycles :rofl: