Bidirectional Pathtracing and Caustics

Now, I know this has been discussed a lot. In fact, I want to discuss one of these threads, more specifically this one ( https://blenderartists.org/forum/showthread.php?399296-Bidir-Pathtracing-for-cycles ). It was said in this thread that photorealistic engines like LuxRender have the downside of not giving the artist much freedom, because they are locked to real-life lighting (tricks like the Light Path node of Cycles would be impossible in such engines, for example). The point is that today there is a growing demand for photorealism. Of course, Cycles is perfectly able to achieve photorealistic renders, and it actually is more than enough for the majority of cases, where caustics don’t play a big role. However, for scenes like an underwater shot, you can’t simply disregard caustics; they are a HUGE part of the scene. Sure, you can fake them in Cycles, but having an engine that can calculate them would be more realistic and practical in this situation.
In the thread, it was said that bidirectional pathtracing still is not widely used in the industry, which is true for now, but I believe this is starting to change. As this link shows, Pixar used this method in Finding Dory ( https://graphics.pixar.com/library/BiDir/paper.pdf ). So I believe that those looking for a more photorealistic and unbiased engine in Blender have a point.

The paper talks about how to implement bidirectional sampling for some common production cheats like ‘thin shadows’, but it doesn’t really get into how to implement that sampling for often-used shading effects like custom fresnel/facing curves (mixing shaders based on viewing angle is said to be either difficult or impossible with known bidirectional techniques).

The same may apply to any shading magic that depends on the position of the camera (because you’re not just tracing from the camera anymore). There are a lot of cool shaders people have made that would break as soon as bidirectional rendering gets switched on (and the solution to keep them working is not immediately known).
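To make that concrete, here’s a toy Python sketch (purely illustrative, no real engine API) of a facing-ratio mix like the one you’d build with the Layer Weight or Fresnel nodes. The weight needs a view direction; on a camera path that’s just the eye ray, but on a light subpath the only direction available is the photon’s travel direction, so the same shader would evaluate inconsistently:

```python
def facing_mix_weight(normal, view_dir):
    # Facing-ratio weight in the spirit of the Layer Weight node:
    # 0 when the surface is viewed head-on, approaching 1 at grazing angles.
    cos_theta = abs(sum(n * v for n, v in zip(normal, view_dir)))
    return 1.0 - cos_theta

# Traced from the camera, the view direction is simply the eye ray,
# so the mix weight is unambiguous:
print(facing_mix_weight((0.0, 0.0, 1.0), (0.0, 0.6, 0.8)))  # 0.2

# Traced from the light, the direction at the hit point is the photon's
# travel direction, which generally differs from the eventual eye ray,
# so the very same shader yields a different weight:
print(facing_mix_weight((0.0, 0.0, 1.0), (0.8, 0.6, 0.0)))  # 1.0
```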

Thirdly, even if the first point becomes a non-issue, it is well known that bidirectional implementations tend to get very complex from a code perspective, and it’s not easy to implement such a thing in a way that can be readily understood (especially if it needs to support a full feature set). Bidirectional pathtracing is presently among the most complex rendering algorithms in production use, and most engines that do have it offer limited or no support for some features.

@OP
Better to experience PRMan caustics for yourself (IMHO, not yet good for production).

Other notable engines: Lux, Indigo, Maxwell, Arion, Mitsuba, Iray, Vray, Yafaray…

Composite, Fake or Forget… :wink:

Also, nice to know @ 18:20

@Evan Avast: Cycles can do caustics…they are just not very fast and can be noisy.

Thanks for all of the feedback ;). Indeed, it looks like caustics will continue being a problem for CG for some time, lol. Anyway, there is LuxRender if your scene really needs them. Is there a way to convert Cycles materials to LuxRender? On a side note, do you know if POV-Ray can achieve the same results as LuxRender?

I don’t think POV-Ray can be compared to LuxRender in terms of realism. A good solution for caustics I saw was to render everything in Cycles and then add caustics rendered in YafaRay during compositing.

POV-Ray definitely can do that; it is just currently slower than Lux for this task. I have been told that this is because it’s still using RGB, but spectral rendering is high on the developers’ to-do list for future versions.

I was thinking: is it possible to code a renderer which starts with a few light rays coming out of emissive materials until they hit the camera, and progressively increases the number of rays until it notices a convergent pattern being formed, which would be the real picture (or at least a close approximation)? For example, take a look at this site:


You can play around positioning different objects and light sources. My idea is that the renderer would start with a few single rays (set the “Ray Density” option all the way to the left on the site, for example) and would gradually increase the number of rays in the scene (as if you were increasing the Ray Density option on that site), until it reaches some sort of converged image which would be a close approximation of real life. Of course, with this method, a lot of unnecessary rays (rays which won’t ever hit the camera, so they won’t affect the final render) would still be calculated, so this is a huge problem with the idea.
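For what it’s worth, here’s a toy 2D Python sketch of that idea (all numbers and names made up): a point light shoots rays at random, the handful that pass through a small pinhole “camera” slit get binned on a 1D sensor, and the ray count doubles each pass until successive estimates agree. It also shows the waste you mention, since almost every ray misses the pinhole:

```python
import math, random

def light_trace_pass(n_rays, sensor, aperture_x=5.0, aperture_half=0.05):
    # Shoot n_rays from a point light at the origin in random directions;
    # bin the few that happen to pass through a small pinhole "camera"
    # slit on the plane x = aperture_x.
    for _ in range(n_rays):
        angle = random.uniform(0.0, 2.0 * math.pi)
        dx, dy = math.cos(angle), math.sin(angle)
        if dx <= 0.0:
            continue  # heading away from the aperture: a wasted ray
        y = (aperture_x / dx) * dy  # height where the ray crosses the plane
        if abs(y) < aperture_half:
            i = int((y / aperture_half + 1.0) / 2.0 * (len(sensor) - 1))
            sensor[i] += 1

def render_until_converged(bins=32, start_rays=1024, tol=0.01, max_passes=12):
    sensor = [0] * bins
    prev, n = None, start_rays
    for _ in range(max_passes):
        light_trace_pass(n, sensor)
        total = sum(sensor)
        img = [v / total for v in sensor] if total else sensor[:]
        # Stop once successive normalised estimates agree ("convergence").
        if prev is not None and total > 50 and \
                max(abs(a - b) for a, b in zip(img, prev)) < tol:
            return img
        prev, n = img, n * 2  # double the ray density, like the slider
    return prev

print(render_until_converged())
```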

There are a couple of conversion buttons, but I haven’t had luck with them converting everything. On another note, bidirectional path tracing would give great caustics, but Lux and a few others use Metropolis Light Transport sampling on top of a bidirectional pathtracing integrator, which really makes caustics shine. Cycles will get there eventually, but after 20 min on the GPU here, the caustics were just beginning to show; after 40 min on the CPU in Lux, they were already well established.

Cycles 20 min GPU:


Lux 40 min CPU:


Setting the filter glossy value to 0.1–0.2 will bias the caustics a little bit, but they will show up far better than with the value left at 0.
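If you’re scripting renders, that setting is exposed in Blender’s Python API. A minimal sketch:

```python
import bpy

# 0.0 = no filtering (unbiased, sharpest but noisiest caustics); higher
# values clamp the roughness of glossy/glass bounces seen along blurry
# paths, trading a little bias for much faster-cleaning caustics.
bpy.context.scene.cycles.filter_glossy = 0.1  # the 0.1-0.2 range above
```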

Also, when I tried Lukas Stockner’s initial attempt to bring MLT to Cycles, it allowed Cycles to do reasonably well with difficult light paths, even though it was unidirectional (I would like to see if it would be possible for Cycles to do MLT with the trickier paths only and do generic path tracing with the rest).
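For intuition on why Metropolis helps with those tricky paths, here’s a toy 1D Metropolis-Hastings sketch (purely illustrative, nothing to do with Lukas’s actual patch): once a mutation stumbles into a narrow “bright” region that uniform sampling would almost never hit, the chain keeps exploring around it:

```python
import random

def brightness(x):
    # Stand-in for a path's contribution: a narrow "caustic" spike at
    # x ~ 0.7 that uniform sampling finds only ~2% of the time.
    return 1.0 if 0.69 < x < 0.71 else 0.01

def metropolis(n_samples, mutation=0.05):
    x = random.random()
    fx = brightness(x)
    samples = []
    for _ in range(n_samples):
        # Propose a small mutation of the current "path" (here: a 1D jitter;
        # the clamp at the borders makes this a rough toy, not proper MLT).
        y = min(1.0, max(0.0, x + random.uniform(-mutation, mutation)))
        fy = brightness(y)
        # Accept with probability min(1, f(y)/f(x)): bright paths persist.
        if fy >= fx or random.random() < fy / fx:
            x, fx = y, fy
        samples.append(x)
    return samples

hits = sum(1 for s in metropolis(10000) if 0.69 < s < 0.71)
print(hits)  # typically far more samples in the spike than the ~2% uniform rate
```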

Filter glossy, even at .05, would ruin those caustics above.


Filter glossy is something I’d use when glossy surfaces transport light and faking it with the Is Diffuse Ray → see-as-diffuse trick isn’t a valid option (I pretty much always use that trick, though, with caustics turned off while it’s active). I wouldn’t use it for portraying sharp caustic patterns.
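For reference, a minimal sketch of that see-as-diffuse setup via Blender’s Python API (node identifiers as in recent Blender releases; normally you’d just wire this up in the node editor):

```python
import bpy

mat = bpy.data.materials.new("glossy_seen_as_diffuse")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

out   = nodes.new('ShaderNodeOutputMaterial')
mix   = nodes.new('ShaderNodeMixShader')
path  = nodes.new('ShaderNodeLightPath')
gloss = nodes.new('ShaderNodeBsdfGlossy')
diff  = nodes.new('ShaderNodeBsdfDiffuse')

# When the incoming ray is a diffuse bounce, present a diffuse closure
# instead of the glossy one: indirect light "sees" a matte surface,
# which is far cheaper to sample than a true caustic.
links.new(path.outputs['Is Diffuse Ray'], mix.inputs['Fac'])
links.new(gloss.outputs['BSDF'], mix.inputs[1])  # Fac = 0: camera rays
links.new(diff.outputs['BSDF'], mix.inputs[2])   # Fac = 1: diffuse rays
links.new(mix.outputs['Shader'], out.inputs['Surface'])
```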

Would it be possible to have a separate renderer available, even if it could only be used for baking purposes/caustic pattern generation? I’m not willing to trade away all the flexibility in the current system if it had to be lost because of bidirectional rendering.

… for me it’s not only about caustics, but all the extras that come with intricate light bending :wink:
Thanks to Lux and its experimental SPPM, such sims can be achieved in quite acceptable times
(with caustics in reflections & behind glass).

~ Less than 2h (4x3 K)

But what impressed me most was the Ocean renderer (IIRC a couple of minutes (2-3) on an MS Surface Pro 3 :eek: … the AVX instruction set does some crazy fast magic).

There is a reason BD path tracing has never been able to establish itself as a good alternative: there are nowadays much more efficient algorithms than bidirectional for caustic work, and caustics do make a difference in more scenes than you might think. The first problem is that light shooting in bidirectional is not really directional or focused on the problem at hand, and uses too much artillery for returns that diminish after each bounce and pass. Any strong sampling curve must multiply not only the sampling and shooting settings but also the visibility rays in each event, and that’s too much entropy for a desktop system. SPPM is much more efficient because there are no visibility rays, only statistics, and you can keep good variation by changing the eye rays in every pass.
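For reference, the heart of SPPM really is just a per-hit-point statistics update (the radius/flux reduction from Hachisuka & Jensen’s 2009 paper); a minimal sketch:

```python
def sppm_update(radius2, n_photons, flux, new_photons, new_flux, alpha=0.7):
    # One SPPM statistics update for a single hit point. alpha in (0, 1)
    # controls how aggressively the gather radius shrinks as photon
    # statistics accumulate; 0.7 is the value suggested in the paper.
    if new_photons == 0:
        return radius2, n_photons, flux
    n_next = n_photons + alpha * new_photons
    ratio = n_next / (n_photons + new_photons)
    # Shrink the squared gather radius and rescale the accumulated flux
    # by the same ratio so the estimate stays consistent while variance
    # keeps dropping pass after pass.
    return radius2 * ratio, n_next, (flux + new_flux) * ratio
```

No visibility rays anywhere in that update, and re-jittering the eye rays every pass is exactly the “statistics instead of visibility rays” point above.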

Yet (especially combined with a node trick tweaking glass/gloss roughness via light paths), you can obtain things like caustics in reflections (something which is not possible otherwise).

In a path-based environment, it’s a tradeoff you have to consider (and almost certainly better than nothing, and better than a noisy render that takes forever to converge).

On that Ocean Render image, I just love how it seems to have an intelligence in where to place the watermarks (so it’s almost impossible for a thieving user to remove them). :slight_smile:

Wait, is it possible to create fake reflective caustics in Cycles?

Well, it seems the conversion works only with standard shader mixes; when you start with complex nodes, many will not be retained, as they are not necessarily the same between Cycles and Lux. So I think it is way faster to just redo them in Lux, and vice versa.

Yeah, the converter is really not that good. It was not exactly fun to code, so I didn’t pursue it further.
It works only with very simple node setups, like a textured matte or glossy.
Here are some more details: http://www.luxrender.net/forum/viewtopic.php?f=11&t=11101&p=120900&hilit=cycles+converter#p120900
Anyway, the result of any converter without serious AI behind it will probably not be as good as what a human can do. So if you want maximum performance, do the conversion by hand. The converter can still help by converting the boilerplate, e.g. lights and simple materials.

Btw. someone else started to write a converter, too, but I didn’t try it: https://blenderartists.org/forum/showthread.php?387406-Addon-Plugin-LuxRender-Material-Converter

The biggest problem would be that the majority of the rays emitted would never hit the camera, so they wouldn’t affect the final render in any way.
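A quick back-of-envelope calculation (illustrative numbers only) shows just how bad that waste is for a naive forward tracer:

```python
import math

lens_radius = 0.01  # a 1 cm camera aperture (made-up value)
distance = 5.0      # light source 5 m away from the camera (made-up)

# Fraction of the full sphere of emission directions that the aperture
# subtends: disk solid angle over 4*pi steradians.
fraction = (math.pi * lens_radius**2) / (4.0 * math.pi * distance**2)
print(fraction)  # ~1e-6: roughly one ray in a million arrives directly
```

That’s why practical light tracers connect light-path vertices to the camera explicitly instead of hoping rays hit it by chance.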