Bidir path tracing for Cycles?

I have wondered at times if Cycles can have a metropolis sampler that is only applied to caustic bounces (using the ray-type logic to determine whether to start a chain from that specific path).

If that could be done, then we could have the benefits of metropolis sampling without the drawback of splotchy noise patterns across the entire scene (the other path types would still be sampled uniformly, as they are now). My thinking is that, because of its partial nature, it could largely just sit on top of the current sampling code as an extension.
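To make the idea concrete, here is a minimal Python sketch of such ray-type gating. Everything here is hypothetical (the `Path` record, the `trace`/`mutate` callbacks, and the "specular then diffuse" caustic test stand in for Cycles' actual ray-type flags): it only illustrates seeding short Metropolis chains from caustic paths while leaving all other paths uniformly sampled, and it glosses over the global normalization a real MLT integrator needs.

```python
import random
from collections import namedtuple

# Hypothetical path record: the surface type hit at each bounce, plus the
# path's total energy. Names are illustrative, not actual Cycles API.
Path = namedtuple("Path", ["bounce_types", "energy"])

def is_caustic_path(path):
    # Treat a specular bounce followed by a diffuse one as "caustic",
    # standing in for Cycles' ray-type logic.
    t = path.bounce_types
    return any(a == "specular" and b == "diffuse" for a, b in zip(t, t[1:]))

def metropolis_refine(path, mutate, rng, steps=8):
    """Short Metropolis chain seeded from one caustic path: mutate the path,
    accept mutations proportionally to their energy, average visited states.
    (A real MLT integrator also needs a global normalization pass.)"""
    current, acc = path, 0.0
    for _ in range(steps):
        candidate = mutate(current)
        a = min(1.0, candidate.energy / max(current.energy, 1e-12))
        if rng.random() < a:
            current = candidate
        acc += current.energy
    return acc / steps

def sample_pixel(trace, mutate, n, seed=0):
    """Uniform path tracing everywhere; chains only for caustic paths."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        p = trace()
        total += metropolis_refine(p, mutate, rng) if is_caustic_path(p) else p.energy
    return total / n
```

The point of the gating is that non-caustic pixels keep the exact noise characteristics of plain path tracing, while only caustic-carrying paths pay for (and benefit from) the chain exploration.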

I don’t see forward path tracing going anywhere in favor of BiDir any time soon, and it really has nothing to do with hardware. Caustics are easy to fake (you can look at my ubershader for the node setup) when you “need” caustics, and the ability to art direct shaders and lighting will always trump realism in production. Bidirectional path tracing is simply too rigid to fill the gap, and there’s been next to zero research into extending it to make it more artist-friendly. It doesn’t help that the number of people who truly understand the intricacies of the BiDir algorithms could fit in a small room. It’s not nearly as simple as just adding another direction to regular path tracing, it’s an incredibly complicated field of study even for PhD-level coders and researchers.

@BeerBaron, Charlie, LazyVirus
It is a necessity for product viz/archviz. Other areas are more or less carefree about the proper distribution of light.
And i do use Lux, Mitsuba, Corona, Maxwell, PRMan, VRay and test many others… but that’s not the point here.
There’s only one Cycles, and helping it perform as well as it can will benefit all users with better quality in their visualizations and knowledge. Especially those who are not as adaptable, as versatile or as quick to accommodate to changes in environments, styles and behavioral patterns, and who work specifically in Blender (since it’s open source, and thus available to everyone).
Please, don’t post if you mean to be negative about finding a solution to the core problem.
This is just a simple excursion, a contemplation on the possible and viable evolution of an engine.

@m9105826
Is your ubershader the one made via the tutorials on CGCookie? I did test it. I also got Prism and experimented with a few other solutions, and did my own dirty hacks to get close to real, though nothing was ever as satisfying as Maxwell, Lux or even VRay (in some cases). In that respect they all beat Cycles in render time, usability and, finally, marketing value. There’s a strange, unpleasant imbalance in caustics within shadows (kind of like an uncanny valley effect). For now I hope the new Smith model will bring nice changes, at least some pleasure. I do hope.

I have noticed that many architects, event managers and designers have gotten used to the VRay style of imagery (even if it looks a bit odd, flat, saturated and stylized), while on the other hand product designers, jewelers, carpenters, inventors, engineers, doctors… are straightforward realists: they find pleasure in truth and notice flaws in realism that cannot be obscured by flares, aberration, glare, glow and other disruptions. I am beginning to distinguish clients by profession to more easily define the value and time needed to get the final work done. My next stop is food & beverage viz (restaurants, bars, bakeries…), since groceries are a bit harder and more time-consuming to model and texture, though photogrammetry has simplified the job enough to be competitive with photographers. And caustics do play a big role with fresh fruits, veggies, cake glazing, drinks, accessories… the devil and god are in the details.

Well, thanks for all your inputs.

I strongly disagree with that. Sure, caustics depend on the lighting situation, but you will notice them everywhere if glossy or refractive surfaces are around.
For photorealistic rendering they are very important. I think it’s exactly like enilnacs said: you don’t want to miss them anymore once you have had them.

I quickly shot some images for illustration, and these aren’t even refractive materials, just metals and glossy plastic.

Attachments



So adding different GI calculation methods will change the nature of Cycles? I’ve stated quite clearly that I’m talking about a conflict of nature, not of technical/usability limitations.
I strongly believe Cycles has no focus by nature; the focus people believe it has is nothing other than BF/BI’s focus, which is proven by the fact that all the decisions you’ve listed (and so on) are taken by its only (more or less) developer (BF/BI) based on its own focus (dreaming of Pixar). Again, I’m not being polemic at all, and I’m trying to be as clear as I can.
I just want to say that if people start to strongly believe that Cycles should think in only one way because of its nature, or because some other “modern, fashionable and superdupercool” renderers do it that way, it will miss a chance to be… more valid? …more useful? …or maybe different?
When consensus meets the dictatorship of an idea, nothing good is ever born.

Me?
I’ve said clearly that I would agree with the addition of other GI calculation methods to Cycles.
The only “negative” comment was about BDPT, but not strictly; I just think it’s not worth the effort.
I’m also not so positive about G-PT. From what I’ve seen, it doesn’t handle reflective shaders that well. If that’s the case, the only way I’d find it useful would be with adaptive sampling, maybe… but I don’t know if it would be worth it then… oooh, don’t make me think so much! I’m not used to it. :ba:
For me, whatever comes is good, on the condition that there is someone with the good will to maintain it.

Yes, it’s very subtle, that’s the entire point. It’s a minimal contribution. Show me the difference images for a representative range of real-world scenes if you want to make a credible argument that this is an important feature to have.

I argue it isn’t important at all for the vast majority of scenes. If it was important, everyone would be using Maxwell Render instead of Arnold Render.

I don’t find that your photos contradict my statement; maybe you read it too literally. Sure enough, if I run around the house looking for caustics, I’ll find them. If you want to render household objects and the caustics are somehow important, just go ahead and use Luxrender.

However, you should be looking for interesting things to render. Look at this gallery. Do you see a big potential for caustics here, in all honesty?

There’s only a finite amount of development resources and efficient caustics are one of the most problematic and least useful features of all possible features to have. The best possible version of Cycles is therefore not wasting any effort on caustics.

If you argue for caustics, you’re basically arguing that people who render jewellery or bathrooms (etc.) deserve a grossly disproportionate amount of development time. Instead, those people could just use a different renderer.

@Beerbaron

Subtle doesn’t mean NOT important! The GI difference between a PT and a very good scanliner is ALSO subtle, but it’s EXACTLY that subtlety that makes the perceptual difference and crosses the photorealistic uncanny valley, for the “OH!” moment! Look at the examples above.

The many (as you say) real-world scenes have exactly that: FULL light transport. Nobody will dispute that a full light transport solution DOES look better. The fact that Maxwell Render is NOT used every time (because it’s still slow!) is not an argument that we don’t NEED that precision.

If Cycles were full MLT/BiDir and ran like a snap, NOBODY would use PT anymore, just like nobody uses the old BI or a scanliner or an SL raytracer anymore (for photoreal renderings, of course).

So the whole “vast majority of scenes” argument is going nowhere, because it assumes by default that the quality we have in that “vast majority” is enough at all. If you rendered them in MLT, for example, there would be major and subtle uncanny-valley differences all the way!

Yes, it is enough for NOW, but only because we are still SO VERY FAR from the truth that there is still a long way to go (just look at energy conservation/distribution!). To say that normal effects like caustics are not important is like saying a car needs no lights for driving.

That, as you said, the “amount of developers” is limited is another topic by itself, and yes, it’s reasonable. But it has nothing to do with what is stated here, which is (always!) the need for better render precision and the importance of quality!

Oh, and by the way, you said everyone would be using Maxwell instead of Arnold. Yes, they do use Arnold, but not Maxwell!
Arnold will slowly be gone IF they don’t change to full light transport (which they’re already experimenting with in-house, AFAIK), at least for the very serious big players: Disney is on VCM/Hyperion, Weta on Manuka, Animal Logic on Glimpse (a bidir version!), and many others are going the MLT/BiDir way…

Since these images are 90% characters/faces, there is no need for caustics at all. I agree that there are more scenes that don’t need caustics than there are scenes that profit from them.

But…
Some examples from the gallery you posted:
http://kutsche.cgsociety.org/art/bulb-3ds-max-bright-mental-light-ray-andre-photoshop-kutscherauer-rhino-wwwak3dde-ak3dde-selfillumination-2-3d-434863
http://vitorugo.cgsociety.org/art/3ds-max-photoshop-vray-zbrush-self-portrait-3d-1053110
http://gtsw.cgsociety.org/art/dragonfly-3ds-max-photoshop-vray-dragontfly-3d-559645
http://zuliban.cgsociety.org/art/3ds-max-frogs-3d-227889
http://dareoner.cgsociety.org/art/3ds-max-shake-vray-ice-diamond-shader-3d-270992
http://cgpro.cgsociety.org/art/hebdomas-lightwave-mechanical-3d-pocket-watch-mechnical-v-134716
http://lhnova.cgsociety.org/art/3ds-max-vray-kawasaki-ninja-zx10r-3d-291569
http://hynol.cgsociety.org/art/escher-3ds-cube-max-figure-impossible-photoshop-renewed-eschers-3d-360371

Examples of CG Cars:
http://other00.deviantart.net/7141/o/2015/038/d/d/dd16141660cc8545a7ca962d50be6235.png
http://other00.deviantart.net/fdd0/o/2014/175/7/2/72e7fe0560c85581856860d73119dacc.jpg

that don’t need caustics…

Caustics are a normal side effect of rendering.
A pixel value is something that gets averaged out over multiple passes of recalculating that pixel on a tile.
Essentially, a photon can travel and bounce quite randomly in a scene, thanks to noise/roughness, bumpy surfaces, transparency, etc.
So until a lot of rays have hit and been averaged out, Cycles produces raw samples of a pixel.
Once you enable a lot of bounces for light and reflections, each bounce is a new calculation with a random direction:
e.g. go through the glass, randomly bounce back, or randomly change direction because of roughness.

Those rays of light eventually end up somewhere, but they are quite chaotic,
and thus not every pixel has an equal amount of certainty and brightness
(computer screens are limited to RGB 0…255, or 32k colors, whereas the real sun has no such limit).
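The averaging described above can be boiled down to a toy Python sketch (the numbers are invented, purely for illustration): each raw sample is one random bounce outcome, a rare lucky path is enormously bright, and the pixel is just the mean of the samples.

```python
import random

def sample_radiance(rng):
    """One raw sample of a pixel: mostly ordinary dim bounces, with a small
    chance of finding an improbable, very bright caustic-like path.
    (Toy numbers, purely for illustration.)"""
    if rng.random() < 0.001:      # a rare lucky path straight to the light
        return 500.0              # shows up as a white "firefly" pixel
    return rng.uniform(0.4, 0.6)  # ordinary chaotic bounce

def render_pixel(n, seed=0):
    """The pixel value is just the running average of many raw samples."""
    rng = random.Random(seed)
    return sum(sample_radiance(rng) for _ in range(n)) / n
```

At low sample counts the pixel flickers between roughly 0.5 and huge outliers; the true mean (about 1.0 with these toy numbers) only emerges after enough samples to average the rare paths out, which is exactly why caustics converge so slowly in plain path tracing.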

There have been various things done to reduce the noise while still getting a good result; some stayed in beta, some made it into master.

  • Metropolis sampling (resolves groups of similar rays; the math is more complex and it didn’t gain speed overall, but it led to some amazing results).
  • Adaptive sampling (it rendered until the noise level was acceptable).
  • Light portals (telling Cycles how a group of rays can enter through a window).
  • Clamping of light.
  • Currently in GSoC: a denoising method inside Cycles.
  • Updates to the noise patterns (Sobol / Correlated Multi-Jitter).
  • Updates to the shaders.
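Of the techniques listed, clamping is simple enough to sketch in a few lines of Python (a toy illustration of the idea behind Cycles' clamp options, not Cycles code):

```python
def render_pixel_clamped(samples, limit):
    """Cap each raw sample at `limit` before averaging. Fireflies vanish
    immediately, but the energy in the clipped paths is lost, so true
    caustics come out darker than they should (the clamp trades bias
    for variance)."""
    return sum(min(s, limit) for s in samples) / len(samples)
```

For example, `render_pixel_clamped([0.5, 0.5, 500.0, 0.5], 2.0)` trades the firefly for a slightly biased 0.875 instead of the noisy unclamped average of 125.375.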

Cycles does have a dev direction: it is supposed to be an animation render engine, producing good results over an animation without flicker. This has always been favored over a strictly physically correct render model; despite the fact that Cycles is pretty good at that too, it is geared towards animators. We even got OSL inside it, for example, and a great node system.
There are not that many developers on it, but we’ve got extremely good ones.
And as for the future, they team up with leading industry players: from Disney to NVIDIA, AMD and Vulkan tech, and more.
I think we should just give it time. It’s not the only render engine that develops slowly (but is awesome); the slow pace is just caused by available manpower,
not by any limit in their knowledge of 3D rendering math or their coding skills.

As an animation render engine, a lot of us love its speed and low sample counts, but… it’s more than that.
Using high sample counts and making use of caustics, it can still produce awesome images like the ones earlier in the thread.
But if you know a bit about how rendering works, you understand why such images are so hard.
If you don’t, then get an idea of how it works by watching the explanation Disney made of their render engine, Hyperion.

What does “a very good scanliner” mean? The difference between having indirect light or not can be massive, or it can be subtle (e.g. an exterior lit mostly by a skylight), but for your “average scene” the difference is at least significant; not so for caustics.

The many (as you say) real-world scenes have exactly that: FULL light transport. Nobody will dispute that a full light transport solution DOES look better.

I will actually dispute that: in full light transport there are so many contributions that are so insignificant that they don’t make for a better-looking image (which is of course subjective). Caustics can be one such contribution, but it’s also true for unlikely paths that you terminate instead of tracing them until their remaining contribution is literally zero. In your “average” scene, these contributions just make your image slightly brighter or add splotches of light that you most likely wouldn’t have noticed anyway.
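The early termination of unlikely paths mentioned above is usually done with Russian roulette; here is a minimal Python sketch of the standard technique (illustrative only):

```python
import random

def russian_roulette(throughput, rng, p_min=0.05):
    """Unbiased path termination: randomly kill dim paths instead of tracing
    them down to zero contribution, and boost survivors to compensate.
    Returns the adjusted throughput, or None if the path is terminated."""
    p_continue = max(p_min, min(1.0, throughput))
    if rng.random() >= p_continue:
        return None                     # path terminated early
    return throughput / p_continue      # compensation keeps the estimator unbiased
```

The expected value of the result equals the original throughput, so nothing is lost on average; only variance is traded for render speed.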

When I talk about “real world” scene, I mean something that you would actually render in production, in contrast to something you would render to show how the fancy new algorithm in your paper makes a difference.

The fact that Maxwell Render is NOT used every time (because it’s still slow!) is not an argument that we don’t NEED that precision.

It’s not an argument, it’s evidence that in the majority of cases users don’t need that precision, otherwise they would pay for it (in the form of rendertime). If Cycles implemented BPT+MLT, it wouldn’t magically render the same scenes much faster than Maxwell (or Luxrender).

If Cycles were full MLT/BiDir and ran like a snap, NOBODY would use PT anymore, just like nobody uses the old BI (for PR renderings) or a scanliner or an SL raytracer anymore.

Yes, but that’s obviously not an option!

So the whole “vast majority of scenes” argument is going nowhere, because it assumes by default that the quality we have in that “vast majority” is enough at all. Yes, it is enough for NOW, but only because we are still SO VERY FAR from the truth that there is still a long way to go. To say that normal effects like caustics are not important is like saying a car needs no lights for driving.

You can’t handle the truth!

In all seriousness, you seem to be obsessed with caustics, yet you show no difference images or anything that supports your argument for the “average” scene. Show us the DIFFERENCE. If the difference isn’t significant for the majority of scenes, then it isn’t an important feature to have, since for the minority of scenes we have other renderers!

That, as you said, the “amount of developers” is limited is another topic by itself, and yes, it’s reasonable. But it has nothing to do with what is stated here, which is render precision and the importance of quality!

On the contrary: if caustics were important in general, they would be important enough to focus developer effort on. My argument is that they aren’t, and therefore no effort should be spent on them. If we could have caustics for free, I obviously wouldn’t argue against that. Rendering “precision” (whatever that means) is only important insofar as it makes a visible difference, otherwise why pay for it? And even a visible difference doesn’t mean it is important enough to pay for.

The reason that this “argument” is going nowhere is that neither of us is bothering to support their claim with images, but also that we might have vastly different conceptions about what an “important” contribution to an “average” scene is. I guess we should leave it at that.

I have the feeling that most people don’t understand what caustics are. They only think that caustics are somehow “formed” and summoned into existence when a magic crystal makes them (glass, a gold ring, etc.).

But no, it’s way more complex than that. Caustics are already there when, for example, a simple metal plane reflects light. Whether and how you see them depends on how the light bounces off, which gives totally (naturally) unexpected results compared to PT.

In simple PT you have lights and BSDFs, where each material has a (camera) ray interface. Depending on the rays that hit, the material distributes the pixel values across the samples taken. But this is not a full solution! From diffuse to glossy, there are different calculations and distributions for each material, which are NOT unified. This is why photons are interpreted by different BSDF shader models, and why, for example, caustics are a separate thing. In reality there is no such thing; there are only photons and their interplay. In reality you could say that everything is a reflection and/or absorption. Nothing more, nothing less. Nature doesn’t “see from the camera”.

Crazy reality mode ON (lol):

So even the whole MLT/BiDir stuff is a mess compared to reality! You would need to shoot photons (not rays!) from each light into the scene, let them bounce as waves(!) through the whole scene, interacting with real matter properties and surfaces (EXPAPILLIONUGHAMUGHABILLLIONS of atoms of VOLUME geometry), THEN take whatever photons hit the lens, bend them and capture them virtually in your small, cheap 64-exabyte RAM. :smiley: The requirements for that are insane, and we will only have this in the year 2100, but I want to show how different the whole thing is from our current pitiful technology… lol


Again, just because caustics are “everywhere” doesn’t mean they’re significant. Diffraction is everywhere too, yet we don’t bother to simulate it because it’s usually insignificant.

In simple PT you have lights and BSDFs, where each material has a (camera) ray interface. Depending on the rays that hit, the material distributes the pixel values across the samples taken.

That’s not actually how it works.

But this is not a full solution! From diffuse to glossy, there are different calculations and distributions for each material, which are NOT unified. This is why photons are interpreted by different BSDF shader models, and why, for example, caustics are a separate thing. In reality there is no such thing; there are only photons and their interplay. In reality you could say that everything is a reflection and/or absorption. Nothing more, nothing less.

In a BSDF, there also is only reflection, transmission and/or absorption. There is no difference in “interpretation”, caustics are a natural effect of a reflective BSDF, not a feature! Disabling caustics is a feature.

Nature doesn’t “see from the camera”.

The rendering equation tells you (as a consequence of the conservation of energy) that it doesn’t matter which way you look at it. Whether you do path-tracing from the camera, bidirectional path tracing from both camera and lights, or “photon tracing” from just the lights, for an infinite amount of samples they will all converge to the same result.

For a finite amount of samples however, the probability of certain paths to be found is vastly different, which is why caustics are inefficient in unidirectional pathtracing - they are improbable paths. For the same reason, photon tracing is incredibly inefficient: You’d spend most of your time tracing paths that never hit the tiny camera sensor.
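For reference, this is the standard hemisphere form of the rendering equation being discussed; all of the algorithms above (path tracing, BDPT, light tracing) are just different estimators of the same integral:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Outgoing radiance $L_o$ at point $x$ is the emitted radiance $L_e$ plus all incoming radiance $L_i$, weighted by the BSDF $f_r$ and the cosine term, integrated over the hemisphere $\Omega$ around the surface normal $n$.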

So even the whole MLT/BiDir stuff is a mess compared to reality! You would need to shoot photons (not rays!) from each light into the scene, let them bounce as waves(!) through the whole scene, interacting with real matter properties and surfaces (EXPAPILLIONUGHAMUGHABILLLIONS of atoms of VOLUME geometry), THEN take whatever photons hit the lens, bend them and capture them virtually in your small, cheap 64-exabyte RAM. :smiley: The requirements for that are insane, and we will only have this in the year 2100, but I want to show how different the whole thing is from our current pitiful technology… lol

The nice thing about physics is that the difference between our probabilistic models and actual reality tends to be insignificant for most intents and purposes.

Just a thought, for those who know how render engines work.

A pixel is always an estimation of what is at a certain point in space: a flat surface, an edge. Light gets there as a result of the quasi-random nature of reflections (surface roughness) and of glass-like pass-through materials inside the render engine.

What if…
a render engine also kept a probability map for indirect light,
and did a kind of statistical density analysis on it, to create a smooth result depending on density?
-or-
If caustics turn up (noise detection was in the adaptive-sampling build of Blender), try to follow more rays in that direction,
as there is certainly a ‘collection’ of rays there that found a less usual but working route for the light back to the sun.
In that way, calculate a group of rays, or calculate rays with slight deviations from those caustic rays.
Probably only in the last bounce, as earlier deviations might end up in completely different areas of the scene (see that Disney video).
Thereby derive more info on how to sample it, and create a result without the white pixels, but more like a long, caustics-heavy render.
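That allocation idea can be sketched in Python: measure each pixel's sample variance so far and hand out the extra sample budget where the noise is. This is purely illustrative; real adaptive samplers use considerably smarter error estimates.

```python
import statistics

def allocate_samples(pixel_samples, budget):
    """Toy adaptive sampler: `pixel_samples` is a list of per-pixel raw
    sample lists collected so far. Distribute `budget` extra samples in
    proportion to per-pixel variance, so noisy caustic regions get
    refined first."""
    variances = [statistics.pvariance(s) if len(s) > 1 else 0.0
                 for s in pixel_samples]
    total = sum(variances)
    if total == 0.0:                       # nothing noisy yet: spread evenly
        return [budget // len(pixel_samples)] * len(pixel_samples)
    return [int(budget * v / total) for v in variances]
```

A pixel whose samples all agree gets nothing extra, while a pixel flickering between dim and bright values (a caustic candidate) soaks up the budget.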

You kid, right? A BSDF/BxDF IS a mathematical INTERPRETATION of an ATOMIC surface! It’s not a photon traversing an atomic grid at all. So it IS an INTERPRETATION of nature that we solve with an ABSTRACT RAY PATH. It has NOTHING to do with reality. The simple fact that you have GGX vs AS vs BM, which totally differ in interoperability, is fully interpretive. A BxDF shader is IDEALLY what you say, but there is not ONE implementation to date (show me one!) that has FULL energy exhaustion from the render core to the shader to the rays! Not even Maxwell! Because even Maxwell is NOT fully spectral, it has “only” 12 wavelengths. If we really want to be purists about how things are!

The rendering equation tells you (as a consequence of the conservation of energy) that it doesn’t matter which way you look at it. Whether you do path-tracing from the camera, bidirectional path tracing from both camera and lights, or “photon tracing” from just the lights, for an infinite amount of samples they will all converge to the same result.

For a finite amount of samples however, the probability of certain paths to be found is vastly different, which is why caustics are inefficient in unidirectional pathtracing - they are improbable paths. For the same reason, photon tracing is incredibly inefficient: You’d spend most of your time tracing paths that never hit the tiny camera sensor.

LOL, the rendering equation, really? The rendering equation is only a MATHEMATICAL frame of reference, and, well, you said it yourself: nobody has an INFINITE amount of time. This is WHY it makes the difference. Duh!

And regarding finite calculation: well, improbability does not equal unimportance (maybe to you). Tell that to the scientists, jewelry makers, architects and so on… who use MLT, etc. For them these are VERY probable paths.

But the biggest thing that you miss is that the big companies ARE switching to full ray transport now, from Disney to Weta (read my earlier post), and in the longer term EVERYBODY will, once it’s affordable or a Cycles wonder happens!
It’s faster to set up right, it’s totally clear, pristine! VR, lightfield rendering, spectral HDR + Rec.2020, 3D and many others will not stay with PT; it is simply not enough.

The nice thing about physics is that the difference between our probabilistic models and actual reality tends to be insignificant for most intents and purposes.

FOR NOW! (although it’s changing ALREADY!) and only for the entertainment industry, BTW! But it will not stay that way, and neither should we, or the quality. If you don’t understand that, you ignore the truth: the world is moving forward, and simple PT is getting replaced everywhere, sooner or later.

Sorry to those who don’t think caustic bounces are important, but I have to agree with enilnacs here when it comes to their contribution.

Sure, the effects are a bit more subtle if you don’t intentionally set things up to make them obvious, but what’s being forgotten here is that the difference can indeed be highly noticeable once the subtleties become numerous (collectively, they add up, as you would expect).

You can put it this way: in a scene with a number of noticeably glossy surfaces and several lights, the subtle effects would be everywhere and would also add on to each other (that being the bounced light from every glossy surface from every light). To conclude, Cycles gives you the choice of whether you want caustic bounces in your scene, so if you don’t think they’re important, just turn them off and see whether people notice the omission of lighting details.


On to something different, then (when it comes to advanced sampling techniques that handle complex lighting better than unidirectional path tracing). Fortunately, for those who value the flexibility Cycles gives you in how you set up materials, there are numerous methods (as mentioned by Razorblade) that allow for just that without compromising anything for the artist (i.e. the ability to tweak based on light paths, among other things).


Dude, you need to cool down a bit, writing random words CAPITALIZED and underlined doesn’t make your post more credible.

You kid, right? A BSDF/BxDF IS a mathematical INTERPRETATION of an ATOMIC surface! It’s not a photon traversing an atomic grid at all. So it IS an INTERPRETATION of nature that we solve with an ABSTRACT RAY PATH. It has NOTHING to do with reality. The simple fact that you have GGX vs AS vs BM, which totally differ in interoperability, is fully interpretive. A BxDF shader is IDEALLY what you say, but there is not ONE implementation to date (show me one!) that has FULL energy exhaustion from the render core to the shader to the rays! Not even Maxwell! Because even Maxwell is NOT fully spectral, it has “only” 12 wavelengths. If we really want to be purists about how things are!

You’re mixing up a lot of things here: A BSDF is a function that describes the amount of reflected light from one direction to another one. That’s all it is, nothing going on with atomic surfaces there. It can be either a mathematical function that’s designed to approximate a certain physical surface (such as Lambert diffuse, GGX etc.) or the measured BSDF of a real material (not supported in Cycles, though).
I have no idea what you mean with “FULL energy exhaustion” - in case you mean energy conservation, well, I can indeed show you a few: Lambert diffuse is 100% energy conserving, the new multiscattering GGX BSDF is as well.
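The energy-conservation claim for Lambert diffuse is easy to verify numerically: integrating f_r times the cosine term over the hemisphere must give back exactly the albedo, never more. A small Monte Carlo check in Python (illustrative):

```python
import math
import random

def lambert_brdf(albedo):
    """The Lambert BRDF is just a constant: albedo / pi."""
    return albedo / math.pi

def reflected_fraction(albedo, n=200_000, seed=0):
    """Monte Carlo estimate of the hemispherical reflectance: integrate
    f_r * cos(theta) over the hemisphere using uniform direction sampling
    (pdf = 1 / 2pi). For a Lambert surface the exact answer is `albedo`,
    i.e. never more light out than in."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        cos_theta = rng.random()   # z uniform on [0,1) gives a uniform hemisphere
        total += lambert_brdf(albedo) * cos_theta * (2.0 * math.pi)
    return total / n
```

With albedo 0.8 the estimate converges to 0.8: the surface reflects exactly 80% of the incoming energy and absorbs the rest.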
The next thing you mention is spectral rendering: Yes, that would indeed be useful and, as far as I know, add way more realism than some caustics hack. However, for production, it’s pretty useless since almost no input data (textures etc.) is available in spectral form.

LOL, the rendering equation, really? The rendering equation is only a MATHEMATICAL frame of reference, and, well, you said it yourself: nobody has an INFINITE amount of time. This is WHY it makes the difference. Duh!

And regarding finite calculation: well, improbability does not equal unimportance (maybe to you). Tell that to the scientists, jewelry makers, architects and so on… who use MLT, etc. For them these are VERY probable paths.

But the biggest thing that you miss is that the big companies ARE switching to full ray transport now, from Disney to Weta (read my earlier post), and in the longer term EVERYBODY will, once it’s affordable or a Cycles wonder happens!
It’s faster to set up right, it’s totally clear, pristine! VR, lightfield rendering, spectral HDR + Rec.2020, 3D and many others will not stay with PT; it is simply not enough.

That doesn’t make any sense either. The rendering equation describes light transport between surfaces in a geometric optics setting, if you want volumes as well, it can easily be extended with a radiative transfer model. Note that the rendering equation describes every optical phenomenon in geometric optics - yes, it describes caustics, spectral rendering, stereoscopic rendering (I guess that’s what you mean with 3D), any colorspace you could ever want, every dynamic range you could ever want, every VR model, lightfields, everything!!

Of course, not every renderer actually implements all of that - but the rendering equation itself does describe “full ray transport”.
By the way, what do you mean by “full ray transport”? Path tracing is an unbiased estimator of the rendering equation; it can already simulate everything (in theory - in practice, some effects might require millions of samples). The goal of more advanced light transport algorithms is to solve it faster and with lower sample counts, not to solve it more correctly.

FOR NOW! (although it’s changing ALREADY!) and only for the entertainment industry, BTW! But it will not stay that way, and neither should we, or the quality. If you don’t understand that, you ignore the truth: the world is moving forward, and simple PT is getting replaced everywhere, sooner or later.

The industry is still busy switching towards path tracing, not away from it. Of course it will be replaced one day, but there currently is no better all-round algorithm for production rendering. Just look at all the new renderers being developed: Hyperion - PT only. Glimpse - PT only. Arnold (not new, but the industry standard) - PT only.

Generally: don’t just throw random buzzwords around. Of course, every article will make its own renderer sound 20+ years ahead of the competition, but if you look behind the marketing phrases, Cycles is actually still quite competitive in terms of algorithms.

This is really a pointless argument. Even if everyone were to concede that caustics are the be-all and end-all of amazing lighting effects that no one can do without (they’re not), what then? There’s no solution that would work within the framework of Cycles (or any other renderer) that would solve them any more quickly while keeping the renderer attractive to anyone outside of scientific visualization buffs. Not even archviz people render full animations with bidirectional path tracing (for many of the same reasons that production animation people don’t). It simply isn’t a mature technology.

Path tracing will be replaced some day, but it’s not going to be by bidirectional path tracing in its current form. Path tracing was “invented” in 1986. And it’s a stupid simple algorithm to wrap your head around. We’re just, within the past few years, getting to the point that it’s viable for production on a large scale, and it’s still very much under heavy academic research. Bidirectional path tracing was “invented” in the mid-to-late 90s, is incredibly complicated, and remained mostly unchanged and unoptimized until very recently. It is under light academic research (mostly because so few understand it well enough to even start to iterate on Veach’s original thesis). It is rigid and scientific.

There are plenty of bidirectional options already out there (even for Blender!). I see no immediate reason to implement a feature that is against the core use-case of Cycles when this is the case. There is no “do everything” renderer, and trying to shoehorn Cycles into that non-existent role isn’t going to make it become a reality, especially with the very limited development muscle behind it. Want caustics right now? Prepare for long render times with Cycles or use another render engine (and still prepare for long render times because bidirectional isn’t some magic bullet). Want fast, clean caustics with all of the flexibility and benefits of path tracing? Welcome to rendering, that’s been the goal since the 70s.

After two pages of posts and kilometers of text (thank you, logorrheic fighters), I’ve realized that this thread is only about caustics.
I feel so off topic.

#ForeverOffTopic


he! he!
I think it’s because caustics are the biggest thing that BiDir adds to plain path tracing. And since BiDir in Cycles is unlikely to happen, people (me included) started ranting about caustics.

I mean, aside from caustics and some corner-case indirect lighting situations, not a lot of value is added by vanilla BiDir, especially considering the tradeoffs that come with it. Caustics are the natural focus of attention in a thread like this.
