Exactly my point also. Over and out.
Not only that, but bidir is much more realistic no matter how you look at it. You get realistic lighting easily, with no need to do anything special. Of course it's slower, and it's often harder to avoid noise. But you only have to try similar scenes with path tracing and bidir to see the difference for yourself. I also believe it's unlikely that Cycles will get bidir any time soon, so let's just relax. For realistic rendering we have LuxRender.
There's nothing inherently more realistic about bidirectional path tracing. By definition it will converge to exactly the same result as forward path tracing, given enough samples. The only thing that might make it more realistic is if it uses a spectral color space, and in that case there's nothing to prevent you from doing the same in a forward path tracer; it's just slightly slower than RGB calculations.
Exactly. The only "more realistic" thing it does is make the noise levels more uniform across the image, because it converges difficult light paths more quickly. But given enough time, all unbiased algorithms produce exactly the same image. That's the whole point of the term "unbiased".
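To make "given enough time, the same result" concrete with something far simpler than a renderer: here's a toy unbiased Monte Carlo estimator of a simple integral. The integrand and sampling strategy are made up for illustration, but the principle is the same one renderers rely on.

```python
import random

def mc_estimate(n, seed=0):
    """Unbiased Monte Carlo estimate of the integral of x^2 over [0, 1].

    The true value is 1/3. Any unbiased estimator of this integral
    converges to the same number as n grows, no matter how cleverly
    (or naively) the samples are drawn.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.random()
        total += x * x  # f(x), sampled with uniform pdf = 1
    return total / n

# More samples -> closer to 1/3, just like longer renders -> same image.
print(mc_estimate(100))        # rough
print(mc_estimate(1_000_000))  # close to 1/3
```

The difference between algorithms is only *how fast* the noise shrinks, never *where* they end up.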
Plain bidirectional path tracing without MLT will clean up simple 2-3 bounce caustics, the ones that unidirectional path tracing turns into fireflies, but for more complex paths, like light going through two liquid-filled bottles in a row, you need bidir with MLT. It's just not possible with plain bidir in any reasonable amount of time, because bidir, like unidirectional tracing, shoots rays in near-random directions (Sobol sampler) and is unaware of any nearby bright light paths.
MLT is specifically designed to keep exploring these unlikely paths once it finds just one of them. If you read Eric Veach's famous paper on it, the algorithm is one of the most brilliant things I've ever seen.
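The "keep exploring once you've found one" idea is just the Metropolis algorithm. Here's a toy 1D Metropolis sampler, my own illustration of the principle and not renderer code; in real MLT the samples are whole light paths and `f` is their image contribution:

```python
import random

def metropolis_samples(f, n, step=0.05, seed=1):
    """Toy 1D Metropolis sampler (an illustration of the idea, not MLT itself).

    f is an unnormalized 'brightness' over [0, 1]. Once the chain lands in
    a region where f is large, small mutations keep exploring nearby
    high-contribution samples instead of starting over at random.
    """
    rng = random.Random(seed)
    x = rng.random()
    fx = f(x)
    out = []
    for _ in range(n):
        # Propose a small mutation of the current sample, clamped to [0, 1].
        y = min(1.0, max(0.0, x + rng.uniform(-step, step)))
        fy = f(y)
        # Metropolis rule: accept with probability min(1, f(y) / f(x)).
        if fx == 0.0 or rng.random() < fy / fx:
            x, fx = y, fy
        out.append(x)
    return out

# A narrow 'caustic': only 2% of the domain is bright.
bright = lambda x: 1.0 if 0.70 <= x <= 0.72 else 0.01
xs = metropolis_samples(bright, 20_000)
inside = sum(0.70 <= x <= 0.72 for x in xs) / len(xs)
print(inside)  # typically much more than the 0.02 a uniform sampler would give
```

The sampler spends its effort roughly in proportion to brightness, which is exactly why one discovered caustic path leads MLT to many more.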
If Cycles wants to compete in this unbiased business, it would be a sin to have bidir without MLT; that's just the standard these days. The more advanced stuff is gradient-domain path tracing, VCM, and other recent algorithms that you can try out in Mitsuba. But Mitsuba isn't production-oriented.
I vote for Bidir+MLT in Cycles, especially because of the type of work I do. But I'm sure the devs are already sick of hearing people ask for shit, especially something as insanely difficult to implement as this. It's practically a complete rewrite: all these years of work on the unidirectional integrator would be set aside, and they'd basically be starting from scratch. I'm sure they're doing all in their power to make Cycles awesome; it's just hard as hell to write a solid, production-oriented renderer.
If you still lust for some free caustics, go with the mighty LuxRender. It's my renderer of choice, and it's just plain amazing.
Cheers.
It is not. See "Five Common Misconceptions about Bias in Light Transport Simulation".
I vote for Bidir+MLT in Cycles, especially because of the type of work I do. But I'm sure devs are already sick of hearing people ask for shit, especially something as difficult to implement as this. I'm sure they're doing all in their power to make Cycles awesome.
Well, nobody is holding anyone back from contributing code. Here is source code for bidirectional path tracing, here is source code for Metropolis light transport. Feel free to port it to Cycles.
That was an excellent paper, thanks for sharing the link. I hadn't seen it before.
By "unbiased", I meant that in a purely mathematical, theoretical sense: given an infinite amount of time, "unbiased" integrators will eventually trace all possible light paths in the scene without introducing bias and arrive at a single, fully converged solution. In mathematical terms, the term "unbiased" is only valid if we grant these integrators an infinite amount of time.
Keep in mind that this is purely in a mathematical sense. In practice, we will never arrive at a completely converged solution, because there is an infinite number of light paths to trace, and obviously the render would never end. So yes, the paper's authors are right: an integrator can be unbiased and inconsistent.
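A textbook way to see "unbiased and inconsistent", with plain numbers instead of light paths (my own toy illustration, not an example from the paper): an estimator that throws away everything after its first sample.

```python
import random

def first_sample_only(samples):
    """Unbiased but inconsistent: ignores every sample after the first.

    Its expected value equals the true mean (zero bias), but its variance
    never shrinks as more samples arrive, so it never converges.
    """
    return samples[0]

def sample_mean(samples):
    """Unbiased AND consistent: the error shrinks as samples accumulate."""
    return sum(samples) / len(samples)

rng = random.Random(0)
draws = [rng.uniform(0.0, 1.0) for _ in range(100_000)]  # true mean is 0.5
print(first_sample_only(draws))  # whatever the first draw was; more samples never help
print(sample_mean(draws))        # close to 0.5, and getting closer with n
```

Both are unbiased; only the second one actually converges. That's the distinction the paper is drawing.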
But at this point we're just debating the meaning of the word "unbiased", and that's not very useful.
What do you think?
What about unidirectional path tracing + MLT? Lukas already made an attempt, so I guess it's compatible with the light path stuff, but it would also offer the MLT advantage for difficult paths. Might this be the missing link?
If you're looking for complex caustics, in my experience it's not really worth the effort. It can work better than plain unidirectional PT in certain very specific situations, for example where the fireflies are few and concentrated in a single area, like simple refractive caustics from a large area light, but even then it's questionable how pretty the result is.
Take a look at this comparison of PT, BDPT, and both with MLT: http://indigorenderer.com/documentation/manual/rendering-with-indigo/render-mode-guide
Effectively, it just turns the fireflies into MLT's infamous splotches: where plain PT would render a one-sample firefly, MLT mutates that light path to find more nearby bright paths and makes a big splotch. The same happens in real scenes, not just in Cornell boxes.
The problem is that some caustic light paths are complex (like the two-liquid-filled-bottles example I mentioned above), and tracing rays from the camera through these complex paths is very inefficient. Most samples never reach a bright light source, so they end up too dark. Those that do hit the light source get mutated for a while, produce a splotch, and eventually, after a preset number of consecutive rejections, a new ray is shot in a different area, which explains the behavior of PT+MLT shown in the images at that link.
If you compare plain bidir to bidir with MLT, you'll see that plain bidir already has very few fireflies, and MLT smooths those out with ease.
Now, I know that for production rendering and VFX all this caustics stuff is nearly irrelevant, but if you're doing pure CG from scratch, especially interiors, it starts to make a significant difference.
I don't expect Cycles to ever have any of this, because it's just not what Cycles is for in the first place. But I am saying there are people who want it and hope it comes to Cycles. Those people should look elsewhere: at LuxRender, or commercial renderers like Maxwell.
Go play with LuxRender and its new, blazing fast LuxCore API. It needs just a bit of a push and it will be an extremely capable competitor to commercial renderers. The only big thing it's missing there is spectral rendering, but that's fully supported in the Classic API and works great. Bidir with Vertex Connection and Merging + MLT absolutely crushes the extremely caustic-heavy project I'm working on now.
There, I've pretty much said all I had to say in these few posts on this topic. Now forget about all this technical stuff and go make some art!
Cheers, David.
See section 3.2 of the linked article. There are unbiased estimators that, regardless of the number of samples, never converge to the correct result. The mathematical property that a statistical estimator converges to the correct result is called "consistency".
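In symbols, these are the standard statistics definitions (my sketch, with $\hat{I}_n$ the estimate after $n$ samples and $I$ the true value):

```latex
% Unbiased: the expected value is correct at every sample count n.
\mathbb{E}\big[\hat{I}_n\big] = I \qquad \text{for every } n

% Consistent: the estimate converges in probability to the true value.
\lim_{n \to \infty} \Pr\!\big( |\hat{I}_n - I| > \varepsilon \big) = 0 \qquad \text{for every } \varepsilon > 0
```

An estimator can satisfy the first line without the second, which is exactly the "unbiased but inconsistent" case the article describes.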
I think MLT works a bit differently than sketched above: it tries to find contributing light paths and assumes that nearby light paths might resolve in a similar direction, so it tries those out (instead of completely random paths). In my opinion the math behind MLT is a lot like neural networks (strong connections get reinforced). And yes, Monte Carlo calculations have applications in neural networks too…
By the way, there are also ways of having a neural network polish a render, but that's not the topic here.
On topic, however: here is, I think, a good image showing what MLT can do, because the previous image doesn't tell the whole story.
That developer thread started with this image.
And I think that's amazing.
However, to get there, MLT uses more complex math, so it comes at a cost in GPU memory and render speed.
It wasn't bug-free (though I think it was about 90% working). It showed one of the areas Cycles might develop into, or not.
I think it was decided to cancel further development in this direction; currently they seem to focus on optimizing and improving the current engine(s), which also better supports future updates (like the GSoC denoising proposal) and other things.
It's better to first write a great framework that allows future expansion than to tie yourself to something that disallows easy upgrades later. That's also a reason there is such strong code control on Blender (coding styles etc.): if you end up with spaghetti code, there's rarely a good way back to clean code. Developers are of course allowed to hack around, not following the rules, to investigate the "what if". And if something does work well, the questions become: would it work within this code framework, would it be too big a change or break other things, is it really the direction to go, or should we plan it for later and resolve other parts first so the new experiment works even better?
I still think that a dedicated method just for caustics could fill the gap and make everybody happy: something that can be enabled and sit on top of the current path tracing. It could be photon mapping or something else, even set by hand for each lamp that needs to produce caustics (cast photons). Maybe even make it an AOV pass.
It's useless to repeat that caustics aren't needed and belong to a research niche. They are actually needed in some fields, even a mainstream field like archviz.
And when they begin to show up here and there in movies, because Solid Angle or some other big player implemented them, everybody in animation will start asking for caustics. I have a feeling this will happen sooner or later. It happened for GI, fur, SSS, volumetrics and so on. It's technology moving forward. Even the untrained eye of non-CG people is evolving, so what was a good render 10-15 years ago looks a bit uncanny now.
+1 for photon mapping for caustics. Redshift does that, and it's fast on the GPU.
Nobody actually said that. The argument goes like this: efficient caustics (especially BPT+MLT, the topic of this thread) aren't worth the tradeoff for Cycles. If you need them, you can use another renderer. Why is archviz supposed to be a priority for Cycles?
I could see the point if there weren't a free renderer that solves the problem, but LuxRender has a great plugin, supports tons of fancy algorithms, and is competitive in terms of performance.
And also, when they begin to show up here and there in movies, because Solid Angle or some other big player implemented them, everybody in animation will start asking for caustics.
Marcos (of Solid Angle) is on record saying that they don't work on better caustics because their customers aren't asking for it. Their customers obviously know about caustics, yet they value Arnold over renderers like V-Ray, which do support photon mapping but then have artifacts, or renderers like Maxwell, which don't have artifacts but lack flexibility.
Again, it's all about tradeoffs. There's no magic solution that satisfies everyone. With the flexibility you get from something like OSL (which doesn't work with BPT), you can create the caustic-like effect you need for your shot. With BPT+MLT you're limited to something physically plausible, and you may not actually be able to create the effect you need.
Of course, people who can't even be bothered to use another renderer can't be expected to write OSL scripts (or use OSL at all). There is no solution for those people.
Lukas Stockner's old unidirectional MLT patch more or less supported all of Cycles' shading features at the time (because it was largely an extension of what already existed and had no bidirectional component).
I think MLT is doable in Cycles without compromising any of the shading flexibility, as long as it's unidirectional; the tricky part is making it work with individual render tiles while keeping it robust and easy to use.
@isscp
The thing with caustics is that they are not "wrong" or the result of "bad" rendering math, and they are not a feature to be "enabled/disabled".
They are a result of light bending when a ray bounces off or through something.
Depending on the surface, a lot can happen; think of a metallic diffuse/glossy BSDF with roughness, as below.
Specular rendering is relatively "easy", but for non-flat surfaces a render engine has to randomly pick a direction: a random divergence from the input angle.
Glass is even more complex (leaving out absorption), but specular glass behaves like this:
And diffuse glass would be a combination of the above, plus a random path variance.
You see, the problem gets pretty complicated now.
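The "random divergence" part can be sketched in a couple of lines. This is a deliberately crude 2D toy (angles instead of vectors, a Gaussian jitter standing in for a real microfacet model), just to show why roughness turns one predictable ray into many:

```python
import random

def sample_rough_reflection(incident_angle, roughness, rng):
    """Toy 2D 'rough mirror' sample (a sketch of the idea, not a real BSDF).

    Angles are measured from the surface normal. roughness == 0 gives the
    single perfect specular direction; larger roughness adds a random
    divergence around it, which is why rough or bumpy glass scatters
    caustic-forming rays in many directions.
    """
    mirror = -incident_angle            # perfect mirror reflection in 2D
    jitter = rng.gauss(0.0, roughness)  # random divergence from roughness
    return mirror + jitter

rng = random.Random(2)
specular = [sample_rough_reflection(0.5, 0.0, rng) for _ in range(5)]
rough = [sample_rough_reflection(0.5, 0.3, rng) for _ in range(5)]
print(specular)  # five identical directions: easy and predictable
print(rough)     # five different directions: each needs separate tracing
```

With roughness, every sample goes somewhere slightly different, so the engine can only learn where the light actually ends up by averaging lots of them.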
Remember the image earlier in this thread:
Only after enough rays have passed through this rough, diffuse, bumpy glass can a render engine balance out their influence compared to the other rays hitting the table. If you don't render long enough, it can't balance out, so we get spikes: some table pixels got extra light because a small group of randomly diverging rays happened to hit them.
Suppressing those spikes can be done with image filters or in post-production, but in essence that reduces the quality of the image.
Not all images need such a high-end result (a cartoon-like animation could probably use a clamp setting, or filter out spikes with a salt-and-pepper noise filter or a Gaussian filter, like the GSoC project).
As for the experimental MLT branch: every render engine has to solve all possible paths, but in the end, if rays don't end up being visible, calculating them is a waste of time. The random divergence of rays bouncing or refracting produces a lot of rays that never become visible (and in the other rendering methods, only random trials reveal whether a path turns out to be useful or not).
With MLT there is still a random factor. But let's say a specific ray bounced off the glass at 75 degrees and hit the floor; MLT notices this was resolved as a visible ray, and it will try some more rays roughly parallel to it, in around the same area, to see if they also hit the table. It wouldn't fire those parallel rays through the glass with fresh random divergence; it reuses the past solution, because parallel rays might resolve that way too.
Chances are higher that those rays will also hit the table (though possibly not all of them).
It still has to try multiple rays in roughly the same area under different angles (because roughness adds something to the 75 degrees), but if 75.5 degrees works too, it will try that again with parallel rays. The bookkeeping to track all this is much more complex than just firing random rays.
However, in some scenarios MLT does a lot better, e.g. indoors with light behind a half-open door, than engines that just fire rays randomly from the light behind the door (uni- or bidirectional path tracers). MLT comes to "understand" that a light is behind the door, and a lot of rays will pass through the half-open door in a certain direction, where other methods only get there randomly and eventually. The cost is heavier calculations requiring more time and more GPU memory.
It's a little bit like the light portals now in master: portals for a window tell the render engine that a lot of rays will pass there with a common direction. This helps the solving a lot because, as the earlier images I posted show, rays take a lot of random bounces in a scene; light portals tell the render engine it can trust your wisdom and take it into account for recognizing openings.
And yes, techniques for optimally solving rays (and leaving out the rays that don't produce a result) constantly evolve through progress in math, progress in GPU power, and smart coders. Blender is blessed with very skillful people in this area; maybe not many coders, so the evolution of the code isn't rapid, but hey, we got a node-based rendering system with amazing quality, and it still evolves.
They work together with major industry players; some of them work on Blender, some donate open source code, while others provide funding as well.
If you want to see the improvements, follow the GitHub logs and see how much work is going on in Blender every day.
You read what's going on first hand there, though it's not a blog like BA.
By the way, personally I've always wondered whether adaptive sampling, if properly used, could be a solution to caustics.
Granted, it doesn't do anything to limit them, but it would keep rendering the noisy areas (spikes are noise),
while it wouldn't keep rendering the areas of the table that it concludes are (nearly) noise-free.
The problem is that I think it requires too much tweaking to get it to behave like that.
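The idea can be sketched in a short loop. This is my own toy model of adaptive sampling, not Cycles code: each "pixel" keeps receiving samples until the estimated variance of its mean drops below a threshold, so the firefly-prone pixel soaks up most of the budget.

```python
import random

def adaptive_render(pixels, budget, threshold=0.01, batch=64, seed=4):
    """Toy adaptive sampling loop (a sketch of the idea, not Cycles code).

    Each 'pixel' is a function returning one noisy sample. A pixel keeps
    receiving samples while the estimated variance of its mean is above
    threshold; converged pixels are skipped, so effort concentrates on
    the noisy (spiky) areas.
    """
    rng = random.Random(seed)
    stats = [[0.0, 0.0, 0] for _ in pixels]  # [sum, sum of squares, count]
    spent = 0
    while spent < budget:
        progressed = False
        for i, sample in enumerate(pixels):
            s, s2, n = stats[i]
            if n >= batch:
                mean = s / n
                var_of_mean = max(s2 / n - mean * mean, 0.0) / n
                if var_of_mean < threshold:
                    continue  # this pixel looks converged; skip it
            x = sample(rng)
            stats[i] = [s + x, s2 + x * x, n + 1]
            spent += 1
            progressed = True
        if not progressed:
            break  # everything converged before the budget ran out
    return [(s / n if n else 0.0, n) for s, s2, n in stats]

clean = lambda rng: 0.5 + rng.uniform(-0.01, 0.01)       # smooth, easy area
spiky = lambda rng: 20.0 if rng.random() < 0.2 else 0.1  # firefly-prone area
results = adaptive_render([clean, spiky], budget=50_000)
print(results)  # the spiky pixel ends up with far more samples
```

The tweaking problem shows up right here: the threshold, batch size, and variance estimator all have to be tuned per scene, or the sampler either stops too early on spiky pixels or never stops at all.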
Well, the coders have tasted all these things. I'm pretty sure that in Blender's future new tricks will get in and old ones will be combined, providing even faster and better render results.
That's MLT explained to the best of my understanding; I followed the news and the GitHub code for quite a while, as I also did with adaptive sampling.
How code is developed for Blender really interests me. While I don't code for Blender myself (maybe when I'm 65 or so and have the time), I think what's happening in this open source community is really awesome.
In practice, it doesn't really make things any faster with forward path tracing; it just spreads the noise over more places. Full convergence still takes a very long time; you just get to see caustics earlier, at the cost of noisier easy-to-find paths.
In my experience with the patch, lowering the Max Rejects value as far as possible without introducing bias goes a long way toward improving convergence for the easy paths, while still giving a big performance leap with caustics.
If you use the Sobol sampler, that value can be lowered pretty far while keeping things perfectly unbiased (I used a value of 128 with Sobol in a very complex scene, and the convergence rate was far faster than with the default of 512).
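As I understand the mechanism (this is my assumption about what Max Rejects does, sketched as a toy 1D chain, not the actual patch code): after a run of consecutive rejected mutations, the chain abandons its current path and restarts from an independent random sample, so it cannot camp on one bright path forever. The thresholds below are toy numbers chosen so the effect shows up in a tiny example, not the patch's 128/512.

```python
import random

def mlt_with_restarts(f, n, max_rejects, step=0.02, seed=5):
    """Toy 1D Metropolis chain with a 'max rejects' restart.

    After max_rejects consecutive rejected mutations, the chain restarts
    from an independent random sample instead of staying parked on its
    current (bright) sample. Lower thresholds mean more restarts and
    broader exploration, at the risk of leaving bright regions early.
    """
    rng = random.Random(seed)
    x = rng.random()
    fx = f(x)
    rejects = 0
    restarts = 0
    out = []
    for _ in range(n):
        y = min(1.0, max(0.0, x + rng.uniform(-step, step)))
        fy = f(y)
        if fx == 0.0 or rng.random() < fy / fx:  # Metropolis accept rule
            x, fx, rejects = y, fy, 0
        else:
            rejects += 1
            if rejects >= max_rejects:
                x = rng.random()  # give up on this path: independent restart
                fx, rejects = f(x), 0
                restarts += 1
    return out, restarts

bright = lambda x: 1.0 if 0.70 <= x <= 0.72 else 0.01  # a narrow 'caustic'
_, eager = mlt_with_restarts(bright, 20_000, max_rejects=4)
_, patient = mlt_with_restarts(bright, 20_000, max_rejects=512)
print(eager, patient)  # the lower threshold restarts far more often
```

In the real integrator the restart point is where bias can sneak in if the restart isn't accounted for correctly, which would match the observation that the value can only be lowered "as far as possible without introducing bias".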
What I wonder is: how did Octane implement even spectral bidir so well on the GPU, and would it be possible to port/write that for Cycles, or are those algorithms patented?
I ask this because, let's not forget, a lot of Cycles is also Brecht (now on the Octane team!), so the natural question is: could we expand on that in the Octane direction? It's spectral, it's fast, it's bidir, etc. Because what I get out of it is simply amazing (including perfect caustics).
Does anyone have a clue about that? Would it be a complete rewrite?
Last I read, Octane largely limits you to what is physically correct when creating shaders. It doesn't give you as much ability to use creative shading techniques for NPR or for working around existing limitations. The NPR that Octane 3 seems to have, for instance, looks like it might be more of a post-process thing.
The thing with bidirectional path tracing is that it would largely limit your shading to what matches the real world, with little room for shortcuts or tricks (as mentioned before, if the current algorithms were used). I for one like the fact that you're not limited to simulating a photograph with Cycles (if I want to make something more artistic in nature, I can).
I do wonder, though, just how much of Cycles' shading power people are willing to sacrifice just to have better caustics. It's a lot like the people who say they will abandon their fully-featured, production-ready render solutions for Redshift just because it brings biased rendering to the GPU (even though its feature list is nowhere near as long).
It's possible, but I'm speaking on a practical level when comparing LuxRender's bidir mode to Cycles. They look different, with LuxRender being more realistic. I wish Cycles had more realistic rendering modes; it would save me the trouble of installing LuxRender.