Specular Manifold Sampling Caustics ON GPU!

That may be true - but which features get developed, and at what stage, is not only a function of how useful they are, but also of how complex the problem is, the experience and expertise of the available devs (from both a development and a review standpoint), and how easily the new feature can be integrated into Blender/Cycles.

Arguably many of the things you have listed are far more important than other features already developed for Cycles, but Blender development isn’t driven simply by a list of features ranked from most to least important/useful - not least because a list of what is important would depend very much on who you ask (as this topic clearly demonstrates).

For the record, I too think there are some important features Cycles is missing that should probably take precedence over caustics. Glints, micro-roughness, and procedural texture updates are some things I would personally like to see at some point in the future.

This is something I also found that would be very interesting:
PPM Hybrid Kernel (watch from about 1:20:00).
But of course I have the feeling we are dealing with photon caching stuff again.

Well… that’s how the rest of the world™ defined it at the time the industry was born. Time is money, and fixing f***-ups produced by the render engine cost lots of money back then, so the term “production render engine” was born, defined as a render engine that produces reliable results with zero unpredictable behaviour (a pipe dream at the time) and a focus on animated sequences (you don’t make millions of $$$ on a single still image, unless you are a philatelist, do you?). RenderMan (1993) was one of the first of this class. Now it is old tech and has been replaced by Arnold.

Or use an engine that isn’t focused on production rendering. There’s always LuxRender/LuxCore, which already does this and far more. (And its devs are light years more inclined to use and abuse experimental features than the Blender devs are, BTW.)

Then again, I’m not saying it should not be implemented (if possible). After all, Cycles is a unidirectional path tracer at best, so a solution does exist. The only problem is that it is painfully slow, and in the past various people have tried to implement several solutions to this problem. No success so far, but who knows? The thing is that probably none of the capable devs want to tackle this (too much effort for too little return), or they are just too busy fixing the weakest areas of Blender nowadays, so the best bet is an external contributor :slight_smile:

This reminds me:

Two “easy” and production-reliable techniques (path tracing and photon mapping) combined.

When I asked on BDT, the reply from Brecht himself was along the lines of: how do you combine them? E.g. how do you feed the light you get from photon mapping back into path tracing, and vice versa?

I don’t know but maybe at Otoy they found some kind of solution?
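For what it’s worth, one textbook-style answer (not necessarily what Otoy does) is a “final gather”: trace the camera path as usual, but at the first diffuse vertex stop bouncing and instead take a density estimate from a photon map built in a separate light-tracing pass. Here is a minimal, self-contained Python sketch of just that density-estimation step; all the data and numbers are made up, and a real implementation would use a kd-tree and BRDF weighting rather than a brute-force search:

```python
import numpy as np

# Stand-in photon map: photons traced from the lights, stored as (position, power).
rng = np.random.default_rng(0)
photon_positions = rng.uniform(-1.0, 1.0, size=(100_000, 3))  # fake photon hit points
photon_power = np.full(100_000, 1.0 / 100_000)                # total flux normalised to 1

def estimate_irradiance(x, radius=0.05):
    """Estimate irradiance at point x as (power of nearby photons) / (search-disc area)."""
    d2 = np.sum((photon_positions - x) ** 2, axis=1)
    near = d2 < radius * radius
    return photon_power[near].sum() / (np.pi * radius * radius)

# The path tracer would call this at the first diffuse hit instead of bouncing further.
print(estimate_irradiance(np.array([0.0, 0.0, 0.0])))
```

That, very roughly, is how the light computed by photon tracing gets fed back into path tracing; the hard part is doing it consistently (e.g. progressively shrinking the radius, as in PPM) and without bias or flicker in animation.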

Cycles-X project presentation mentions “path guiding”.

Could that help improve caustics?

Sure. According to the papers, path guiding lets a unidirectional path tracer find and explore difficult light paths, and caustics are often mentioned as a target. By now there are many path-guiding algorithms, each solving shortcomings of the previous ones (a toy sketch of the core idea is below). It’s not clear to me whether path guiding is already implemented or not (I guess not); anyway, I’m sure the devs will choose the best option available.
And I’m very curious and happy about how the new architecture will make it easier to experiment with new algorithms and techniques.
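Purely as an illustration of the core idea (this is not Cycles code; every name here, including the toy contribution() function, is made up): path guiding keeps a learned distribution over directions, built from samples that turned out to carry a lot of energy, and draws new directions from a mixture of normal BSDF sampling and that learned distribution, weighting each sample by the mixture pdf so the estimate stays unbiased. A tiny 1-D sketch:

```python
import math
import random

NUM_BINS = 16
guide = [1.0] * NUM_BINS  # learned histogram over directions, starts out uniform

def contribution(theta):
    # Toy stand-in for "light only arrives through a narrow specular corridor".
    return 100.0 if 0.70 < theta < 0.75 else 0.01

def bin_of(theta):
    return min(int(theta / (math.pi / 2) * NUM_BINS), NUM_BINS - 1)

def sample_direction(mix=0.5):
    """Draw a direction from a BSDF/guide mixture and return it with its mixture pdf."""
    if random.random() < mix:
        theta = random.uniform(0.0, math.pi / 2)               # plain BSDF sampling (uniform proxy)
    else:
        b = random.choices(range(NUM_BINS), weights=guide)[0]  # guided sampling: pick a learned bin
        theta = (b + random.random()) * (math.pi / 2) / NUM_BINS
    guide_pdf = guide[bin_of(theta)] / sum(guide) * NUM_BINS / (math.pi / 2)
    pdf = mix * (2.0 / math.pi) + (1.0 - mix) * guide_pdf      # pdf of the mixture
    return theta, pdf

estimate, N = 0.0, 200_000
for _ in range(N):
    theta, pdf = sample_direction()
    f = contribution(theta)
    estimate += f / pdf / N        # each sample weighted by the pdf it was drawn with
    guide[bin_of(theta)] += f      # "learn": bins that paid off get sampled more often

print(estimate)  # approaches the true integral (~5.02) with far less noise than uniform sampling
```

Real implementations (e.g. the “Practical Path Guiding” SD-tree work) learn such distributions per region of space with proper directional data structures, but the mixture-plus-learning loop above is the essence.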

Please don’t use that thread on the devtalk forum to express your wishes, or post there if you are not a developer (Brecht is having a conversation with another developer there).

Well, that is not me; my username everywhere is enilnacs or PetroneFlavius. I’m not Tutul :smiley:

OK, I’m definitely not a light transport researcher or a programmer, so I don’t really grasp the technical details, but I’m going to play devil’s advocate and say that, from all the examples in that paper, it looks as if their method produces the same caustics as a unidirectional path tracer, only clean, as if denoised. Which is great: if that means faster, cleaner caustics in Blender, then congratulations, it would be an excellent upgrade to Cycles. But I’ve played with other rendering engines like Mitsuba, and with different integrators such as bidirectional (light paths traced from both the camera and the light source?) you can get more caustics, more easily; there are other methods such as photon mapping that also provide way better caustics and more of them; and then there’s spectral rendering, which can simulate prisms breaking white light up into its spectrum.
So I would say this method does not solve all caustics for Cycles once and for all; it’s just another step towards a more perfect light transport simulation.

Oh yeah, and I forgot to add: of course with Cycles X they are planning to introduce path guiding, so I guess more compute power will be focused on important shaders and areas of the scene, such as those that are going to produce caustics, and combining that with OIDN would clean them up; but whether it stays stable in animation is yet to be seen.

Well, no. Manifold sampling is not just denoised caustics.
Secondly, the caustics generated by Cycles can’t be sharp, because blind BSDF sampling essentially never finds the exact specular chains that carry them; resolving them that way would take an effectively infinite search for those rays.
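To make the “infinite search” point a bit more concrete (my loose paraphrase of the manifold-walk idea, not a description of any Blender code): a sharp caustic arrives through a chain of specular vertices that has to satisfy an exact reflection/refraction condition at every vertex, roughly

```latex
% For specular vertices x_1..x_k between the shading point x_0 and the light x_{k+1},
% perfect reflection/refraction means the generalized half-vector h_i at each vertex
% is parallel to the surface normal n(x_i), i.e. its tangential part vanishes:
\[
  c_i(x_1,\dots,x_k) \;=\; h_i - \big(h_i \cdot n(x_i)\big)\, n(x_i) \;=\; 0,
  \qquad i = 1,\dots,k .
\]
```

Blind BSDF sampling essentially never lands exactly on such a chain, which is why a unidirectional tracer only ever gives blurry, noisy caustics, while SMS starts from a rough guess and iteratively solves these constraints (Newton-style) to land on the chain directly.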

Yes, I am happy about Cycles X; it will make it easier to test different approaches, but the problem of full light transport is still not (properly) solved, not even by Disney & Co. Even bidirectional VCM has issues.

To be honest, IMHO, I see only 2 REAL solutions for full spectral unbiased light transport:

  1. Either a hardware solution, where a big player like Nvidia or AMD implements the full spectral MLT equation in hardware so we really get speed, or (more unlikely)…
  2. Some genius invents a GPU-level kernel that can run spectral MLT at interactive framerates. For that we would need a TRUE GENIUS; this is no small feat, and maybe even impossible given the challenge at hand.

So, yes, manifold sampling caustics are NOT the ideal solution for MC path tracing, but they are a really good approach. Ideally we would need something completely different in a renderer.
Maybe spectral MLT is not the best approach either… maybe there are other ideas that could achieve unbiased interactivity with caustics… who knows…
I want to add that ideally a renderer would CALCULATE the rays in such a manner that it even simulates the bloom and streaks of a camera lens through the rays themselves. You could then load different lenses, with their internal lens groups, and get natural glass effects that look perfect without post (see the small single-surface sketch below).
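As a toy illustration of what tracing through the lens groups themselves means (a single glass surface with completely made-up numbers, nothing like a real lens prescription): refract the camera ray at each spherical element with Snell’s law, and effects like distortion and vignetting fall out of the geometry rather than a post filter (flare and streaks additionally need the reflections between elements and aperture diffraction).

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at unit normal n (facing the incoming ray); eta = n_in / n_out.
    Returns None on total internal reflection."""
    cos_i = -np.dot(d, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

def hit_sphere(origin, d, center, radius):
    """Nearest positive ray/sphere intersection distance, or None."""
    oc = origin - center
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return None
    for t in (-b - np.sqrt(disc), -b + np.sqrt(disc)):
        if t > 1e-6:
            return t
    return None

# One spherical air-to-glass surface on the optical axis (all values invented).
origin = np.array([0.0, 2.0, -100.0])          # ray starting off-axis, travelling along +z
d = np.array([0.0, 0.0, 1.0])
center, radius = np.array([0.0, 0.0, -45.0]), 50.0

t = hit_sphere(origin, d, center, radius)
p = origin + t * d
n = (p - center) / radius
n = -n if np.dot(n, d) > 0.0 else n            # make the normal face the incoming ray
d_out = refract(d, n, 1.0 / 1.5)               # air (1.0) into glass (1.5)
print("hit point:", p, "bent direction:", d_out)
```

A real-lens camera model just chains many such surfaces (plus an aperture stop) per camera ray, which is roughly what the realistic-camera models in the literature do instead of adding bloom as a 2-D post effect.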

Octane does that, for example, but as a post filter, and that is not how a camera and lens behave naturally. So there are still a LOT of features missing from an ideal renderer. It just happens that today’s “unbiased” renderers are “good enough” with a lot of tricks.

We are still VERY far from the ideal renderer. And to be honest, I would want to see a programmable RPU (Render Processing Unit) that implements the full calculus. This still needs to be invented. :smiley:

What about AI and machine learning? Can’t AI already dream up fairly photo-realistic stuff based upon being fed millions of real world images? AI could totally cheat and fake all the caustics and spectral stuff at a fraction of the compute power…

We are only at the beginning of AI. While AI theoretically “could” do that, the problem, as always, lies in its predictability. Not only that, but you would need an INSANE amount of training data. If you show an AI your scene, let’s say some complex geometry and lights, then the AI would need an insane amount of information about what that should look like to be truly “unbiased” or near ground truth. That would be hard even with petabytes of training data.

I think a well-implemented Render Processing Unit at 7 or 5 nm would destroy a generic GPU, by far. We are talking raw metal power. Does anyone remember the RenderART RPUs? I worked with them; I had real-time raytracing in the EARLY 2000s! :slight_smile: When everyone else was barely rendering a frame with RenderMan on a complex render farm. With the tech and knowledge we have now, hardware MLT could easily be implemented.

EH? What’s that??

But isn’t it true that when DeepMind made AlphaGo they first fed it with insane amounts of game data, but later realized it could teach itself from first principles? Is it not possible that AI could learn how physics/light works in the realm of rendered computer graphics/raytracing, and thus in the end not require insane amounts of training data, but only enough to teach it how rays get traced and how to get to the end result by guessing/predicting rather than brute-force simulation?

> EH? What’s that??

For that time, those babies were FREAKING fast! We had the Renderbox with 8 cards in-house.
It was roughly as fast as a single 1080, but in 2005!

  • (2002–2009) ART VPS company (founded 2002[14]), situated in the UK, sold ray tracing hardware for off-line rendering. The hardware used multiple specialized processors that accelerated ray-triangle intersection tests. Software provided integration with Autodesk Maya and Max data formats, and utilized the Renderman scene description language for sending data to the processors (the .RIB or Renderman Interface Bytestream file format).[15] As of 2010, ARTVPS no longer produces ray tracing hardware but continues to produce rendering software.[14]

Here are some links:

I have the feeling you don’t differentiate ML from DL from true AI.
At the moment we can barely make GANs/DNNs etc. work for simple tasks. Teaching an AI to truly LEARN what light is and then apply complex relationships across reality is a task that maybe only DeepMind could do, IF it were done exclusively for that AND an army of AI experts invested all their time in it for some years. ATM there are way more important tasks at hand, like protein folding and other real-world problems. Maybe one day it will come. We will see…

Wow, I’m fascinated to learn that it could be possible if it were taken on as a major project by DeepMind. They should consider projects like this, though, since they could generate huge revenue to fund important problems like protein folding. I mean, if DeepMind owned AI technology that could do this rendering, they could either sell it, license it, or run something like a render farm that uses it on their premises (so nobody could steal the technology).

Protein folding was already largely solved this year by DeepMind. The team has already been proposed for a Nobel Prize. This is huge news, comparable to the discovery of antibiotics or the transistor, because successfully understanding almost any disease and designing medications depends on protein folding mechanisms, cancer research most of all. Now that protein structure prediction is over 90% accurate and improving, we could say that major progress on curing cancer is only years away. Of course that is more important than VFX. :smiley:

Here are the links:

https://www.nature.com/articles/d41586-020-03348-4