I have an idea for a rendering algorithm!

Maybe what I say is BS, but just hear me out.

So, what we do at the moment is shoot rays; no matter if it's path tracing or bidirectional, we shoot rays.

What I don't understand is: once a ray is calculated, why do we throw it away? We could do more with it.

My idea is this:
Let's say we render at Full HD: for each pixel we start a ray. When it hits, we sample the BSDF of that material. But instead of throwing that ray's info away, let's sample a dome of secondary rays and keep them in memory. Now let's do that with the next pixel, but this time, when the rays are shot, can't we check whether they intersect the volume of the earlier rays and somehow average them? This would imply that rays have a “thickness” once they hit the surface, and that they influence the rays that come after them. How much they influence them could depend on the angle at which they hit the polygon.
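To make that a bit more concrete, here is a rough C++ sketch of what one of these cached “thick” hits could store. All names and fields are made up for illustration; they don't come from any existing renderer:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-sample record: one secondary ray of the dome.
struct DomeSample {
    float dir[3];      // sampled direction over the hemisphere
    float radiance[3]; // RGB carried back along that direction
    float pdf;         // sampling density, needed to stay unbiased later
};

// Hypothetical per-primary-hit record (a "ray with thickness").
struct CachedHit {
    float position[3];            // hit point on the surface
    float normal[3];              // shading normal (drives angle-based influence)
    float radius;                 // influence radius = the ray "thickness"
    float energy;                 // total energy gathered by the dome so far
    uint32_t sampleCount;         // how many dome samples back this estimate
    std::vector<DomeSample> dome; // e.g. 128 secondary samples kept in memory
};

// One entry per primary hit; for Full HD that is up to 1920*1080 entries
// per bounce depth, which is where the memory cost comes from.
using RayCache = std::vector<CachedHit>;
```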

This would kill a ton of noise and converge things way faster. In short, I think the rays we use now are too “dumb” or get thrown away too “fast”.

I don't know if I explained it properly. No, I don't mean adaptive sampling; I mean smart rays, in the direction of Disney's Hyperion, but WAY smarter. Hyperion only sorts rays for coherence; this would mean that a ray influences the rays after it in a smart way (angle, normal, diffuse, energy, etc…)

Aside from info about where the lights are, there is spatially no influence from ANY ray, neither in the same pixel nor in the neighbouring pixel, that tells an incoming ray what to expect. Yes, the pixel gets averaged, but that's all. Pretty dumb IMO, isn't it? Like lemmings, they dumbly hit the surfaces and die. :smiley: rofl

Couldn't we deal with light like some kind of water, based on angles: after some initial sampling the values are in memory, and when a new ray comes in, the others say: “hey dude, don't get lost in that corner”, and to another one, “don't go with that value only, you can add to your value my confidence that this area of space is pretty much this bright across these pixels, but that dark on that spot”, and so on.

Maybe some kind of radiosity algorithm on ray-level values.
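Something in this direction, maybe. This is a pure sketch with hypothetical names, and a real version would need a spatial index instead of the brute-force loop: a new hit asks the cache what nearby, similarly oriented hits already know, and gets back a brightness estimate plus a confidence.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct CacheEntry {
    float pos[3], normal[3];
    float radiance[3]; // averaged brightness seen from this point
    float confidence;  // 0..1, grows with the number of samples behind it
    float radius;      // influence radius
};

struct Advice { float radiance[3]; float confidence; };

// Hypothetical query: "what do nearby, similarly oriented hits already know?"
Advice askNeighbours(const std::vector<CacheEntry>& cache,
                     const float p[3], const float n[3])
{
    Advice a{{0.0f, 0.0f, 0.0f}, 0.0f};
    float wSum = 0.0f;
    for (const CacheEntry& e : cache) {
        float dx = p[0] - e.pos[0], dy = p[1] - e.pos[1], dz = p[2] - e.pos[2];
        float dist2 = dx*dx + dy*dy + dz*dz;
        float cosN  = n[0]*e.normal[0] + n[1]*e.normal[1] + n[2]*e.normal[2];
        if (dist2 > e.radius * e.radius || cosN < 0.8f)
            continue; // too far away, or facing a different direction
        float w = e.confidence * (1.0f - std::sqrt(dist2) / e.radius) * cosN;
        for (int c = 0; c < 3; ++c) a.radiance[c] += w * e.radiance[c];
        wSum = wSum + w;
        a.confidence = std::max(a.confidence, w); // strongest nearby witness
    }
    if (wSum > 0.0f)
        for (int c = 0; c < 3; ++c) a.radiance[c] /= wSum;
    return a;
}
```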

PS: Even if this would take a TON of memory (let's say 32 GB for Full HD with a 128-sample dome per pixel, storing RGB, normals and energy values), it would kill render times and noise, so I think it would be worth it.
In the worst case you could throw in a 1 TB 3D XPoint SSD as a nearly-RAM-speed pagefile.
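A quick back-of-the-envelope check on that figure, assuming (my guess) roughly 32 floats per dome sample once you also keep direction, hit point, PDF and so on; with only RGB + normal + energy it would be far less:

```cpp
#include <cstdio>

int main() {
    const double pixels  = 1920.0 * 1080.0; // Full HD
    const double samples = 128.0;           // dome samples per pixel

    // RGB (3) + normal (3) + energy (1) = 7 floats = 28 bytes per sample
    const double minimalGB = pixels * samples * 7 * 4 / (1024.0 * 1024 * 1024);

    // ~32 floats per sample (add direction, hit point, pdf, flags, ...)
    const double fullGB = pixels * samples * 32 * 4 / (1024.0 * 1024 * 1024);

    std::printf("minimal: %.1f GB, extended: %.1f GB\n", minimalGB, fullGB);
    // prints roughly: minimal: 6.9 GB, extended: 31.6 GB
}
```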

Ok, done.

jm2c

Giving rays a thickness… I don't know how today's path-tracing renderers work (I don't think rays have a thickness), but it sounds like a whole new set of problems that need solutions.

What I mean by that is an influence radius. Once a ray intersects that space, it gets information. Numerically that's not a problem, we have 32-bit floats, hell even 64-bit, lol.
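The geometric part of “intersects that space” is cheap at least: the closest-point distance from the new ray to the cached hit tells you whether it enters the influence sphere. A small standalone sketch, not tied to any particular renderer:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Does the ray (origin o, unit direction d) pass through the influence
// sphere of radius r around the cached hit point c?
bool entersInfluence(Vec3 o, Vec3 d, Vec3 c, float r)
{
    Vec3 oc{c.x - o.x, c.y - o.y, c.z - o.z};
    float t = dot(oc, d);            // parameter of closest approach
    if (t < 0.0f) return false;      // sphere lies behind the ray origin
    Vec3 closest{o.x + t*d.x, o.y + t*d.y, o.z + t*d.z};
    Vec3 diff{c.x - closest.x, c.y - closest.y, c.z - closest.z};
    return dot(diff, diff) <= r * r; // within the ray "thickness" radius
}
```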

Also, I have the feeling the CPU is underused when it comes to smartness. The idea is that the CPU is able to combine very complex interactions recursively, a strength that GPUs lack once the kernels get more complex.
So a CPU with a lot of memory could build a map of the 3D space and its smart rays and calculate way smarter interactions than what we do now. Maybe a GPU can do it too, with out-of-core memory (?).

In UE4 there is a line trace and an option to trace with spheres and such. A line trace is all but free: you can run like 200-500 of them per frame and it will not impact performance at all, but doing a trace with a sphere, on the other hand, has a far more significant impact on performance. For a while I was working on a stealth game and I ended up swapping from doing sphere traces to the player to doing several line traces to each bone and doing a bit of math, simply because of the performance improvement that it afforded me, even though it went from one sphere trace to 30 line traces per frame per NPC.
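Roughly, the swap looked like this. This is illustrative UE4-style C++, not the actual project code; the bone list, radius and channel are placeholders:

```cpp
#include "Engine/World.h"
#include "Components/SkeletalMeshComponent.h"

// One sphere sweep towards the player: convenient, but comparatively expensive.
bool IsPathToPlayerClearSphere(UWorld* World, const FVector& EyePos,
                               const FVector& PlayerPos, AActor* NpcToIgnore)
{
    FHitResult Hit;
    FCollisionQueryParams Params;
    Params.AddIgnoredActor(NpcToIgnore);
    return !World->SweepSingleByChannel(Hit, EyePos, PlayerPos, FQuat::Identity,
                                        ECC_Visibility,
                                        FCollisionShape::MakeSphere(30.f), Params);
}

// The cheaper replacement: one line trace per bone, then a bit of math on
// how many bones ended up visible.
int32 CountVisibleBones(UWorld* World, const FVector& EyePos,
                        USkeletalMeshComponent* PlayerMesh,
                        const TArray<FName>& Bones, AActor* NpcToIgnore)
{
    FCollisionQueryParams Params;
    Params.AddIgnoredActor(NpcToIgnore);

    int32 Visible = 0;
    for (const FName& Bone : Bones) {
        FHitResult Hit;
        const FVector Target = PlayerMesh->GetBoneLocation(Bone);
        const bool bBlocked = World->LineTraceSingleByChannel(
            Hit, EyePos, Target, ECC_Visibility, Params);
        // Visible if nothing was hit, or the first thing hit was the player.
        if (!bBlocked || Hit.GetActor() == PlayerMesh->GetOwner())
            ++Visible;
    }
    return Visible;
}
```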

How is this different from photon mapping?

Photon mapping is just for GI. In this case more values are also stored per ray; not only that, but the rays are true subsamples, not some blobs, and they converge over time to become more precise, whereas a photon map is calculated once and at very low resolution.

You could say it's like a photon cache, but instead of photons it stores rays and their values, which are also smarter. And the more rays add up over time, the more confident it becomes for the new rays. Of course it has to be solved in such a way that the confidence converges to an unbiased result, as much as possible. AI could also be involved for better prediction based on the past rays, but not necessarily.
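One way that “growing confidence without locking in the error” could look. This is purely a sketch of the idea, not an existing algorithm: each cache cell keeps a running mean of everything it has seen, fresh samples keep flowing into that mean, and the weight given to the cached guess is capped so the unbiased fresh samples still dominate in the limit.

```cpp
#include <algorithm>
#include <cstdint>

struct CacheCell {
    float mean[3]  = {0, 0, 0}; // running average of all samples seen so far
    uint32_t count = 0;         // how many samples back this average

    // Fold in a fresh, independently sampled value (standard running mean).
    void addSample(const float rgb[3]) {
        ++count;
        for (int c = 0; c < 3; ++c)
            mean[c] += (rgb[c] - mean[c]) / static_cast<float>(count);
    }

    // Confidence grows with the sample count but never reaches 1, so a new
    // ray is never fully replaced by the cached guess.
    float confidence() const {
        const float k = 64.0f; // assumed tuning constant
        return count / (count + k);
    }
};

// Blend a fresh estimate with the cached one. Because fresh samples are also
// added back into the cell, the running mean itself still converges towards
// the true value; the blend mostly trades variance for some short-term bias
// while the cached mean is still rough.
void blend(const CacheCell& cell, const float fresh[3], float out[3]) {
    const float w = std::min(cell.confidence(), 0.75f); // cap cache influence
    for (int c = 0; c < 3; ++c)
        out[c] = w * cell.mean[c] + (1.0f - w) * fresh[c];
}
```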

This scenario would also open the door, in the far future, to unheard-of post-processing and deep data. Imagine waiting for the frame to finish, still having all the rays, and then doing ray manipulation, occlusion, etc. in post. Even taking rays away and adding new rays would be possible (no, I'm not talking about a resumable render like Maxwell Render, where you can continue the render, because that only stores deep pixels; I'm talking about still having the whole ray. How deep, primary or secondary, should be configurable, also based on the material, object, etc…).

Of course the memory implications are staggering. But for high-end production, why not? We already have 16-core processors in the mainstream, and 1 TB PCIe SSDs at 3 GB/s cost nothing anymore.

PS: That's another topic, but of course there would also be the possibility to train an AI to tell where to sample and how many rays to use for what, based on the past rays.
