Temporal Gradient Domain Path Tracing

http://www.cgg.unibe.ch/publications/temporal-gradient-domain-path-tracing

Great little paper coming up at SIGGRAPH Asia. Looks to be quite an improvement over PT and GPT.

Sadly, though, Blender (and therefore Cycles) is temporally ignorant. Looks cool, however.

Off topic, but I found more open-movie goodness in a paper on color disassociation: https://gvv.mpi-inf.mpg.de/projects/LiveIntrinsicVideo/

That looks like some amazing amount of noise reduction! Looking forward to trying to read, and then realizing I’m not smart enough to read, this paper.

Holy Moley! That is very cool. This would be very powerful for matte and roto.

Not sure what you mean by that. Cycles has enough temporal info to handle motion blur and motion vector output, which I believe is all the second technique in the paper calls for. And I'd imagine that an extra set of buffers to store the necessary information within Cycles itself wouldn't be an infeasible task.
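For reference, turning on the passes that matter here is already just a couple of lines of Python. A rough, untested sketch (assuming a recent Blender/Cycles API where the passes live on the view layer; exact property paths differ between versions):

```python
# Untested sketch: enable the motion-vector pass that a reprojection /
# cross-frame step would need, and write multilayer EXRs so the extra
# passes survive to an offline filtering step.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

view_layer = bpy.context.view_layer
view_layer.use_pass_vector = True   # per-pixel motion vectors
view_layer.use_pass_normal = True   # useful for cross-frame denoising too

scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
```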

Ooooooo… Wow, that's pretty amazing. Am I reading this right? It seems to indicate that it needs data from multiple frames in order to make this work, though?

This is an extremely interesting paper.
Implementing it would have to be done like the cross-frame denoising: First, you render your frames into EXRs, and then you run the filtering on them.
When you use that approach, implementing the temporal GPT extension on top of regular GPT is almost trivial: Basically, you just render each frame twice - once with its own seed, and once with the seed of the previous frame.
The Poisson reconstruction isn't too different; in the end, it's just 3 more constraints.
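As a very rough sketch of that two-renders-per-frame pre-pass, driving Cycles from Python (untested; `frame_seed()` is just a made-up helper, and the actual gradient passes would of course have to come from a modified kernel rather than stock Cycles):

```python
# Untested sketch of the "render each frame twice" idea: once with the
# frame's own seed, once with the previous frame's seed, writing EXRs
# for a later offline filtering/reconstruction step.
import bpy

scene = bpy.context.scene
scene.render.image_settings.file_format = 'OPEN_EXR'

def frame_seed(frame):
    # Hypothetical per-frame seed; any deterministic mapping works.
    return 1000 + frame

def render_to(path, frame, seed):
    scene.frame_set(frame)
    scene.cycles.seed = seed
    scene.render.filepath = path
    bpy.ops.render.render(write_still=True)

for f in range(scene.frame_start, scene.frame_end + 1):
    # Pass 1: the frame with its own seed (the "primary" image).
    render_to(f"//gpt/frame_{f:04d}_own", f, frame_seed(f))
    # Pass 2: the same frame re-rendered with the previous frame's seed,
    # so samples are correlated across time and the temporal finite
    # differences stay low-variance.
    render_to(f"//gpt/frame_{f:04d}_prev_seed", f, frame_seed(f - 1))
```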

So you would need to use less than half the number of samples that a regular noise-free render would need? Otherwise the two renders per frame combined would take longer?

Also, suddenly Blender can cache alternate frames? That would be a helpful architecture, but is it a sticking point conceptually? Isn't the foundation against such frame handling?

This is exactly one of the things Lukas wants to discuss at BConf this year. Many current algorithms (and likely many future papers as well) are becoming more reliant on cross-frame data. Lukas did some work to allow for some extra info to be stored in a written EXR for his work on denoising, but it would indeed be nice to have as an inbuilt ability of the renderer.

Wow, artificial seeing!
So natural, very similar to the basis of 'seeing' as I experience it when observing nature… When a beam of light hits an object, the object cannot be seen or observed unless it is distinguished from its environment/background. To see is to have comparable data from at least two different locations in space-time.

Denoising is just one of the fruits.
:slight_smile:

Yeah, the issue for a lot of this stuff is that you wind up separating the engine, and thus the quality, of the real-time preview from the final-frame rendering. This is what Cycles and a lot of the other renderers have been trying to avoid. Having the preview render look different from the final image causes a lot of issues when you are trying to set up lighting and materials. And in my experience this is the way it "has" been in the past. It's not very forward thinking. It's one of the main reasons I never used VRayRT in the past; they only just recently got feature parity with the main engine, and even then it's only of mild use to most people. It's also the main reason we haven't seen irradiance caching, final gathering, or light mapping in Cycles.

And actually, if you look at the way the industry has been moving lately, it makes sense. Arnold, Disney, Pixar, Weta, Animal Logic, et al. are all moving in this direction with their renderers too. I realize that a huge percentage of you don't care about a render engine that's focused on animation production. I can appreciate that, but the focus of Blender IS on animation production, so unbiased techniques are super important for it. I know it's tempting to use these biased or time-dependent techniques, but from my experience it's not worth the hassle. Really, computers are just going to get faster and faster and able to handle more and more of what we find difficult today. I'd rather see more focus on things that happen in real time. Things like MIS and ray-bundle sorting are great examples of this.

Just my opinion, I know there is a potential holy war brewing at this moment.

You seem to have some misunderstanding here: GPT is perfectly unbiased and physically correct (when using L2 reconstruction).

Basically, the renderer generates two additional passes which can then be used to filter the noisy regular image - and yes, it is entirely possible to combine that with general denoising :wink:
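To make the "filter the noisy regular image" part concrete, here is a tiny untested numpy sketch of what an L2 screened-Poisson reconstruction looks like: find the image whose finite differences best match the gradient passes while staying close to the primal image. Plain gradient descent for readability; a real implementation would use conjugate gradients or an FFT solver.

```python
# Untested sketch: L2 screened-Poisson reconstruction from a primal image
# and x/y gradient passes. alpha weights the "stay close to the primal"
# constraint; small alpha lets the (low-variance) gradients dominate.
import numpy as np

def dx(img):   # forward difference in x, zero at the right border
    d = np.zeros_like(img)
    d[:, :-1] = img[:, 1:] - img[:, :-1]
    return d

def dy(img):   # forward difference in y, zero at the bottom border
    d = np.zeros_like(img)
    d[:-1, :] = img[1:, :] - img[:-1, :]
    return d

def dxT(img):  # adjoint of dx
    d = np.zeros_like(img)
    d[:, :-1] -= img[:, :-1]
    d[:, 1:]  += img[:, :-1]
    return d

def dyT(img):  # adjoint of dy
    d = np.zeros_like(img)
    d[:-1, :] -= img[:-1, :]
    d[1:, :]  += img[:-1, :]
    return d

def reconstruct(primal, gx, gy, alpha=0.2, iterations=200, step=0.1):
    """Minimize alpha^2*|I - primal|^2 + |dx(I) - gx|^2 + |dy(I) - gy|^2."""
    img = primal.copy()
    for _ in range(iterations):
        grad = (alpha ** 2) * (img - primal) \
             + dxT(dx(img) - gx) \
             + dyT(dy(img) - gy)
        img -= step * grad
    return img
```

With a large alpha you essentially get the primal image back; with a small alpha the gradients dominate, which is where the noise reduction comes from. The temporal extension just adds its extra constraints to the same least-squares system.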

Lukas beat me to it. Both methods converge to identical results. This would just firmly establish a difference in the quality-to-time ratio between preview and final render (which I personally see no issue with). The workflow change will be nonexistent, aside from much faster final animated renders.

Side note
It wouldn't be optimal to flip back to previously saved EXR files; such a file would easily fit into memory, which would remove the disk I/O delays.
Or you could write out the EXR but keep the last one in cache memory before you discard it.
Keeping it in memory might not even require EXRs, since you could then save the final output straight to MPEG or JPEG.
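The one-frame cache is trivial to sketch (untested toy; `save_exr()` is a hypothetical stand-in for whatever image I/O the pipeline already uses):

```python
# Toy sketch: write every frame to disk as usual, but hand the previous
# frame's buffer back straight from memory so the temporal step never
# re-reads it from disk.
class PrevFrameCache:
    def __init__(self):
        self.prev = None  # previous frame's buffer, or None on frame 1

    def push(self, frame_buffer, path):
        save_exr(path, frame_buffer)  # hypothetical I/O helper
        prev, self.prev = self.prev, frame_buffer.copy()
        return prev                   # previous frame, straight from RAM
```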

Modern OSes will use unused memory for disk caching anyways, so it doesn’t really matter for speed whether you write 1GB to the disk and later read it again or whether you keep it in memory all along - but when you leave the caching to the OS, you can still use the GB of memory in the meantime if you need it (at the cost of slower read speeds).