Temporal Reprojection - how to replicate in the final render?

I like what temporal reprojection does to my lightning! It looks and feels just right, probably because it brightens areas where a lot of emissive stuff has recently passed through (my crude understanding so far).
W/O Temporal Reprojection:

And with it (captured as a screenshot during playback, NOT a viewport render!):

How would you go about replicating that? Any approach will do - shader, compositing, geonodes, outside software (last resort). It looks so sick and grainy, and the animation is cool too.

I thought of viewport rendering (since I’m working in Eevee anyway), but I can’t replicate what’s in the screenshot that way either! The viewport-rendered result looks like the first picture.

Looks like you are seeking additive transparency. This is going to make the object emissive, but also perfectly transparent. As more layers are superposed, it gets brighter.

I’m not sure how to make the grainy look. Maybe I would try texturing the model with a noise texture so some parts are brighter than others. Maybe I would try adding particles to the lightning.
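If it helps, here’s a minimal sketch of that setup as a Python script (bpy; the material name and all values are just placeholders, and the property names are from Blender 3.x Eevee). It wires an Emission and a Transparent BSDF through an Add Shader, sets the blend mode so superposed layers add up, and lets a Noise Texture vary the emission strength for the grain:

```python
import bpy

# Additive, emissive material: Emission + Transparent BSDF through an Add Shader
mat = bpy.data.materials.new("AdditiveLightning")   # placeholder name
mat.use_nodes = True
mat.blend_method = 'BLEND'   # alpha blending, so overlapping layers get brighter

nt = mat.node_tree
nt.nodes.clear()

noise    = nt.nodes.new("ShaderNodeTexNoise")        # grain source
emission = nt.nodes.new("ShaderNodeEmission")
transp   = nt.nodes.new("ShaderNodeBsdfTransparent")
add      = nt.nodes.new("ShaderNodeAddShader")
out      = nt.nodes.new("ShaderNodeOutputMaterial")

noise.inputs["Scale"].default_value = 50.0           # small speckle; tweak to taste

# Noise modulates emission strength, so some parts are brighter than others
nt.links.new(noise.outputs["Fac"], emission.inputs["Strength"])
nt.links.new(emission.outputs["Emission"], add.inputs[0])
nt.links.new(transp.outputs["BSDF"], add.inputs[1])
nt.links.new(add.outputs["Shader"], out.inputs["Surface"])
```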

But it won’t look exactly the same. Blender is just not designed to replicate a glitch on purpose.


Thank you for giving this your time and thought! The problem is that the effect is temporal, so it either has to employ a geonodes simulation (which gets computationally expensive at the detail level shown in the picture), or it has to happen in post.

For now I’m pursuing the idea of rendering out an image sequence and manually building a compositor node setup that sums the last N frames.
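As a rough offline sanity check of that sum, something like this (Pillow + NumPy, outside Blender; the file names and counts are made up) should approximate the accumulation, with newer frames weighted more:

```python
import numpy as np
from PIL import Image

N = 8        # how many past frames to accumulate
DECAY = 0.7  # weight falloff per frame back in time
FRAMES = 250 # total frame count (assumption)

for f in range(1, FRAMES + 1):
    acc = None
    for k in range(N):
        src = f - k
        if src < 1:
            break  # no earlier frames at the start of the sequence
        img = np.asarray(Image.open(f"frame_{src:04d}.png"), dtype=np.float32)
        w = DECAY ** k  # the newest frame weighs the most
        acc = img * w if acc is None else acc + img * w
    # Bright spots will clip, which is exactly the overdriven trail look
    out = np.clip(acc, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(f"trail_{f:04d}.png")
```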

It could possibly also be useful to render at a lower resolution/sample count for added grain.

We’ll make a glitch machine out of Blender yet!

I’ll post the results here if I manage to do something of value.

Also, I tried doing something with Simulation + Volume density, but I can’t figure out how to make my volume cube accumulate/lose density over time.
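The behaviour I’m after is basically a per-frame feedback, density = density * decay + input. A throwaway sketch of that via a frame-change handler (driving a hypothetical Principled Volume node in a material called “Smoke”; every name here is made up) would be:

```python
import bpy

DECAY = 0.9                 # how quickly accumulated density fades
state = {"density": 0.0}    # feedback buffer that persists between frames

def accumulate(scene, depsgraph=None):
    # Placeholder input: whatever should feed the buffer this frame,
    # e.g. proximity of the lightning mesh. Here, a pulse every 20 frames.
    source = 1.0 if scene.frame_current % 20 == 0 else 0.0
    state["density"] = state["density"] * DECAY + source

    vol = bpy.data.materials["Smoke"].node_tree.nodes["Principled Volume"]
    vol.inputs["Density"].default_value = state["density"]

# Caveat: this only accumulates while playing forward; scrubbing
# backwards won't rewind the state.
bpy.app.handlers.frame_change_pre.append(accumulate)
```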

How did you create this lightning?
If it is some kind of mesh, you could try this:


You will need to play with the settings.


Thanks, this made me think I might’ve been overcomplicating things again.
I had a brainwave that motion blur might help, along with lowering the sample count. I combined that with a Voronoi texture and got a decent result in the end, and it’s pretty cool!
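For reference, the settings side of that boils down to a couple of properties (bpy; names as of Blender 3.x Eevee, so they may differ in other versions):

```python
import bpy

scene = bpy.context.scene
scene.eevee.taa_render_samples = 1     # 1 render sample = heavy grain
scene.eevee.use_motion_blur = True     # smears the moving bolts into trails
scene.eevee.motion_blur_shutter = 1.0  # longer shutter = longer trails
```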

I’ll continue to tweak the parameters, and I still have to build that compositor node setup. Thanks for the idea!


Left: 16 samples, right: 1 sample.

Still, it’s not as close as I would’ve liked, but this is cool too!
