Fake SSGI in Eevee

I compare the first image and the last. The last (Eevee + SSGI + cubemap) needs:

  1. a reduced amount of shadow
  2. softer shadow variation
  3. the image reflected in the metallic sphere is different; I don’t know why
  4. slightly more saturated colors

The lighting and diffuse settings are the same.


How does this work with the render passes?

Because SSGI uses SSR, it is considered a reflection.
Will it therefore be displayed in the Specular pass (and would that be correct?), or is it in the Diffuse pass?

In which pass is the result from the irradiance volume displayed? The SSGI should be in that same pass, am I right?

It’s in the specular pass.

Specular light pass with default materials:

Specular light pass with SSGI:

Not technically correct. I think there’s nothing I can do about it, though.

SSGI is immensely useful for any realtime engine, including Eevee. Not every scenario is an interior archviz. For example, for a shot such as a car driving through an exterior scene, SSGI can help a lot in terms of realism, and the bias or shading issues will be minimal, because SSGI generally works very well in exterior scenarios.

For interior scenarios, as showcased above, SSGI can still be very useful together with irradiance probes: the irradiance probe field provides general illumination from all directions, including those invisible to the camera, and SSGI can then clean up and improve the detail of the irradiance probe grid in the visible areas, while still being relatively accurate for a realtime GI, since the indirect, out-of-camera irradiance is accounted for.

This is not that different from hybrid approaches such as Brute Force + Light Cache or Photon Mapping + Final Gather.


Wow, you have been doing some amazing work on this. Are you planning to release this addon? I’m very interested in testing it.


As discussed before, time is running out for him.

But I’m sure that this stone will make some rings on the river.

Would have liked a bit more time to polish it up, but I need to pack my PC up for moving.


I had a lot of fun playing with your addon and I’m so impressed. Can I give you an idea for improving it?

I had an idea for possibly improving your SSGI, and I’d like to share it in case something similar hasn’t occurred to you yet: of course, any object outside the camera view won’t be accounted for in the GI calculations, because the information isn’t there in screen space. That’s a major drawback and can give you very unrealistic GI.

I’m not a programmer of any sort, but if it were possible to render a real-time cubemap centered on the camera/view point, with the passes the GI calculations need (such as depth and normals) for each of the six cubemap faces, you could include that information in the GI calculation, and objects that aren’t in view would still contribute to the illumination. The cubemap buffer could be rendered at low resolution for performance.
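To make the suggestion concrete, here is a minimal Python sketch of the six axis-aligned views such a camera-centered cubemap render would use. The basis convention is illustrative (a right-handed basis per face), not Eevee’s internal one:

```python
# Sketch: the six axis-aligned views a camera-centered cubemap render would
# need. Each face is a 90-degree perspective render from the camera position;
# together the six faces cover the full sphere, so depth and normals for
# off-screen geometry would stay available to a GI pass.

CUBEMAP_FACES = {
    # face: (right, up, forward) unit vectors -- illustrative convention
    "+X": ((0, 0, -1), (0, 1, 0), (1, 0, 0)),
    "-X": ((0, 0, 1), (0, 1, 0), (-1, 0, 0)),
    "+Y": ((1, 0, 0), (0, 0, -1), (0, 1, 0)),
    "-Y": ((1, 0, 0), (0, 0, 1), (0, -1, 0)),
    "+Z": ((1, 0, 0), (0, 1, 0), (0, 0, 1)),
    "-Z": ((-1, 0, 0), (0, 1, 0), (0, 0, -1)),
}

def cross(a, b):
    """Cross product of two 3-vectors (tuples)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Sanity check: every face basis is orthonormal and right-handed.
for face, (right, up, forward) in CUBEMAP_FACES.items():
    assert cross(right, up) == forward
```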

I guess this is better addressed to the Eevee coders. This addon’s main trick is to use screen-space reflections; indeed, it is little more than a node group.

@lsscpp is generally right.

It’s kinda doable in the custom build. Can’t say if it would work as it should without testing it out.

SSR has a built-in fallback to cubemaps that I disabled in the custom build I made (not released) to help the node group get closer to the intended effect. It would be possible to enable it (as it is in default Blender, although then the node group doesn’t work as it should, because the fallback is subtracted from the area covered by SSR). But I don’t think there’s an easy way to do that without interfering with other specular reflections, since the addon works only in specular space (it should be in diffuse).

Since I’m relying on a single node to provide both SSR and the diffuse bounce (in the case of the Principled shader), I need a way to mix it better with the cubemap that’s left on the base shader. It might be possible by doing something like the ShaderToRGB node does, if it could be modified to include SSR too (which might not be technically possible), and using the fac from that to mix with the base shader’s cubemap (in other words, a mix shader for specular only). That would give enough flexibility to improve the interaction with the base shader’s cubemaps.

If that worked, just parenting a cubemap with auto-bake to the camera would be in a similar vein. Of course, it would still just be a cubemap, with no actual tracing done in that space.

This is all speculation 🙂

I could add the fallback back in, but as I said before, it would then need a lot more flexibility than it currently has to mix specular with other nodes (even the base version does, to not just add to but replace the fallback).


Do you know if the SSGI addon would benefit from an RTX card to speed up rendering even more?

It wouldn’t benefit from hardware ray tracing. RTX accelerates ray–triangle intersections, whereas SSR is traced in the depth buffer. Eevee has no RTX support at the moment either.
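To illustrate the difference, here is a toy 1-D sketch of how an SSR-style trace marches through a depth buffer rather than testing triangles. Real SSR marches in 2-D screen space with perspective-correct depth; all names here are illustrative:

```python
# Toy 1-D screen-space ray march: step through a depth buffer until the ray's
# depth passes behind the stored surface depth (a "hit"), or the ray leaves
# the screen (the case where SSR falls back to cubemaps or fails).
def ssr_march(depth, x0, dx, ray_depth0, d_depth, steps=64):
    """March a ray across a 1-D depth buffer; return hit index or None."""
    x, z = float(x0), float(ray_depth0)
    for _ in range(steps):
        x += dx
        z += d_depth
        xi = int(x)
        if not (0 <= xi < len(depth)):
            return None      # ray left the screen: no information available
        if z >= depth[xi]:   # ray is now behind the stored surface -> hit
            return xi
    return None

# A wall at depth 2.0 covers the right half of the "screen"; a ray traveling
# at constant depth 5.0 hits it at the first covered pixel.
depth = [10.0] * 8 + [2.0] * 8
hit = ssr_march(depth, x0=0, dx=1.0, ray_depth0=5.0, d_depth=0.0)
print(hit)  # -> 8
```

No triangle data is touched anywhere, which is why RTX hardware (built for BVH/triangle intersection) doesn’t help this kind of trace.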

Guys, is there a way to make the SSGI pass use a 360-degree camera that also captures the GI behind the view, and then use only the crop belonging to the primary camera? The 360-degree pass would calculate only the SSGI reflections; only the primary camera would do everything else.

I hope you know what I mean.

So basically it’s a “reflection” cubemap with only the SSGI pass.
This should run in the background as part of the SSGI system, and you could turn it on with a switch.
Then the background would also contribute to the camera; it would be a full solution.

But this would have to be implemented in such a way that it is real-time, NOT like the baking of the cubemaps.


This could also be extended to SS reflections and SS AO, why not?
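For reference, sampling such a camera-centered 360-degree buffer boils down to mapping a view direction to a cubemap face and a UV inside it. A minimal sketch, following the usual OpenGL cubemap convention (not Eevee’s actual code):

```python
# Map a direction vector to (face, u, v) with u, v in [0, 1], following the
# standard OpenGL cubemap lookup: pick the face of the largest-magnitude axis,
# then project the other two components onto that face.
def dir_to_cubemap(d):
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, sc, tc, ma = ("+X" if x > 0 else "-X"), (-z if x > 0 else z), -y, ax
    elif ay >= az:
        face, sc, tc, ma = ("+Y" if y > 0 else "-Y"), x, (z if y > 0 else -z), ay
    else:
        face, sc, tc, ma = ("+Z" if z > 0 else "-Z"), (x if z > 0 else -x), -y, az
    return face, (sc / ma + 1) / 2, (tc / ma + 1) / 2

# Looking straight down +X lands in the center of the +X face.
print(dir_to_cubemap((1.0, 0.0, 0.0)))  # -> ('+X', 0.5, 0.5)
```

An off-screen GI pass would use this lookup to fetch the low-resolution depth/normal/lighting stored for directions the primary camera can’t see.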


Yes, you are totally right, and while this would only add one extra bounce, it would be enough to get the golden two bounces that we need 😃. At the same time, SSGI is stable under camera angle and rotation. It would not be location-stable, but location stability is not a big problem at all, because the first ray always hits along the camera’s normal. Additionally, the off-camera SSGI can be much lower in resolution and precision.


Yep. Remember this is a hack and won’t solve areas occluded on screen or off screen. Yet it could be a huge improvement both for off-screen light bounces (SSGI) and for those awful screen-edge issues with reflections and AO.


Having a viewport overscan like during render would be more feasible, although for both performance and quality reasons a better implementation would need to keep most of the focus and resolution in the middle.

There’s the additional complexity of making it performant enough for the interactive viewport, which requires a different rendering setup and resolution. Maybe with the new Eevee rewrite having panoramic camera support, this would be easier to implement in some form.

It would improve screen-edge issues a lot, but I think (without actually looking at the source) it would need tweaks in lots of places in addition to just doing a larger-FOV render and cropping.

I did a very hacky test a while back with a moving cubemap, and at least in my test cases the location instability put me off trying it further. But the main issue currently is doing it in a performant enough way that it would actually be useful.


What about rendering the scene at a somewhat smaller resolution plus overscan, cropping out the camera area, and using AMD super resolution to upscale it to the desired size? (For real-time stuff?)

Could work, if the AMD super resolution works well enough. I’d have to look into it a bit to find where to add it; I haven’t really looked at how the final render output works. Just an ad-hoc value added to the FOV, plus the appropriate texture-sample UV manipulation for the crop somewhere in the output, should be enough for basic overscan at reduced resolution. I just need to find where to access and modify the final rendered image texture.
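The FOV and UV math mentioned here is straightforward; a minimal sketch (the function names are made up for illustration, nothing here is existing Eevee code):

```python
import math

def overscan_fov(fov_deg, overscan):
    """Widen a horizontal FOV so the original frame becomes a centered crop.
    overscan=0.25 means the render is 25% wider than the final frame."""
    half = math.tan(math.radians(fov_deg) / 2)
    return math.degrees(2 * math.atan(half * (1 + overscan)))

def crop_uv(u, v, overscan):
    """Map a final-frame UV (0..1) to the matching UV in the oversized render."""
    s = 1 / (1 + overscan)
    return 0.5 + (u - 0.5) * s, 0.5 + (v - 0.5) * s

print(round(overscan_fov(90, 0.25), 2))  # -> 102.68 (degrees, for a 90° camera)
```

Note the FOV has to be widened through the tangent, not by scaling the angle directly, or the crop wouldn’t line up with the original perspective projection.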

No, because overscan alone doesn’t fix the bounces coming from behind, and although it solves some edge cases, it doesn’t solve the darkening of the image once you move the camera. But if you make a 360-degree SSGI that moves with the camera, everything stays balanced, even in motion.
Yes, the occluded areas are not taken into account, but that is already the third ray. What we are missing is the first lit ray from the off-screen areas that contribute; that is the key to smooth animations.
And yes, of course there is Vulkan ray tracing, but this would work on ALL GPUs, even older ones, and it is an extension of an already-programmed part.

So, who has the skills in Eevee programming to test this? 😄
The only thing needed would be to extend it to 360 degrees and remap it correctly.
