I mean no disrespect, but this comment is pretty useless.
Besides the fact that averaging all the items in a list is something I already know how to do (your code does seem a bit odd though), I said that I don't know much about Blender, not about coding and machine logic in general.
This is what I understand from your code:
# define Samples (I don't know what exactly goes inside it though)
Samples = [list of samples]

# if the current object doesn't have the property "colorSampleList"
if "colorSampleList" not in own:
    # then define a property of own called "colorSampleList" equal to [Samples]? I don't know what [Samples] means though
    own['colorSampleList'] = [Samples]
# if the current object does have the property "colorSampleList"
else:
    # then iterate through all elements of "colorSampleList"
    for i in range(len(own['colorSampleList'])):
        # and set each one of them to the average of itself and the element of Samples with the same index
        own['colorSampleList'][i] = (own['colorSampleList'][i] + Samples[i]) / 2
        # why do you average like that? shouldn't it average all the samples in own['colorSampleList'] into a single variable? maybe I'm just not getting something
For instance: I have no idea where that code should go (I assume it belongs in a text datablock referenced by a Python logic brick, though).
I also don't know how to translate and store the result that my filter spits out into something your code could refer to as own['colorSampleList'].
And, as a mainly non-Python programmer, I don't really get this line: own['colorSampleList'] = [Samples]. I get that you are setting the colorSampleList property of the current object, but I don't understand what you mean by [Samples].
If you could clarify that for me I would gladly implement it.
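For what it's worth, here is my best guess at what that snippet does, written as plain Python with made-up numbers standing in for the real samples (no Blender involved). Please correct me where I'm wrong:

frames = [[0.2, 0.5, 0.8], [0.4, 0.1, 0.6]]  # two "frames" worth of fake samples
own = {}  # standing in for the game object's property dictionary

for Samples in frames:
    if "colorSampleList" not in own:
        # note: the original [Samples] would store a list containing the Samples list,
        # which then breaks the element-wise averaging below; I'm assuming a copy was meant
        own['colorSampleList'] = list(Samples)
    else:
        for i in range(len(own['colorSampleList'])):
            # running average of each stored value with the new sample at the same index
            own['colorSampleList'][i] = (own['colorSampleList'][i] + Samples[i]) / 2

# what I expected instead: collapsing everything into a single value
average_of_all = sum(own['colorSampleList']) / len(own['colorSampleList'])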
It seems to me like you misunderstood what this program does.
On every frame and for each fragment, many samples are taken and added together. These samples come from VPLs (Virtual Point Lights), which gives a pretty decent approximation of indirect light.
I am looking for a way to store the indirect light of a given frame (which is the result of these calculations) in an image and keep it around for a few frames.
Then I want to average the indirect illumination of the current frame with the indirect illumination of previous frames, and add that to the rendered image.
This should help get rid of the flickering that occurs due to low sample counts when moving the camera around.
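To make the averaging part concrete, this is roughly the kind of blending I have in mind, written as an exponential running average with numpy arrays standing in for the stored indirect-light images (all names here are made up; the real thing would have to live in the filter or a Python controller):

import numpy as np

H, W = 240, 320  # made-up frame size; each "image" is an H x W x 3 array of RGB values

history = None   # the averaged indirect light from previous frames

def accumulate(indirect_current, blend=0.1):
    # blend the current frame's indirect light into a running average;
    # a smaller blend factor keeps more history and smooths flickering more
    global history
    if history is None:
        history = indirect_current.copy()
    else:
        history = (1.0 - blend) * history + blend * indirect_current
    return history

# pretend each frame produced a noisy indirect-light image
for frame in range(5):
    indirect = np.random.rand(H, W, 3).astype(np.float32)
    smoothed = accumulate(indirect)

# 'smoothed' is what would get added on top of the rendered image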
Now for my issue: I don't know how to store a frame in an image, and I don't know how to do post-processing on stored images.
Hi SebastianMestre, very interesting approach to indirect lighting. I see huge potential. It looks fine for static scenes, but because the samples are static (relative to the viewer) there is a lot of aliasing and flickering. Also, the samples are gathered evenly across the view surface, so when they are projected onto a horizontal surface in perspective the intensity decreases. This causes an uneven distribution of reflected light intensities - closer surfaces are brighter than distant ones.
I think it could be solved with a system that distributes the samples based on camera movement and reduces the intensity based on the linear depth buffer.
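Something along these lines is what I mean by scaling with the linear depth buffer (just a sketch; the near/far values and the exact weighting are placeholders that would need tuning):

def linearize_depth(d, near, far):
    # convert a [0, 1] depth-buffer value (standard perspective projection)
    # back to linear view-space depth
    ndc = 2.0 * d - 1.0
    return (2.0 * near * far) / (far + near - ndc * (far - near))

def weight_sample(sample_intensity, depth_value, near=0.1, far=100.0):
    # depth-dependent weight meant to even out the brightness between
    # close and distant surfaces; the exact falloff would need tuning
    z = linearize_depth(depth_value, near, far)
    return sample_intensity * (z / far)

# a sample on a distant surface keeps more of its intensity than one up close
print(weight_sample(1.0, 0.999), weight_sample(1.0, 0.5))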
Here is a little experiment: indirect lighting fully raytraced in screen space.
It is hella slow and full of choppiness, but it can light up entire scenes and produces both indirect lighting (e.g. from an emissive plane / wall with a light being shone down upon it) and soft shadows from said indirect lighting.
This is just a silly and useless experiment, but a similar technique could be used to add shadows to the existing VPL-based filter to make it look better. It would still not look as good as raytracing, but it would be much, much faster.
Also, this filter, unlike the VPL one, is temporally coherent (i.e. no flickering apart from some noise shifting around the screen).
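To give a rough idea of what "fully raytraced in screen space" means here, the marching loop boils down to something like this (a toy Python version over fake buffers; the real thing is a GLSL 2D filter and the details differ):

import numpy as np

H, W = 120, 160
depth = np.ones((H, W), dtype=np.float32)      # fake depth buffer, 1.0 = far away
color = np.zeros((H, W, 3), dtype=np.float32)  # fake colour buffer (acts as the light source)

def march(p0, p1, d0, d1, steps=32, thickness=0.02):
    # walk across the screen from p0 to p1 while the ray's own depth goes from d0 to d1;
    # if at some step the ray ends up just behind the stored depth, call it a hit and
    # return that pixel's colour as bounced light
    for s in range(1, steps + 1):
        t = s / steps
        x = int(p0[0] + (p1[0] - p0[0]) * t)
        y = int(p0[1] + (p1[1] - p0[1]) * t)
        if not (0 <= x < W and 0 <= y < H):
            break
        ray_depth = d0 + (d1 - d0) * t
        if depth[y, x] < ray_depth and ray_depth - depth[y, x] < thickness:
            return color[y, x]
    return np.zeros(3, dtype=np.float32)  # the ray escaped, so no indirect contribution

bounce = march((80, 60), (150, 20), 0.6, 0.9)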
My main project right now is screen-space reflections. However, I do still work on this from time to time.
And although school takes priority over everything else I do, especially now that we're going through finals week, I do find a few hours each weekend to work on various projects. Vacation is coming soon, so I will have plenty of spare time.
That said, I found some time and updated this filter the other day. It is now less physically correct in a sense, but it can look much better. I also added more proper perspective correction.
Not sure if it is related to your issue, but it seems I messed up gamma correction big time.
I just noticed this today, so bear with me while I upload a fix.
EDIT: all done and dusted. The update is up. It should maybe run a bit faster and look a bunch nicer.
Yes, I am aware of that. It is just an intrinsic property of how this filter works: it reconstructs normals from a depth buffer, so surfaces look "flat shaded". In the future I guess I'll be targeting UPBGE, so I have to look into whether UPBGE provides some means of getting a normals buffer that I can sample directly (this would also improve performance tremendously; it could run roughly 3x faster).
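For context, the reconstruction boils down to recovering positions from neighbouring depth samples and crossing their differences; roughly like this (a numpy stand-in, not the actual filter code, and the projection handling is simplified):

import numpy as np

def view_position(x, y, d):
    # crude stand-in: scale the pixel coordinate by its depth to get something
    # like a view-space position; the real filter uses the projection parameters
    return np.array([x * d, y * d, d], dtype=np.float32)

def reconstruct_normal(depth_buf, x, y):
    # positions at this pixel and its right/bottom neighbours, then a cross product
    # of the two differences; this yields the geometric normal of whatever the
    # depth buffer shows, which is why smooth-shaded low-poly geometry looks faceted
    p  = view_position(x,     y,     depth_buf[y, x])
    px = view_position(x + 1, y,     depth_buf[y, x + 1])
    py = view_position(x,     y + 1, depth_buf[y + 1, x])
    n = np.cross(px - p, py - p)
    length = np.linalg.norm(n)
    return n / length if length > 0 else np.array([0.0, 0.0, 1.0], dtype=np.float32)

depth_buf = np.random.rand(120, 160).astype(np.float32)  # made-up depth buffer
print(reconstruct_normal(depth_buf, 80, 60))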
As for your second issue, I know about that. It is just caused by the number of samples available to the filter. Its effects can be alleviated by increasing the sample count or through some temporal filtering; I just haven't gotten around to implementing it. Again, if I could get access to mipmapped colour and normal buffers, this could be further alleviated.
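As a rough picture of what I mean by mipmapped buffers: successively 2x2-averaged copies of the colour (or normal) buffer, so a wide-radius lookup only needs a few reads from a small level instead of many reads at full resolution. A numpy sketch of building such a chain, not filter code:

import numpy as np

def build_mip_chain(image, levels=4):
    # each level halves the resolution by averaging 2x2 blocks of the level above
    chain = [image]
    for _ in range(levels):
        prev = chain[-1]
        h, w = prev.shape[0] // 2, prev.shape[1] // 2
        down = prev[:h * 2, :w * 2].reshape(h, 2, w, 2, -1).mean(axis=(1, 3))
        chain.append(down)
    return chain

color = np.random.rand(240, 320, 3).astype(np.float32)  # made-up colour buffer
mips = build_mip_chain(color)
# mips[2][y, x] is already an average over a 4x4 block of the original image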