I’m not a Python programmer, so I can’t test any of what I’m telling you here. Take it only for what it is --> a mind concept.
One year ago, at the Houdini workshop at FMX in Stuttgart, I saw something very interesting: Houdini rendered motion blur and depth of field faster than a still image without those effects.
That night, I got an idea.
Take an image --> A (1000*1000 px). Resize it to 500*500, then scale that new image back up to 1000*1000 with the nearest neighbor option (in Photoshop) --> B. Now apply the same Gaussian blur to both images: the results are very similar.
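Since I can’t test it myself, here is only a sketch of that comparison in Python with NumPy. Everything in it is an assumption for illustration: the image is random noise standing in for a real picture, the downscale is a simple 2x2 average, and the blur is a hand-rolled separable Gaussian, not Photoshop’s exact filter.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur (a stand-in for Photoshop's filter)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode='edge')
    # blur rows, then columns (Gaussian blur is separable)
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode='same'), 1, pad)
    out = np.apply_along_axis(lambda col: np.convolve(col, k, mode='same'), 0, tmp)
    return out[r:-r, r:-r]

rng = np.random.default_rng(0)
a = rng.random((64, 64))                               # image A (full resolution)
small = a.reshape(32, 2, 32, 2).mean(axis=(1, 3))      # downscale to half size
b = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)  # nearest-neighbor upscale --> B

diff_sharp   = np.abs(a - b).mean()                    # A vs B before blurring
diff_blurred = np.abs(gaussian_blur(a) - gaussian_blur(b)).mean()
```

If the idea holds, `diff_blurred` should come out far smaller than `diff_sharp`: after the same blur, the detail lost in the downscale no longer matters much.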
It’s just the start of my reflection, but why not render everything tagged/grouped on the same layer at half the final image resolution, then upscale it and apply the blur we need? --> With 4 times fewer pixels to render, this could be a great time saver.
It might also be possible to use the Z-depth map to create zones where only 1 pixel out of 2 / 3 / 4 is actually rendered (depending on how dark the gray is)… You know what I mean.
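Again just a sketch of the mechanism, not a real renderer: the depth thresholds in `stride_for_depth`, the random image standing in for an expensive render, and the column-wise gap filling are all made-up assumptions. It only shows the idea: the further a zone is from focus, the fewer pixels you actually compute, and the holes get filled from the nearest rendered pixel.

```python
import numpy as np

def stride_for_depth(z):
    """Map a Z-depth value (0 = near / in focus, 1 = far) to a sampling
    stride. These thresholds are invented for illustration."""
    if z < 0.25:
        return 1   # in-focus zone: render every pixel
    if z < 0.50:
        return 2   # render 1 pixel out of 2
    if z < 0.75:
        return 3   # 1 out of 3
    return 4       # 1 out of 4

rng = np.random.default_rng(1)
H, W = 8, 8
image  = rng.random((H, W))                         # stand-in for the expensive render
zdepth = np.tile(np.linspace(0.0, 1.0, W), (H, 1))  # depth grows left to right

out = np.empty_like(image)
rendered = 0
for x in range(W):
    s = stride_for_depth(zdepth[0, x])
    samples = image[::s, x]                # "render" only every s-th pixel
    rendered += samples.size
    out[:, x] = np.repeat(samples, s)[:H]  # fill the gaps, nearest-neighbor style
```

Here `rendered` counts how many pixels were actually computed: in this toy setup it is 34 instead of 64, so the out-of-focus zones cost roughly half as much, while the in-focus columns stay pixel-exact.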
I hope it’s not a waste of time.