Rendering much faster with fewer pixels? Do you think it's possible?

Hi Blenderartists.

I’m not a Python programmer, so I can’t test anything of what I’m telling you here; take it only as what it is --> a mind concept.

One year ago, at the Houdini workshop at FMX in Stuttgart, I saw something very interesting: Houdini rendered motion blur and depth of field faster than a still image without those effects.

Last night, I got an idea :smiley:

If we take an image --> A (1000×1000 px), resize it to 500×500, then scale the new image back up to 1000×1000 with the nearest-neighbour option (in Photoshop) --> B, and finally apply the same Gaussian blur to both images, the results are very similar.
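
Here is a minimal sketch of that comparison, using Pillow and NumPy instead of Photoshop (the file name, blur radius and interpolation choices are just placeholders I picked for illustration):

```python
# Compare: blur of the full-res image vs blur of a half-res image scaled
# back up with nearest neighbour.
import numpy as np
from PIL import Image, ImageFilter

RADIUS = 4  # the same Gaussian blur radius is applied to both versions

# A: the original full-resolution image (e.g. 1000x1000)
a = Image.open("original.png").convert("RGB")
w, h = a.size

# B: downscale to half size, then scale back up with nearest neighbour
b = a.resize((w // 2, h // 2), Image.BILINEAR)
b = b.resize((w, h), Image.NEAREST)

# Apply the same Gaussian blur to both images
a_blur = a.filter(ImageFilter.GaussianBlur(RADIUS))
b_blur = b.filter(ImageFilter.GaussianBlur(RADIUS))

# Measure how close the two blurred results are
diff = np.abs(np.asarray(a_blur, dtype=np.float32)
              - np.asarray(b_blur, dtype=np.float32))
print(f"mean absolute difference: {diff.mean():.2f} / 255")
```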

It’s just the start of my reflection, but why not render everything tagged/grouped/on the same layer at half the final image resolution and apply the blur we need to it? --> With four times fewer pixels to render, this could be a great time saver (see the sketch below).
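
As a rough sketch of what that could look like outside the renderer, assuming a layer has already been rendered at half the final resolution and saved to a placeholder file (this is my guess at the workflow, not an existing Blender feature):

```python
# Take a half-resolution render, scale it up to the final size, and apply
# the DOF/motion-blur-like blur at the final resolution.
from PIL import Image, ImageFilter

FINAL_SIZE = (1000, 1000)   # target output resolution
BLUR_RADIUS = 4             # blur applied after upscaling

half = Image.open("half_res_render.png").convert("RGB")  # e.g. 500x500

# 500*500 = 250,000 pixels rendered instead of 1000*1000 = 1,000,000 -> 4x fewer
print("pixels rendered:", half.width * half.height,
      "instead of", FINAL_SIZE[0] * FINAL_SIZE[1])

# Upscale with nearest neighbour, then blur at final resolution
final = half.resize(FINAL_SIZE, Image.NEAREST).filter(
    ImageFilter.GaussianBlur(BLUR_RADIUS))
final.save("blurred_layer.png")
```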

It might also be possible to use the Z-depth map to create zones where only 1 pixel out of 2 / 3 / 4 is actually rendered (depending on the grey value)… you know what I mean.
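
A small NumPy sketch of that idea, assuming the Z-depth pass is available as a normalized array (everything here is hypothetical, not an existing Blender API):

```python
# Build a mask of which pixels to render, with sparser sampling where the
# Z-depth value is larger (farther away).
import numpy as np

def sample_mask(zdepth: np.ndarray) -> np.ndarray:
    """Return a boolean mask: True where a pixel should actually be rendered.

    Near pixels keep every pixel; farther pixels keep only 1 out of 2, 3 or 4.
    """
    h, w = zdepth.shape
    # Map depth (0 = near, 1 = far) to a stride of 1, 2, 3 or 4
    stride = 1 + np.clip((zdepth * 4).astype(int), 0, 3)
    # Keep 1 pixel out of `stride` along a flat pixel index
    idx = np.arange(h * w).reshape(h, w)
    return idx % stride == 0

if __name__ == "__main__":
    z = np.random.rand(8, 8)          # stand-in for a real Z pass
    mask = sample_mask(z)
    print(f"rendering {mask.sum()} of {mask.size} pixels")
```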

I hope it’s not a waste of time :smiley:

I’ll try some tests tomorrow just to see. If you have any comments about this, don’t hesitate.

you mean adaptive sampling (over/under-sampling) that takes DOF and motion blur into account?
sounds great. that’s something render-engine related, so you won’t need Python :wink:
Blender Internal doesn’t have adaptive sampling in general, does it? That would really help optimize scenes.