Predicting computational effort for a certain rendering

Hi,

I was thinking about a feature (even just a script) to predict the computational effort for a certain render (either a still or an animation). That way a Blender artist would know how much computational effort a given rendering requires, and, knowing his own computer's performance (from running a benchmark), he could predict the total rendering time.
He could then also compare this time against renderfarm times (free services like renderfarm.fi, or paid ones).
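
Just to make the idea concrete (the numbers below are invented, purely for illustration): if the feature reported that a render needs some amount of abstract "work units", and a benchmark told you how many work units per second a machine can do, the prediction would be a simple division:

    # hypothetical numbers, only to illustrate the proposal
    effort = 2.0e12          # work units the render is estimated to need
    machine_speed = 5.0e8    # work units per second this machine does in a benchmark
    estimated_seconds = effort / machine_speed   # = 4000 s, roughly 1 h 7 min

The same effort figure divided by a renderfarm's benchmarked speed would give the farm's time for comparison.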

I find Cycles very interesting and powerful, but when I bought my laptop I only considered that it was a bargain to get an 800€ machine for only 400€ (last one in the store!), without paying attention to the GPU (it's a Radeon HD 5650 1GB)… Cycles doesn't fully support OpenCL at the moment, so I'm very unhappy =(

I think this would be a very useful and powerful feature.

Carlo

Laptops use underpowered graphics cards. Even if OpenCL worked now, it wouldn't be much faster than the CPU on a laptop. I tested normal clay renders in Cycles with both my i7 and my ATI HD 5730 (laptop), and there isn't much difference, only a couple of seconds.

+1 for your idea though, it would be very handy!

I would like this feature in Blender too, but I think it is also very complicated, maybe even impossible, to predict the effort. There are many things you have to consider, for example the complexity of the materials or the size of the scene.

There is a script (http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Render/Render_Time_Estimation) which can estimate the time needed to render an animation. You have to render at least one frame before it shows you the estimated time for the rest of the animation in the image editor.
It is really helpful for animations but doesn't work for still images.
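
For what it's worth, the underlying idea is easy to sketch yourself with render handlers. This is my own rough sketch, not the code of the linked add-on, and it assumes the cost per frame stays roughly constant (the handler signature can differ slightly between Blender versions):

    import time
    import bpy

    _frame_times = []
    _frame_start = 0.0

    def _on_render_pre(scene, *args):
        # called just before each frame of the animation is rendered
        global _frame_start
        _frame_start = time.time()

    def _on_render_post(scene, *args):
        # called after each frame; record its duration and extrapolate
        _frame_times.append(time.time() - _frame_start)
        done = len(_frame_times)
        total = scene.frame_end - scene.frame_start + 1
        average = sum(_frame_times) / done
        remaining = average * (total - done)
        print("Average per frame: %.1f s, estimated time left: %.1f s"
              % (average, remaining))

    bpy.app.handlers.render_pre.append(_on_render_pre)
    bpy.app.handlers.render_post.append(_on_render_post)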

I was thinking about an algorithm which simulates a first rough pass, estimating how many operations will be required for the entire rendering.

I noticed that Cycles takes a different amount of time for every sample, and it seems to me that this time gets longer as rendering progresses (the first sample is the quickest, the 400th is very slow).
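
A very crude version of that rough pass could be to render a small, low-sample preview, time it, and scale the result up to the full settings. This is only a sketch of the idea: it assumes the cost grows roughly linearly with pixel count and sample count, which, as noted above, Cycles does not guarantee, and the preview time also includes scene sync and BVH build.

    import time
    import bpy

    def estimate_still_render_time(preview_scale=0.1, preview_samples=10):
        scene = bpy.context.scene
        render = scene.render
        cycles = scene.cycles

        # remember the full-quality settings
        full_percentage = render.resolution_percentage
        full_samples = cycles.samples

        # quick preview pass: small resolution, few samples
        render.resolution_percentage = max(1, int(full_percentage * preview_scale))
        cycles.samples = preview_samples
        start = time.time()
        bpy.ops.render.render(write_still=False)
        preview_time = time.time() - start

        # restore the settings and extrapolate linearly
        render.resolution_percentage = full_percentage
        cycles.samples = full_samples
        scale = (1.0 / preview_scale) ** 2 * (full_samples / float(preview_samples))
        return preview_time * scale

    print("Estimated full render: %.1f s" % estimate_still_render_time())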

Does no one have any suggestions yet? I will wait a week to speak with my professor, who does research in such things (and he enjoys open source).