Better control of Cycles tile rendering?

So now that Cycles has tile rendering, are there plans to allow you to pick (in the UI and with Python) which tiles you actually want (or don’t want) to render?

Preferably by tile index (or a range of indices), and maybe also by what type of material/ray type makes up the majority of the tile?
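To make the request concrete, here is a rough pure-Python sketch of what index-based tile addressing means; the resolution and tile sizes are just example values, and nothing here is an existing Blender API:

```python
# Sketch only: compute the tile grid and address tiles by index.
import math

res_x, res_y = 1920, 1080        # render resolution (example values)
tile_x, tile_y = 256, 256        # Cycles tile size (example values)

tiles_per_row = math.ceil(res_x / tile_x)    # 8
tiles_per_col = math.ceil(res_y / tile_y)    # 5
total_tiles = tiles_per_row * tiles_per_col  # 40

def tile_rect(index):
    """Pixel rectangle of a tile, numbering tiles row-major from one corner."""
    col = index % tiles_per_row
    row = index // tiles_per_row
    x0, y0 = col * tile_x, row * tile_y
    return (x0, y0, min(x0 + tile_x, res_x), min(y0 + tile_y, res_y))

# "Render only tiles 0-9 on this machine" would then just mean:
for i in range(10):
    print(i, tile_rect(i))
```

With something like this exposed through the UI and Python, “render this subset of tiles here, the rest elsewhere” becomes trivial to script.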

This would be very useful for rendering stills (and animation, I suppose) distributed across multiple pieces of hardware, or for combining CPU and GPU rendering of the same image. Sometimes it is a lot cheaper to build a personal render farm from discarded hardware than to buy access to a professional one or purchase the latest and greatest monolithic hardware.

http://lists.blender.org/pipermail/bf-committers/2013-July/041295.html


Awesome, thanks!

I figured that if the UI and Python interface for selecting tiles to render were exposed to users, most could brew their own solution for network rendering of stills, with good control over how many tiles (or what type) went to specific machines based on knowledge of their hardware specs. It might even allow an automated process to determine the hardware speed of all the targets.
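For example, a home-brewed splitter could weight each machine’s tile count by a benchmark score; a minimal sketch, with made-up machine names and speeds:

```python
# Sketch: split a tile count across machines in proportion to a measured
# (or guessed) relative speed. Machine names and scores are invented.
def split_tiles(total_tiles, machines):
    """machines: dict of name -> relative speed. Returns name -> tile count."""
    total_speed = sum(machines.values())
    shares = {name: int(total_tiles * speed / total_speed)
              for name, speed in machines.items()}
    # Hand any rounding remainder to the fastest machine.
    remainder = total_tiles - sum(shares.values())
    fastest = max(machines, key=machines.get)
    shares[fastest] += remainder
    return shares

print(split_tiles(35, {"gpu-box": 4.0, "old-xeon": 1.5, "laptop": 0.5}))
# -> {'gpu-box': 25, 'old-xeon': 8, 'laptop': 2}
```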

But this is really good news :slight_smile:

Is there a Linux build with the distributed tile rendering feature?

It seems that this project has stopped.

Does anyone have more info?
I’m really interested in seeing distributed rendering implemented in Blender and Cycles.

I actually have a very old idea for how to distribute a big movie (really big: 100+ GB of textures per frame, 4K+ resolution, all of that). The idea is simple: a torrent-like network, where a main “node” assigns pieces of work (like tiles, though I think there is a better unit of work). It dates from when I participated in a few renderfarm.fi rendering sessions some years ago, where the manually handled queue was obviously a bottleneck for many people.

I think the best money-per-frame solution for now is two passes, the same way a two-pass compression algorithm works. The first pass collects draft data on how many samples each frame needs and stores that information in a file: just the frame number and some noise metric.
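A minimal sketch of what that first-pass file and the resulting per-frame sample budget could look like, assuming the noise metric itself comes from some low-sample draft render:

```python
# Sketch of the two-pass idea: pass 1 stores a per-frame noise metric,
# pass 2 turns it into a sample budget. The metric values are made up.
import json

def save_pass1(path, noise_by_frame):
    """noise_by_frame: dict of frame number -> noise metric from a draft render."""
    with open(path, "w") as f:
        json.dump(noise_by_frame, f)

def sample_budget(path, total_samples):
    """Distribute a global sample budget proportionally to measured noise."""
    with open(path) as f:
        noise = json.load(f)
    total_noise = sum(noise.values())
    return {frame: max(1, round(total_samples * n / total_noise))
            for frame, n in noise.items()}

save_pass1("noise.json", {"1": 0.8, "2": 0.1, "3": 2.4})
print(sample_budget("noise.json", 3000))
# the noisy frame 3 gets the lion's share of the samples
```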

The second and main pass offloads the actual work to all the available nodes. I think an MCMC/MLT sampler can show the best performance for this task. It can redistribute samples within frames, across motion blur sub-“frames” (there is no such thing in real motion blur, it’s more like a continuous time domain, but I think you get the idea), and, more importantly, between frames. We can run MCMC/MLT on a package of frames, say 8 frames at a time, and offload to the network nodes not square “tiles” but initial MCMC seeds. That is a double gain: it is more cache friendly, and because the sampler can redistribute samples we get an even noise level across the whole movie, with no need to stare at every frame and adjust samples manually.
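A rough sketch of that “send seeds, not tiles” work unit; the package and chain sizes, field names, and splitting scheme are purely hypothetical, since no such Cycles interface exists:

```python
# Sketch: each node gets the same 8-frame package plus a disjoint range of
# Metropolis chain seeds, so the sampler can spend samples anywhere in the
# package. All constants and names here are hypothetical.
FRAMES_PER_PACKAGE = 8
CHAINS_PER_NODE = 64

def work_unit(package_index, node_index):
    first_frame = package_index * FRAMES_PER_PACKAGE + 1
    first_seed = node_index * CHAINS_PER_NODE
    return {
        "frames": list(range(first_frame, first_frame + FRAMES_PER_PACKAGE)),
        "seeds": list(range(first_seed, first_seed + CHAINS_PER_NODE)),
    }

# Package 0 split across three nodes: same frames, disjoint seed ranges.
for node in range(3):
    unit = work_unit(0, node)
    print(node, unit["frames"], unit["seeds"][:3], "...")
```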

We need to redistribute samples in any highly dynamic scene, like an actor exiting a dark cave into a bright valley, or an action fight with extreme sub-frame light bursts all over the place.

This is just an idea; no code exists except for some early experiments with an MCMC sampler in Cycles.

storm_st, you have a good idea.

I don’t want to start a render engine war, but here are some examples from engines I have used.

V-Ray: the new version has a system like yours. When you start a DR render, it saves a complete scene file (meshes and textures), I think in a compressed format, and sends it to the other machines, so all of V-Ray’s problems with file paths are gone. In addition, you can choose a DR cache size so the file can be stored on the slaves and only the modified objects/materials/textures will be transferred.
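The “transfer only what changed” part could work like a content-addressed cache; a sketch under that assumption (this is not how V-Ray actually implements it):

```python
# Sketch: only send assets whose content hash differs from what the slave
# already holds. Asset names and contents are invented for illustration.
import hashlib

def assets_to_send(local_assets, slave_cache):
    """local_assets: name -> bytes. slave_cache: name -> hash the slave holds."""
    out = {}
    for name, data in local_assets.items():
        digest = hashlib.sha1(data).hexdigest()
        if slave_cache.get(name) != digest:
            out[name] = data          # new or modified: transfer it
            slave_cache[name] = digest
    return out

cache = {}
print(list(assets_to_send({"wall.jpg": b"aaa", "mesh.bin": b"bbb"}, cache)))
print(list(assets_to_send({"wall.jpg": b"aaa", "mesh.bin": b"changed"}, cache)))
# the second call transfers only mesh.bin
```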

LuxRender: another good example of a working DR system. If I’m not mistaken, it sends the scene to the slaves, every slave computes samples, and after a preset time interval each one sends its computed samples back to the master…
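The merge on the master’s side could be as simple as a sample-count-weighted average of the slaves’ buffers; a sketch of that assumption (not actual LuxRender code):

```python
# Sketch: average per-pixel results weighted by how many samples each
# slave contributed. Buffer contents and counts are made-up examples.
import numpy as np

def merge(buffers):
    """buffers: list of (pixels, sample_count); pixels are float RGB arrays."""
    total_samples = sum(n for _, n in buffers)
    return sum(pixels * n for pixels, n in buffers) / total_samples

a = np.ones((4, 4, 3)) * 0.9   # slave A's average after 100 samples
b = np.ones((4, 4, 3)) * 0.7   # slave B's average after 300 samples
print(merge([(a, 100), (b, 300)])[0, 0])  # -> [0.75 0.75 0.75]
```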

I hope someone can improve this area where Cycles is weak.