How to change the number of tiles rendered simultaneously with the GPU (Cycles)

Hello everyone

I’ve started using Blender Cycles for my animations, and I read the article on how to speed up rendering in Cycles. One tip was to switch from the CPU to the GPU for rendering. However, my CPU renders about 8 tiles at a time, whereas my GPU only renders 1 tile at a time. Which seems odd, because it takes me 26 minutes to render a 500-sample scene with my CPU, and roughly the same amount of time with my GPU. Also, watching other people’s tutorials on YouTube, when it comes to GPU rendering… mine is the only one which does one tile at a time :frowning:

I have a 4 GB Nvidia GeForce GT 650M graphics card and an Intel i7-3630QM 2.4 GHz processor, if that is of any help.

Mine also does one tile at a time, but I don’t see that as a problem… the issue for me is render time. Is yours slow?

The tile size you pick for each device makes the difference: small tiles for the CPU, so each thread gets its own small bucket and all the cores stay busy, and large tiles (usually 256x256) for the GPU, which gets through the requested samples fastest on a single big tile.
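
If you’d rather set this from a script than from the Performance panel, here’s a minimal sketch using the Blender 2.6x/2.7x Python API (the tile properties moved around in later versions, so treat the attribute names as version-specific):

```python
import bpy

scene = bpy.context.scene

# CPU rendering: small buckets, one per thread, keep all cores busy
scene.render.tile_x = 32
scene.render.tile_y = 32

# GPU rendering: one big bucket works best, e.g. 256x256
# scene.render.tile_x = 256
# scene.render.tile_y = 256
```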

I don’t understand how increasing the tile size is supposed to decrease render times. Doesn’t it just take longer for each tile to be rendered?

No, your GPU is capable of larger bucket rendering.

http://www.blenderguru.com/4-easy-ways-to-speed-up-cycles/

Thanks for the link

But I’m just not getting the same results as everyone else seems to get with their GPU rendering. I mean, the guy from Blender Guru cut his render from 9 minutes to 45 seconds by switching off his CPU. My CPU and GPU, on the other hand, are performing equally quickly. I’m not sure how a 4 GB graphics card translates to the same rendering power as a 2.4 GHz processor.

Is there an underlying secret to using the maximum potential of the GPU?

  1. The amount of RAM on your graphics card does not affect rendering speed at all; it only determines how much geometry and texture data the card can hold. You should be comparing things like clock speed, CUDA cores, memory bandwidth, and fill rate, not the RAM on the graphics card.

  2. They’re using much more capable graphics cards than yours. A GTX 580, for example, from two years ago, has almost double the cores and almost double the fill rate of your GT 650M, and scores three times higher on benchmarks like 3DMark.

  3. As was mentioned earlier, CPUs can only handle smaller tiles efficiently, since each core works through its tile one sample at a time. GPUs handle much larger tiles at once because they process large datasets in parallel.

  4. There is no magical “GPUs calculate n times faster than CPUs” formula. It depends on both your GPU and your CPU. In your case, the meager GPU isn’t much better at rendering than the CPU. It is an M-series card, after all, built for mobile applications at reduced power consumption. You can’t expect the performance you’d get from a dedicated, discrete GTX-series card that burns 100-150 watts and costs $500.

  5. Cycles doesn’t do multiple tiles at once on a single GPU. That may be different with multiple GPUs, but I’m not running SLI and can’t say for sure. (If you want to switch Cycles over to the GPU yourself, see the sketch after this list.)
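
For reference, switching Cycles from CPU to GPU can also be done from Python. This is just a sketch against the 2.6x/2.7x API; the compute-device settings moved under the Cycles add-on preferences in later releases:

```python
import bpy

# Enable CUDA in the user preferences (2.6x/2.7x location)
bpy.context.user_preferences.system.compute_device_type = 'CUDA'

# Tell Cycles to render on the GPU instead of the CPU
bpy.context.scene.cycles.device = 'GPU'
```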

Is there an underlying secret to using the maximum potential of the GPU?

Unfortunately, the answer to that is to buy a video card suitable to use for rendering. Sorry, I know that’s probably not the answer you’re looking for.

Thanks for the explanation, jkrushen, it really helped a lot.

Unfortunately, the answer to that is to buy a video card suitable to use for rendering. Sorry, I know that’s probably not the answer you’re looking for.

Ahh well, at least I know what’s going on now. Thanks mate :slight_smile:

Correct. Blender is currently able to render with up to 4 GPUs simultaneously (this seems to be a bug, so it should render with more than 4 if fixed/changed). That means distinct graphics cards, or dual-GPU cards like the GTX 590, where each GPU counts as one. It also renders with mixed GPUs, so you can cut your render times with any additional (compatible) GPU (I currently use a GTX 470 and a 670, for example). The only problem with this setup is when the GPUs have different amounts of memory: the scene has to fit into the memory of the smallest card, so you can run into out-of-memory errors even though one of the GPUs could handle it.
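
If you do run a multi-GPU setup like that, here’s a sketch for picking the compute device from Python, again assuming the 2.6x-style API. The 'CUDA_0'-style identifiers are an assumption and depend on your hardware, so check the Compute Device dropdown in the user preferences for the names your build actually exposes:

```python
import bpy

prefs = bpy.context.user_preferences.system
prefs.compute_device_type = 'CUDA'

# Identifier names like 'CUDA_0' / 'CUDA_1' are assumptions -- with
# several cards installed the enum grows one entry per device (and,
# depending on the build, a combined multi-GPU entry)
prefs.compute_device = 'CUDA_0'

bpy.context.scene.cycles.device = 'GPU'
```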