Using GPU to Render?

Hi!

First time posting. Sorry if I placed this thread in the wrong forum section or messed up on something else. I’m also a new Blender user.

I downloaded Blender 2.73. I have enabled ‘CUDA’ under ‘Compute Device’ in ‘User Preferences’ and have selected my Nvidia 970. However, when I render something, my CPU actually renders faster than my GPU, which shouldn’t be the case, right?
According to this: http://wiki.blender.org/index.php/Doc:UK/2.6/Manual/Render/Cycles/GPU_Rendering

I actually timed the renders (quite unscientifically; I was using my phone):
Nvidia GTX 970: 61.5 sec
Intel 5820K: 54.27 sec

According to the wiki, the GPU should render much faster, right?
But my GPU is slower than my CPU by about 7 sec, which could add up over time.
All components at stock speed.

Are the Nvidia 900-series GPUs really so poorly optimized that they end up slower than CPUs, or did I miss something in the settings? >.<

Thanks

P.S.
What is the typical file size of a 3D Blender file? This model in the pic, which isn’t very fancy or anything, is only about 700 KB, so… is that normal? If so, I was expecting much larger file sizes, like those of HD videos. Oh yeah, that’s a white emission plane in the background of the image.

In your screenshot under the Render settings you have the Device still set to CPU.

…and to get the best out of the CPU and GPU, you will have to adapt the tile sizes accordingly (Render > Performance tab):
CPU rendering = works best with small tile sizes (e.g. 32 x 32 or even 16 x 16)
GPU rendering = works best with large tile sizes (e.g. 256 x 256)
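If you prefer to script it, the same settings can be flipped from Blender’s Python console. This is only a sketch against the 2.7x `bpy` API (property names changed in later versions), and it naturally only runs inside Blender:

```python
import bpy  # Blender's Python API; only available inside Blender

# Enable CUDA as the compute device in User Preferences (2.7x API names)
bpy.context.user_preferences.system.compute_device_type = 'CUDA'

scene = bpy.context.scene
scene.cycles.device = 'GPU'   # the per-scene Device setting under Render

# Large tiles for GPU rendering; drop to e.g. 32 x 32 when rendering on the CPU
scene.render.tile_x = 256
scene.render.tile_y = 256
```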

A CPU can be as fast, but in general I have found GPUs outperforming CPUs by a lot.

Consider this: my GTX 970 was $330, and I run an 8-core Xeon system. The CPUs alone, which are slower, cost a lot more!

If you pack in even two GTX 970s, you get amazing speed.

I had that screenshot just to show you guys what I rendered, something I consider not very hard to render. When rendering with the GPU, I did set it to GPU under the tab on the right side.

Ahh, I see it. My image’s tile size is 64x64.

Just as a follow up, I increased the tile size to 128x128 and the GPU rendered significantly faster than my CPU.

CPU: 59.59 sec
GPU: 32.25 sec

Thanks everyone who posted in this thread and for helping me :stuck_out_tongue:

You will need to change the render device to GPU under the render properties as well as in your preferences. I found that my 560 Ti was about 10x faster than my i5 [email protected]. I now have a GTX 970, and that is about 20x faster than my CPU. And since I use them together at the same time, it goes just under 30x faster than the CPU. So yeah, I usually find that a GPU will be way faster than a CPU for Cycles.
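For what it’s worth, those numbers are roughly consistent with a simple throughput model: if each device renders its own tiles, their rates add. A quick sketch (the ~10x and ~20x figures are from this post; the additive model is an assumption, and real-world scheduling overhead eats a little of it):

```python
# Render rates relative to the CPU alone (CPU = 1.0).
cpu = 1.0
gtx_560ti = 10.0  # ~10x the CPU, per the post above
gtx_970 = 20.0    # ~20x the CPU

# Assumption: the devices work on disjoint tiles, so their rates simply add.
combined = gtx_560ti + gtx_970
print(combined / cpu)  # -> 30.0, i.e. "just under 30x" once overhead is counted
```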

If you have done that, there may still be some optimizations you can do to speed things up a little. For instance, the add-on linked below will optimize the tile size based on what you are rendering with. In general, you want to get as close to 32x32 as possible when using your CPU, and as close to 256x256 as possible when using your GPU. I say “as close to” because in reality it will be faster if the tile size divides the total render size evenly. So if you want to render a 500x500 image on the GPU, you would use 250 rather than 256. This add-on does all that for you automatically.

http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Render/Auto_Tile_Size
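The divisor rule above is easy to sketch in plain Python. This is just an illustration of the idea; the linked add-on does the real work inside Blender:

```python
def best_tile_size(render_dim, target):
    """Pick the divisor of render_dim closest to the target tile size.

    A tile size that divides the render dimension evenly avoids small,
    inefficient leftover tiles at the image edges.
    """
    divisors = [d for d in range(1, render_dim + 1) if render_dim % d == 0]
    return min(divisors, key=lambda d: abs(d - target))

# For a 500x500 render on the GPU (target 256), this picks 250:
print(best_tile_size(500, 256))  # -> 250
# For the same render on the CPU (target 32), it picks 25:
print(best_tile_size(500, 32))   # -> 25
```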

Yeah, setting the tile size to 256x256 further reduced the GPU render time to ~25 seconds.
Just curious: when I try to render with a 256x256 tile size on my CPU, it renders about halfway and then seems to freeze without ever finishing, so I just hit ‘Esc’. Is the CPU not able to render 256x256 tiles or something?