GPU Rendering with Cycles - Complete Guide

Hi everyone,

Ever since the release of Cycles with GPU support, there has been a multitude of questions and problems about GPU rendering in Cycles.

A user, olesk, has been kind enough to put together a very thorough guide that answers all the major questions, concerns, and problems about GPU rendering.

Before starting a thread about your GPU woes, please give this a full read-through:

[redacted temporarily]


I was about to make a new thread here, but yesterday I finally managed to get my GTX 660 Ti rendering under Ubuntu. I’m using the irie PPA, which doesn’t come with a precompiled CUDA kernel, so I had to install it manually :)

A useful link for Ubuntu users:

Very good :)

It’s stuff like this with Linux that makes it hard for Windows users to make the move - geez, what a complicated procedure just to get a card working with CUDA! I’ll probably have to do this myself soon if all goes to plan and I move over to Ubuntu completely - I’m scared! ;)


I’m having a problem with 2.66. I’m using an ATI Radeon graphics card; in 2.65 it works smoothly, but since I installed 2.66, Blender doesn’t detect it in the System preferences.

Hi, I started a thread regarding the installation of my Nvidia card in Ubuntu 12.10. There have been some recent changes in how you should correctly install a new card in Ubuntu - this is the method I found works the easiest.

My card: Nvidia Geforce GTX 560 Ti

In Ubuntu 12.10 (not sure about 12.04 - haven’t tested) the ‘linux headers’ package is missing on a first install. So on a clean install of Ubuntu 12.10, I installed all the updates, including languages (removing software I didn’t require first, to save bandwidth on the update).

Rebooted after updates.

Next I went into “Software Sources” (there used to be a tool called ‘Jockey’, but it isn’t used anymore). On the last tab, “Additional Drivers”, “Nouveau” is the default Ubuntu (open-source) driver; I selected the 310 experimental Nvidia driver instead, which is what others had suggested.

After the driver installation I rebooted.

Upon reboot the Nvidia Driver kicked in and I could access the Nvidia Settings in the Dash.

Hope that helps someone ;)


I’m buying a new video card.
What is the maximum number of CUDA cores that Cycles will support?

I was finished with that guide after reading this:

OpenCL, the industry standard framework for GPU processing,

If this is the guide you want to consider as a reference, good luck.

Also, there isn’t much to tell: if you are buying an Nvidia GPU for Blender, you need CUDA compute capability 2.x or higher, and you can use this page as a reference. As you can see, each GPU has its own dedicated section/page including specs; you want to take a particular look at the number of CUDA cores and the amount of memory on the video card. As a suggestion, I wouldn’t bet on CUDA, but if you do, I don’t think it makes much sense to buy anything other than a GTX.
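As a rough illustration of that compute-capability check, here is a Python sketch; the capability numbers are ones I’ve looked up for a few cards mentioned in this thread, so verify them against Nvidia’s own spec pages before buying:

```python
# Hedged sketch: check whether a card meets Cycles' stated requirement
# of CUDA compute capability 2.x or higher. The capability values below
# are a small hand-collected sample, not pulled from any official API.
COMPUTE_CAPABILITY = {
    "GTX 580": (2, 0),     # Fermi
    "GTX 660 Ti": (3, 0),  # Kepler
    "GT 330M": (1, 2),     # pre-Fermi, below the 2.x cutoff
}

def supports_cycles(card):
    """True if the card's compute capability is at least 2.0."""
    return COMPUTE_CAPABILITY[card] >= (2, 0)

for card, cc in COMPUTE_CAPABILITY.items():
    status = "OK for Cycles" if supports_cycles(card) else "unsupported"
    print(f"{card} (compute {cc[0]}.{cc[1]}): {status}")
```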

Remember that CUDA is not the solution to everything. Cycles also doesn’t scale that well once you have a lot of geometry: the GPU’s memory quickly fills with data, slowing down render times. There can also sometimes be problems with rendering quality, since many features are still considered experimental.
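To make the memory point concrete, here is a back-of-envelope Python sketch; the bytes-per-triangle figure is purely my own assumption for illustration, not a number taken from Cycles:

```python
# Hedged sketch: estimate whether a scene's geometry plus textures fit
# in GPU memory. BYTES_PER_TRIANGLE is an assumed round figure covering
# vertices, normals, and BVH overhead - illustration only.
BYTES_PER_TRIANGLE = 112

def vram_estimate(triangle_count, vram_gb, texture_mb=0):
    """Return (estimated MB needed, whether it fits in vram_gb of VRAM)."""
    geometry_mb = triangle_count * BYTES_PER_TRIANGLE / (1024 ** 2)
    needed_mb = geometry_mb + texture_mb
    return needed_mb, needed_mb < vram_gb * 1024

needed, fits = vram_estimate(5_000_000, vram_gb=2, texture_mb=512)
print(f"~{needed:.0f} MB needed; fits in 2 GB: {fits}")
```

Once the estimate exceeds the card’s memory, the render either fails or slows down dramatically, which is the behaviour people keep running into.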

Consider buying a much better CPU rather than spending money on a GPU, at least at this point.

More about Blender and Cuda here.

Consider buying a much better CPU rather than spending money on a GPU, at least at this point.

Could anybody break all these solutions down into pricing lists? Say, a minimum GTX with CUDA 2.0 and a reasonable amount of on-board memory, versus a CPU with a comparable amount of RAM? Or is that too much trouble? I think the monetary considerations of following the Cycles/CUDA development path are of greater use to me - the average, poorer, non-professional user - than just the technical listing of render times etc. Am I making sense? Which solution would let me use Cycles with any good results, time considerations aside, for say $100 or $200?

Getting a better CPU could cost as much as or more than a medium-quality GTX, because you may have to get a whole new motherboard to support it. It might be easier to get a demo copy of Maya than to have to rely on Cycles.

How many users in Blender World (good name for an online magazine or blog) are professionals as opposed to newbies and amateurs - do you know? What kind of graphics setup do you, John Williamson, use/have? Or Mr. Brecht Van Lommel? Can you tell me/us/anyone?
Thank you for being a major player; your work rocks.

I am using Bumblebee for my GT 540M in Ubuntu 12.10.

This link helped me install Bumblebee with the GUI:

I am using a 2010 MacBook Pro for Blender; it has an NVIDIA GeForce GT 330M 512 MB, which I have now set up with CUDA. Previously, CUDA didn’t show up under User Preferences > System > Compute Devices. After researching, I realized I might have an out-of-date graphics card, and as many of you know, you can’t simply swap cards on a Mac. I went to NVIDIA’s website, and it told me my card could work with CUDA, but I needed to download the toolkit and then run the driver. I downloaded the toolkit and ran it; it downloaded and then disappeared. I ran a search for it and was offered an upgrade for my card. I ran the upgrade, and now the option shows up in User Preferences > System > Compute Devices. I select it, then select GPU under the render options, etc. When I try to render, it either processes very quickly but all I get is a black screen, or it still takes forever to render. Any ideas? Thank you, Helojammer.

Hi All,
Please kindly help me solve my problem with the slow performance of
Cycles rendering in Blender 2.67a.

Issues:

  1. In my office, I use Blender on an Acer Aspire 4740G notebook: Intel Core i5 with the standard 2 GB of RAM and an Nvidia GeForce 310M 512 MB. As this card does not support CUDA GPU rendering (compute capability 1.2), I have to rely on the power of the CPU. When rendering the BMW car benchmark test, I got a time of 24 minutes. I have tried changing the tile size, but it doesn’t help. It is very slow, and I have to use this notebook for most of my working time.

  2. At home, I have a desktop computer with an Intel Core i5, ATI Radeon HD6740 1 GB, 8 GB of RAM, and an SSD drive. Rendering the BMW scene, it takes 6 minutes on the power of the CPU, as Blender doesn’t support this graphics card. I use this computer only at night and for a limited time.

  3. Assuming that the slow performance of Cycles rendering on my notebook was caused by low RAM capacity, today I bought two sticks totaling 8 GB of DDR3 RAM and installed them. But I do not notice any significant improvement. The rendering time is still very slow - sometimes 22 minutes, and sometimes longer than before I added more memory.

Questions:

  1. Why do the same CPU speed and amount of RAM in my two computers (both running Windows 8 64-bit) result in such different performance (6 minutes vs. 24 minutes)? What is the real cause? Is it the presence of the SSD drive? Does the Cycles render engine need more than 8 GB of RAM, so that it uses the hard drive for processing space?

  2. How can I utilize the additional memory, which I have bought and cannot return? If I set aside some of my memory as a RAMDISK and move the Blender application files into it, will that improve Cycles rendering? Does Cycles use hard drive space for rendering? How do I change the location to the RAMDISK?

  3. Is there any other solution to get better performance on my notebook, short of buying a new one such as the Lenovo Y-500 with a GTX graphics card?

Thank you very much for your kind help and advice.

I have a free PCI-E 1x slot in my system. I know that PCI-E speed is not important for rendering, but I’d like to know whether it affects the preview rendering mode in the viewport.

P.S. I never got an answer to my questions on this forum; I hope this time I will be lucky…

I have tried placing the Blender program on a RAMdisk (a DataRAM product).
I found no significant improvement in Cycles rendering.
Still wondering how to improve the rendering speed of my notebook (Core i5 with an Nvidia 310M graphics card).

Hello,

I’m running a Dell laptop under Linux Mint. I have an Intel Core i7 and an Nvidia 650 card. I have everything installed normally: running Blender through optirun isn’t an issue, and CUDA is running fine. When I run the optirun glxgears/spheres tests, I get much faster results than on the CPU. However, the performance is surprising me: I have a heavy scene which takes about 6-7 hours to render on the CPU, and closer to 10 hours on the GPU. My tiles are relatively large and divide evenly into the size of my scene, etc.

What can I do to get results closer to what I would expect?
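For context, here is the tile arithmetic I’ve been comparing (just a sketch; the resolution and sizes are illustrative - the rule of thumb I’ve read is that GPUs like fewer, larger tiles while CPUs like many small ones):

```python
# Sketch of how tile size changes the number of tiles for a frame.
# Rule of thumb people quote: GPU likes fewer, larger tiles (e.g. 256x256),
# CPU likes many small ones (e.g. 32x32). Sizes here are illustrative.
import math

def tile_count(width, height, tile):
    """Number of tiles needed to cover a width x height frame."""
    return math.ceil(width / tile) * math.ceil(height / tile)

for tile in (32, 64, 128, 256):
    print(f"{tile}x{tile}: {tile_count(1920, 1080, tile)} tiles")
```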

I get horrific performance when using the GPU to render in Cycles - my entire computer slows to a sluggish crawl, and it eats up all my available RAM: the 2 GB on the GPU and then the 6 GB on the motherboard.

Oddly, when rendering with the CPU my computer runs as smooth as glass, albeit the render takes an eternity to complete.

Somewhere there is a hideous memory leak occurring.

I don’t mean to reply to a post that may be long forgotten, but in case this helps anyone else: I had a similar issue, which I solved by adding another video card.

I used to render in Cycles using my one and only video card (a GTX 580 Classified), which had two displays connected to it. When I rendered something, it would slow everything else down, so I couldn’t really use Windows or any other application (Chrome) while the render took place without it all lagging. It was some CUDA errors (related to a different issue) that led me to a Blender Wiki page about GPU rendering, which said the only way to solve the issue I was having was to add a second video card to run my displays, leaving the other free for rendering.

So I tossed in a cheaper, smaller GTX 650 that can run three displays concurrently (which I wanted anyway) and set Blender to use my GTX 580 for GPU rendering. This solved half my issues: I can now use Windows, Chrome, watch movies, etc. while Blender is rendering, without any lag whatsoever. So this is good.

And as a bonus, my GTX 580 renders even faster now - and I am not even using both GPUs combined. I think the 580 renders much faster because it’s no longer driving any displays; it just sits in the PC rendering for Blender. In the BMW test, with the 580 as the sole card in the PC, connected to my displays, it averaged about 50 seconds. Now that I’ve added the 650 to run my displays only, the 580 renders the BMW test in about 30 seconds. So far so good.

To add: I am NOT using SLI (my board doesn’t even support it); these are two independent cards. But adding the second, weaker, cheaper card just to run my displays really benefited my setup.

So I understand you are running Windows and all other programs from the lesser card, correct? The card used for rendering in Blender does not even output to a screen?
That is all good if you want the benefits of a good card reserved for Blender (and besides, your other card is high-end too).

  • I guess the card that runs your operating system’s GUI must also be the one running the Blender GUI? So, necessarily(?), the better card is not connected to the monitor.
  • Is there any way to switch cards once out of Blender, so that the better card becomes the main card for other software - other than physically pulling plugs?
  • How is a card that is “in the box” but without any output configured in Windows 7?
  • Inside Blender, which of the two cards renders the 3D window when it is in Rendered preview mode? Will that get any faster with a separate card dedicated to rendering, or only the “final” renders [F12]?

Correct, at least when I did have the lesser card. I returned the GTX 650 for a GTX 770, and later got a 780 Ti. But the improved results I posted about using two cards still hold.

  1. The lesser card is used for Windows (or any other OS); the GTX 650 I had supported three displays at the same time. The better card (GTX 580 Classified) was NOT connected to any displays.

  2. Not that I know of. If I wanted to play high-end games with my better card, I would have to physically move the displays to the better card. I do not know of any game that supports switching through software (or any other application), unlike Blender, in which you can choose which card to render with regardless of which is the display card.

  3. I have multiple cards connected to the multiple PCI-E slots on my motherboard; they are not in SLI. I basically have one card doing nothing but Blender rendering, and because it’s dedicated to Blender, it renders much faster than when it was also a display card.

I should note that the GTX 580 I use for rendering saw the significant performance increase when I did this, but my GTX 780 Ti did not when I tried it the other way around. Overall, the 780 Ti > 580 Classified in my experience with Blender rendering; combined, they rock.

  4. I believe the viewport is rendered by the same card your displays are connected to. When I used the 650, it would lag with a Subsurf modifier set higher than 3; my 780 Ti, set to 5, still runs pretty smooth (the 580 is fine set to 4). So, to me, that means the viewport uses the display card, not the render card I have set.

Hope that helps.

It does help indeed - thank you!
Because one question leads to another…

Does Windows recognize both as graphics cards (even though one is unconnected)?

  1. Not that I know of. If I wanted to play high-end games with my better card, I would have to physically move the displays to the better card. I do not know of any game that supports switching through software (or any other application), unlike Blender, in which you can choose which card to render with regardless of which is the display card.

I assume that, in addition to physically switching the plugs, you have to “tell” Windows through software? Do you get a black screen until you do, or is it like plug-and-play: switch the cables, and the other card automatically takes control?

  4. I believe the viewport is rendered by the same card your displays are connected to. When I used the 650, it would lag with a Subsurf modifier set higher than 3; my 780 Ti, set to 5, still runs pretty smooth (the 580 is fine set to 4). So, to me, that means the viewport uses the display card, not the render card I have set.

That might be a problem for workflow if your lesser card is mediocre… And a related question:
When rendering with [F12], does the render show the tiles being formed as the unconnected card renders, or do you only see the end result (through the GUI card) once the image is fully created?

-Christos