Hi guys, my name is Paolo, and I need your help. I want to know what the cheapest GPU is that works with Blender Cycles.
Right now I only have a very low-end GPU, an Nvidia GeForce 210 1GB. I’m planning to buy a new one, but I don’t know which cards will work with Cycles. I searched for GPUs that work with Cycles, and all I found was a bunch of high-end cards. I was just hoping to find the lowest-end GPU that still works fine in Cycles. I hope you guys can help me. Thanks in advance.
The GT640 works surprisingly well with Cycles, and is faster than my slightly dated quad-core CPU (four years old). I went for the Zotac 2GB model with a fan (they also do a fanless one for a few pounds less), as AMD dropped support for my four-year-old card.
Beware: the GPU’s onboard RAM is a hard limit on your scene. Cycles must load all scene data into GPU RAM; it cannot “swap” with system RAM, so once textures plus triangle data exceed that limit, the scene simply will not render. That’s one reason everyone tries to get a high-end model. The other, of course, is speed.
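To put rough numbers on that hard limit, here’s a back-of-the-envelope sketch. The 4 bytes per pixel assumes uncompressed 8-bit RGBA textures, and the ~100 bytes per triangle is a ballpark guess for BVH/geometry overhead, not an actual Cycles figure:

```python
# Rough estimate of whether a scene's data fits in VRAM.
# Assumptions (not Cycles internals): 8-bit RGBA textures at
# 4 bytes/pixel, and ~100 bytes of geometry/BVH data per triangle.

def scene_vram_bytes(textures, triangle_count, bytes_per_triangle=100):
    """textures: list of (width, height) sizes in pixels."""
    texture_bytes = sum(w * h * 4 for w, h in textures)
    return texture_bytes + triangle_count * bytes_per_triangle

# Four 2048x2048 textures plus a million triangles:
usage = scene_vram_bytes([(2048, 2048)] * 4, 1_000_000)
print(f"{usage / 2**20:.0f} MiB")  # prints "159 MiB" -- already a third
                                   # of a 512 MB card, before Cycles'
                                   # own render buffers are counted
```

A handful of 4K textures would blow well past a low-end card on their own, which is why the texture budget usually matters more than the polygon count.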
I’d recommend taking a look at the benchmark and finding the best GPU you can afford. Remember that if your GPU isn’t at least faster than your CPU, there’s no point at all. And heed storm_st’s comment about RAM: with a 512MB GPU, for example, you won’t be able to render very complex scenes…
Hi guys, thanks for the responses. I just want to point out a little mistake in my post… I’m actually asking about the GPU rendering option. I currently have an Nvidia GeForce 210, and Blender Cycles works just fine. My problem is that I can’t enable the GPU rendering option, and I wonder if it’s the GPU, since only specific GPUs support GPU rendering in Blender Cycles. That’s my real problem… again, I’m so sorry.
Blender Cycles works fine; I just can’t enable GPU rendering. Right now I only have CPU rendering, which is quite slow.
To be honest with you, I don’t think it really matters. I did a test on a GTX 260, and it rendered so slowly that my old PC with an Intel Core 2 Duo was much faster, and that card is (I think; I haven’t checked the benchmark) quite a bit faster than yours. So unless I’m mistaken about your card, you should stick with CPU rendering or start saving up for a more powerful GPU. Sorry…
Oh, and the GeForce 210 has only 512MB of RAM, and on Windows at least, 200–300MB of that can easily be taken by the operating system, meaning that anything more complex than the default cube will kill Cycles with an out-of-memory error. If you have an integrated GPU that you can use for Windows, that will help, but even with the full 512MB of VRAM available it will be extremely tight.
Ah, I see. Hmm. So should I use a GTX 260? Would GPU rendering work on that card? As I said, Cycles is working, but only with CPU rendering, and I want GPU rendering.
Maybe you got a slight misconception about what I’m after… let me break this down a little more:
#1 Can I enable GPU rendering with my current graphics card?
#2 If yes, how?
#3 If not, do I need to buy a new graphics card?
#4 Which graphics card? Suggest one to me, and not a high-end card if possible (I believe that’s doable).
My graphics card has 1GB of RAM, and I’m also using an Intel Core 2 Duo CPU.
You mentioned that I should start saving up for a more powerful GPU. Can you specify what kind of more powerful GPU I should get?
#1 and #2: Check whether you have set your settings like this. (You really should spend some time reading everything there, not only that FAQ entry.)
#3: If you cannot choose your GPU in the menu from the link above with Blender 2.65, then no, you cannot use your GPU with Cycles.
#4: Go to the Blender Cycles benchmark, compare performance, do your research, find the fastest card you can afford, and buy it.
If you can afford it, the best card at the moment is the GTX 580 (not the GTX 680, which is optimized for gaming). But again: use the benchmark and find the best tradeoff between what you can afford and the performance you get!
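For what it’s worth, the same two settings from that FAQ entry can also be flipped from Blender’s Python console. A minimal sketch for the 2.6x-era API; the property paths are from memory and may differ between versions, so treat them as an assumption, and the try/except just lets the snippet run outside Blender too:

```python
# Enable Cycles GPU rendering from a script (Blender 2.6x-era API,
# paths assumed from memory). bpy only exists inside Blender, so we
# fall back gracefully when run elsewhere.
try:
    import bpy
    # Tell Blender which compute backend to use...
    bpy.context.user_preferences.system.compute_device_type = 'CUDA'
    # ...and tell the current scene's Cycles settings to render on it.
    bpy.context.scene.cycles.device = 'GPU'
    enabled = True
except ImportError:
    enabled = False  # not running inside Blender

print("GPU rendering enabled:", enabled)
```

If setting 'CUDA' fails inside Blender, that’s the same symptom as the missing menu entry: the installed card simply isn’t supported for GPU rendering.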
I wouldn’t buy anything less than a 560 Ti if you want to use Cycles GPU rendering. Also keep in mind that if your computer came with a 210, it probably doesn’t have the beefiest of power supplies, and the motherboard may not even have full PCIe x16 capabilities. Do a lot of research on your particular setup before you try to upgrade, as it can get costly if you don’t know what you’re doing.
Actually, the PCIe speed doesn’t matter as long as you have a physically large enough PCIe slot (for example, a PCIe x8 slot that is physically x16).
An AMD Radeon HD 6950, Radeon HD 6870, or Nvidia 560 Ti will do you well. I can’t tell you to immediately pick Nvidia over AMD, mostly because I haven’t tested an Nvidia card myself.
I’m currently using an HD 7950 for Cycles, and I’ve got to say it works nicely.
What build are you using for AMD GPU rendering? I didn’t realize unpatched builds had working OpenCL support (at least not with full functionality) just yet; I was under the impression that the only 100% functional Cycles rendering is with CUDA (which would mean no AMD)?
From the official OpenCL page: “The immediate issue that you run into when trying OpenCL, is that compilation will take a long time, or the compiler will crash running out of memory. We can successfully compile a subset of the rendering kernel (thanks to the work of developers at AMD improving the driver), but not enough to consider this usable in practice beyond a demo.” In short, this means only NVIDIA cards will render in Cycles. Am I missing something?
Yes: you are assuming that OpenCL must already be working for AMD cards to work. OpenCL and CUDA are only means of accelerating the process; neither is actually required as long as you have a GPU.
EDIT: Never mind. It looks like OpenCL is required for the GPU. A real shame, though. I can understand that it’s not the Blender Foundation’s problem, but at least leave the feature in and keep tweaking it. And yes, I’ve been using an OpenCL GraphicAll branch XD.
Ok, that explains it; I was getting seriously confused there for a moment.
Beware of the fact that OpenCL and CUDA are really very different technologies.
OpenCL does not require any particular GPU or CPU; it requires an OpenCL-capable piece of hardware, that’s it, and there are many OpenCL-capable devices on the market today, including CPUs, GPUs, and dedicated OpenCL cards.
CUDA, on the other hand, is really limited and only works on a selected range of products from Nvidia ( https://developer.nvidia.com/cuda-gpus ). This also means that under certain OSs (Mac OS X) CUDA is basically useless and nonexistent, because it’s not a real cross-platform or industry standard; it’s just a framework from one brand, and it’s subject to marketing decisions.
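You can check the “many OpenCL-capable devices” claim on your own machine. A hedged sketch using the third-party pyopencl bindings (an assumption: pyopencl and at least one OpenCL runtime must be installed; otherwise the fallback branch runs):

```python
# List every (platform, device) pair the OpenCL runtime exposes --
# on many machines this includes the CPU as well as any GPUs.
try:
    import pyopencl as cl  # third-party package, assumed installed
    devices = [(platform.name, device.name)
               for platform in cl.get_platforms()
               for device in platform.get_devices()]
except Exception:  # no pyopencl module, or no OpenCL runtime at all
    devices = None

print(devices)
```

Contrast this with CUDA, where the only devices that can ever appear are the Nvidia cards on that linked list.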
Regarding Blender, OpenCL is on hold for no credible reason. The official reason is AMD’s drivers, but I honestly think that they just don’t want to, or can’t, develop OpenCL at this point in time, and that CUDA is way cheaper than OpenCL.
I was with you all the way until this:
I’m pretty sure the Blender devs are being honest about the issue. The state of OpenCL in AMD’s drivers is not encouraging (a developer friend of mine confirmed this as a known fact, by the way, and the open drivers under Linux don’t support OpenCL beyond some proof-of-concept commands). Realistically, that means that if you want performance for your money, an NVIDIA GTX card is at present your best (and, using official releases, only) option. I’d love for this not to be so, as I strongly believe they should have standardized on OpenCL a long time ago; alas, this has not happened.
I don’t understand your comment about CUDA being cheaper than OpenCL, though. What did you mean by that?
Example: you have three phones, and one suddenly breaks. Do you make no phone calls because one of your three phones is broken? What about the other two?
The part that doesn’t make sense to me is that AMD is not the whole OpenCL world. It’s a big contributor, but there are other vendors, and OpenCL runs on every OpenCL-capable device. If AMD offers bad OpenCL drivers, you can use Nvidia cards, CPUs, or even libraries that emulate OpenCL in software. For example, basically all the Macs out there have a cheap integrated APU with CPU and GPU, and they offer OpenCL without using AMD drivers; the same is true for Windows, Linux, and any other platform where OpenCL is available. I agree that AMD’s drivers are not even close to being really good (Nvidia’s don’t really shine either, to be honest), and sometimes not even the official SDKs from AMD work (try the GLES SDK under Linux), but if AMD doesn’t work for you, there are plenty of other options for developing with OpenCL.
CUDA is cheaper because it shortens your whole workflow: the testing phase is virtually nonexistent, and the behaviour is much more consistent and predictable than what OpenCL can offer. For a programmer, being cheaper means “the same thing coded and compiled in less time.”
And if you ask me, this is counterproductive for the Blender Foundation, because now they have two versions of the same engine (Cycles) when they could concentrate all their efforts on OpenCL and simplify everything with a much more powerful solution (OpenCL uses both CPUs and GPUs, not just GPUs), instead of wasting resources on a CUDA engine that is limited by design in every possible way. For example, Mac OS users currently can’t benefit from any new Cycles features, because under Mac OS there is no CUDA and no high-profile GPU.
Ahh, now I get your point. I agree that OpenCL would be a lot better: it’s an open standard, it offers a level playing field and a one-size-fits-all approach. Plus, in theory, I could use everything including my toaster as a rendering device. However, since the compute abilities of all devices except high-end GPUs (and certain custom hardware that I think we can ignore as non-viable for most Blender users) are negligible in comparison, this doesn’t really matter in practice. The difference between rendering a scene on my quite recent Intel i7 and on the previous-generation GPU from NVIDIA (in favor of the GPU) is so significant that, realistically, only the GPU matters. Some time ago I tried setting up my own little farm, using every device I had that was capable of running Blender, and it turned out to be a bit of a joke. I ran it over the Christmas holidays, as I was away for two weeks and didn’t need any of the devices. When I came back, 95% of the work had been done by a single GPU, the GTX 580. The rest hadn’t contributed any significant output beyond a lot of heat.
Currently Intel isn’t quite there yet, so unless you can afford your own professional render farm, or have extreme RAM requirements (like Project Mango did, but they ran Teslas, so that’s pretty much the same), modern GPUs from NVIDIA and AMD are your only choice. And of those, since AMD can’t get their OpenCL drivers to work properly, we’re left with only one GPU vendor. If you want to squeeze all the juice you can out of an NVIDIA GPU, you use their own CUDA architecture, as it is optimized for their hardware.
If I were lead on Cycles, my reasoning would be simple: we need both OpenCL and CUDA support. Today, OpenCL doesn’t work properly on any rendering device with decent performance (meaning: just stick with CPU rendering instead; it will be faster). CUDA, however, gives very solid rendering performance, by far the best of anything available, so until AMD gets their act together, CUDA gives my users the only real gain over regular CPU rendering. Thus I shall focus on CUDA for the benefit of my users.
Personally, I hope AMD does get their act together, and quickly. That would help shift focus onto OpenCL, which hopefully would make NVIDIA care enough to make their drivers run OpenCL just as effectively and efficiently as CUDA runs today, and we could all forget about the whole CUDA thing. But that’s not how the world is today, and until that happens I will stick with CUDA devices exclusively for rendering because, egoistically, I just want the best rendering performance I can afford, and that’s that.
As I said before, it may not really be AMD who’s the cause of this. AMD is not the only one who can use OpenCL; Nvidia can as well, and so can Intel. If it were AMD’s fault, then OpenCL wouldn’t still be available on the other architectures. If it were the compiler, I could understand; never the driver.
I think it’s simply that NVIDIA has everything to gain from CUDA being the parallel computing standard, so they won’t really do much with OpenCL unless they have to.
Intel isn’t quite there yet in 3D performance, but will perhaps consider this in the future for their GPUs (it should be interesting, as the Sandy Bridge chipsets and onwards have very tight CPU/GPU integration, so I’d imagine getting max performance out of both together using OpenCL would be easier than with discrete GPUs). And there is of course the Xeon Phi, which, if I remember correctly, is a bunch of parallelized Pentiums.
AMD seems to have been lagging in high-end GPU computing and is now, hopefully, getting OpenCL on their agenda for real, though their efforts thus far seem somewhat unimpressive (I might not be fully up to date).
I was reading a thread on Stack Overflow about CUDA vs. OpenCL recently, and from a developer standpoint it was summarized roughly like this:
- NVIDIA has a more mature compiler and a more stable driver on Linux (Linux because its use is widespread in scientific computing), and CUDA has been around in practical use much longer.
- AMD has recently stepped up their game. They now have both BLAS and FFT libraries, and numerous third-party libraries are also cropping up around OpenCL.
- Intel has introduced the Xeon Phi into the wild, supporting both OpenMP and OpenCL. It also has the ability to run existing x86 code, though for now only limited x86 without SSE.
- NVIDIA and CUDA still have the edge in the range of libraries available, though NVIDIA may not be focusing on OpenCL as much as they did before.
In short, OpenCL has closed the gap in the past two years, and there are new players in the field, but CUDA is still a bit ahead of the pack. From Blender’s perspective, though, this matters little if they can’t compile their Cycles code using OpenCL. After all, as the devs said:
OpenCL support for AMD/NVidia GPU rendering is currently on hold. Only a small subset of the entire rendering kernel can currently be compiled, which leaves this mostly at prototype. We will need major driver or hardware improvements to get full cycles support on AMD hardware. For NVidia CUDA still works faster, and Intel integrated GPU’s are unlikely to give any speed improvement over CPU rendering. (emphasis mine).