Just thought I would make the thread before someone else did.
Awesome! I just got my GTX 260 yesterday. I can't wait to throw an OpenCL renderer at it (when they eventually show up).
nice gfx card!
Do you have one?
I currently have an ATI HD 4850, which works fine with Blender, but not as well as Nvidia cards do. It's failing, so I decided to go back to the other team and found this locally for $135. I thought that was a good deal for such a card. I'm a gamer as well, so good game and Blender performance are both important to me.
It would have been better to wait 2-3 months. ATI's DX11 cards have just been released (with the X2 coming soon), and Nvidia will follow a few months later, so the older graphics cards will get cheaper. And btw, the new ATI cards are insane.
I have a GTX 260 192sp, and it runs very smoothly. The only trouble I've been having is that it gets a bit hot, but that's because the fan controller only ramps the fans up to 80% speed at most.
This is fantastic news :D, although it depends a lot on developer uptake among the various industries.
Yup, I'm totally aware. But I don't have over $300 to spend on the new ATI technology, and the newer Nvidia tech will also be decently expensive. Also, my card is failing now, so waiting is not a real option for me. The ATI boots to a black screen three or four times before I can actually get to a desktop.
I figure that in a year or so I'll just move this to one of my other PCs and then upgrade when the newer tech is more reasonably priced.
Maybe one day ATI will use hardware instead of hacks. I know it seems rude, but the fact of the matter is there is nearly always some issue with ATI, and I used to be a loyalist (an ATI one, that is). I just got tired of the crappy drivers, the Catalyst crap, all of it. BTW, I'm not trying to start a war here; I know this is mostly splitting hairs. I'm personally ecstatic about OpenCL, and I hate DirectX (the whole "you must use Windows" BS). But who is going to back OpenCL? I'm uneducated on the issue.
Well, usually their hardware is superior to Nvidia's in performance and features. It's their drivers that suck, but even with the specs released, I haven't seen the OSS community step up and do anything either. What does that mean? It's really hard to write GPU drivers. ATI has always been a DirectX house, and that is where their support usually lies. Maybe OpenCL will change their attitude if they are truly committed to supporting other platforms.
I've always chosen Nvidia over ATI purely because of the better documentation and better drivers. For me, "it's really hard to write GPU drivers" doesn't really cut it… These are multi-million (billion?) dollar companies, and a large portion of their job is to write drivers, so there's no excuse. I'm not going to get drawn into Nvidia vs ATI discussions because I'm just going to end up being biased towards Nvidia xD.
The new ATI cards are beasts though :D; it seems they're around 2x faster than the previous series.
On the note of OpenCL, doesn't Microsoft have their own equivalent, which is getting a lot of attention too? It's Direct-something (DirectCompute?).
Yes… DirectCompute is the DirectX version of OpenCL. ATI will have their OpenCL and DirectCompute drivers ready by year's end. GPU-accelerated rendering is coming, and so far I've only heard about people using CUDA or OpenCL to write it (although I know very little about this compared to others here on BA, so I'd love to hear the experts chime in). Let's just hope that more companies stay on the more OS-neutral OpenCL side instead of going the Windows-only route.
ATI also has (in many cases) faulty memory (every card I had took a crap on me, or made my computer crap out) or cheap capacitors (I guess they all do, though)… but I digress… sorta. Like I said, I hate Catalyst and their drivers in general.
What’s the difference between opencl and opengl?
OpenGL is the Open Graphics Library and is used for drawing and manipulating graphics. OpenCL is the Open Computing Language, which allows general programs to be run on a graphics card instead of on the CPU. V-Ray showed off their new renderer running on CUDA (Nvidia's own GPU computing platform, similar in purpose to OpenCL) and it was tons faster than the CPU version. OpenCL isn't the be-all, end-all though: some calculations are a poor fit for GPUs, while others are a great fit. I love the speed at which my ATI can convert video to a different codec versus using a CPU solution.
The hope is that when OpenCL gets out there and starts to mature, rendering engines will be tweaked to harness some of the power of the GPU to do the rendering calculations (currently the GPU is only used to display the scene on the screen).
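To make the GPU/CPU split a bit more concrete: an OpenCL program is built around a "kernel," a small function that the runtime executes once per data element across however many cores the device has. Real OpenCL needs vendor drivers to run, so here is just a rough sketch of the same data-parallel idea in plain Python (the toy brightness kernel and the pixel values are made up for illustration):

```python
from multiprocessing import Pool

def kernel(pixel):
    """Acts like one OpenCL work-item: processes a single element."""
    return min(255, pixel * 2)  # toy brightness boost, clamped to 8-bit range

if __name__ == "__main__":
    pixels = [10, 100, 200, 30]
    # The pool plays the role of the OpenCL runtime: it decides which
    # worker (core) handles which element; we only wrote per-element code.
    with Pool(2) as pool:
        result = pool.map(kernel, pixels)
    print(result)  # [20, 200, 255, 60]
```

In real OpenCL the kernel would be written in OpenCL C and compiled for the device, but the mental model — write per-element code, let the runtime spread it across the hardware — is the same.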
I see. Thank you.
It would be awesome if the Blender Internal renderer got at least a checkbox so it could render using the GPU (since OpenCL is not platform dependent, and only partially hardware dependent, as it seems many chips should support it). It seems like a pretty huge project, so either it fits into 2.5 because it's so big, or it doesn't, because it's too much. I'd also say wait until the big rendering changes and updates are done before making it work with the GPU.
My bias lies with ATI just because they actually support the OSS community and are actively trying to adhere to some of the standards that the X server team has been implementing. Nvidia currently has no plans to implement anything other than what they already have.
Funny how people always associate OpenCL with GPU programming. It's designed to make it much easier to code for multicore platforms in general, and in the process to include the GPU if wanted.
OpenCL will probably become the standard in applications, and DirectCompute will not be used as much.
DirectCompute is designed to utilize the GPU and nothing else, which means OpenCL > DirectCompute in so many ways.
OpenCL does way more, in that it actually makes it many times easier to code for multiple CPU cores as well as to utilize the GPU.
AFAIK OpenCL manages the task assignment for you, so you don't have to change that much.
OpenCL is superior to DirectCompute.
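As a rough illustration of that "the runtime manages task assignment for you" point (a sketch in plain Python, not actual OpenCL — the chunk sizes and worker count here are arbitrary): you only define independent tasks and submit them, and the executor decides which core runs each one, much like an OpenCL runtime distributing work across a device:

```python
from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    """One independent task; the executor decides which core runs it."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    chunks = [range(0, 1000), range(1000, 2000), range(2000, 3000)]
    # We never say which worker gets which chunk; scheduling across
    # the workers is handled for us by the executor.
    with ProcessPoolExecutor(max_workers=2) as ex:
        totals = list(ex.map(work, chunks))
    print(sum(totals))  # 8995500500
```

The code stays the same whether the machine has two cores or sixteen; only the amount of parallelism the runtime can extract changes, which is the portability argument people make for OpenCL.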
Oh and btw, AMD is actually contributing to OpenGL.
One of the new hot things with Direct3D is the tessellation feature; well, guess what, AMD has made that for OpenGL.
Hopefully this will make it to core or ARB in OpenGL 3.3 so it’s a universal thing and not ATI only :).