CUDA binary kernel not found

It is an error in host_config.h: in /usr/local/cuda/include/host_config.h, you have to delete or comment out one line.
After commenting it out, the section looks like this:

//#error -- unsupported GNU version! gcc 4.5 and up are not supported!

#endif /* __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 4) */

#endif /* __GNUC__ */

Cheers, mib.

Hmm… My host_config.h file appears to have already been modified to fit your description. I’ve uploaded what I have to pasteall. The relevant section is lines 80-86.

Reading through the logic, it appears that __GNUC__ = 4 and __GNUC_MINOR__ = 5 (because my newest installed version of gcc is 4.5). Unfortunately, it doesn't seem to notice that I also have gcc 4.4 installed, which I set up precisely to get around this problem; presumably nvcc just picks up whichever gcc is first on my PATH.
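To double-check which compiler is actually being picked up, I threw together this little test (my own sketch, nothing official); compiling it with whichever gcc is first on your PATH replays the exact check from host_config.h:

/* version_check.c -- compile with the gcc that nvcc would see, e.g.:
 *   gcc version_check.c -o version_check && ./version_check */
#include <stdio.h>

int main(void)
{
    printf("__GNUC__ = %d, __GNUC_MINOR__ = %d\n", __GNUC__, __GNUC_MINOR__);
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 4)
    printf("this compiler trips the #error in host_config.h\n");
#else
    printf("this compiler passes the host_config.h check\n");
#endif
    return 0;
}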

Hi, I found the wiki page about it:

http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/GPU_Rendering

Cheers, mib.

OK, I think I’ve reached some conclusions. My setup:

  • GeForce 9400M + 9200M on 64-bit Ubuntu 11.04.
  • The official Blender 2.61 release, with the standard included libraries.
  • The most recent CUDA toolkit, the most recent drivers from NVIDIA, and gcc 4.4 installed.
  • Line 82 in /usr/local/cuda/include/host_config.h commented out.

When switching to the Experimental GPU rendering mode, the CUDA kernel compiles successfully after some complaints (and about 250 seconds). However, during rendering the CPU usage remains high, and it is clear that the GPU mode offers no performance benefit on these cards. This is the behavior predicted on the Blender wiki (thanks to mib2berlin for sharing that helpful link!).
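For anyone wondering about their own card, here is a quick check I put together (my own sketch, compiled with nvcc, not anything from Blender) that prints what the CUDA runtime reports for each device; the 9400M-class chips report compute capability 1.1:

/* devquery.cu -- build with: nvcc devquery.cu -o devquery */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("no CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s, compute capability %d.%d, %lu MB\n",
               i, prop.name, prop.major, prop.minor,
               (unsigned long)(prop.totalGlobalMem / (1024 * 1024)));
    }
    return 0;
}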

The library files also work with an Nvidia 8600 GT on a MacBook Pro 4.1! :slight_smile:
Blender 2.61 official release.
Thanks bat3a!

I’m using Blender 2.61 on 32-bit Windows 7. My graphics card is an 8800 GT. I get the “CUDA device supported only with shader model 1.3 or up, found 1.1” error when it’s set to “Supported”. I tried bat3a’s lib folder (I added the contents to C:\Program Files\Blender Foundation\Blender\2.61\scripts\addons\cycles\lib, as I assumed the 2.60 folder mentioned earlier is only if you’re using 2.60) and set Feature to “Experimental”, Device to “GPU”, and GPU type to “CUDA”. Now I get the error “CUDA error: Out of memory in cuLaunchGrid(cuPathTrace, xblocks, yblocks)”. This happens even with an empty scene, so I don’t really believe it is a memory issue.
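To test that theory, I sketched a tiny check (my own, compiled with nvcc) that asks the driver how much memory is actually free on the card before any kernel launch; on a GPU that is also driving the display, the free amount can be well below the card’s nominal total:

/* meminfo.cu -- build with: nvcc meminfo.cu -o meminfo */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    size_t free_bytes = 0, total_bytes = 0;
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        printf("cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("free: %lu MB of %lu MB\n",
           (unsigned long)(free_bytes / (1024 * 1024)),
           (unsigned long)(total_bytes / (1024 * 1024)));
    return 0;
}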

I searched for 8800gt on these forums, and people have shown renders done on the 8800gt, so it’s obviously possible. Any ideas what might be the problem? Thanks :slight_smile:

Try this build; unzip the .7z.
You can run it as portable: just create a config folder inside the 2.61 folder.

http://graphicall.org/177

It’s alright, I figured out what the problem was. I had the latest display drivers, but I didn’t realise there are different ones for CUDA support. Once I installed the CUDA ones, everything worked fine, and quite a bit faster than my CPU, might I add.
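For anyone else who hits this, here is a little check along the same lines (my own sketch, compiled with nvcc): it prints the CUDA version the installed driver supports next to the toolkit runtime’s version, and if the driver’s number is lower, you are on the plain display driver rather than the CUDA-enabled one:

/* vercheck.cu -- build with: nvcc vercheck.cu -o vercheck */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int driver = 0, runtime = 0;
    cudaDriverGetVersion(&driver);   /* highest CUDA version the driver supports */
    cudaRuntimeGetVersion(&runtime); /* CUDA version of the installed runtime */
    printf("driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driver / 1000, (driver % 1000) / 10,
           runtime / 1000, (runtime % 1000) / 10);
    if (driver < runtime)
        printf("driver is older than the runtime: install the CUDA-enabled driver\n");
    return 0;
}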

The libs file link provided is broken now. Can someone please upload the libs file again and post the link in this thread?

Thanks

Yes, if someone could please post the libs file in this thread, it would help a lot. The other link doesn’t work anymore.

Here are the libs again, but they will be outdated very quickly, so you should find another way to keep this working for you:
x86
x64

Does somebody know how to make this work on OS X?
