Blender can't render with my Nvidia GTX 1070!

It gives me the error "CUDA binary kernel for this graphics card compute capability (6.1) not found".

I am using Win10 64-bit, Blender 2.77a.
GFX: GeForce GTX 1070.

It renders without problems on my Titan (same pc).

The 1070 card is only set as secondary, because I was told that is how Blender renders best; the Titan is the primary GPU.
I don't know whether that has anything to do with this problem. Anyway, I'd be glad if you could figure out a solution for me!

No, it will definitely be officially supported in a few months. The problem is that compiling Blender with support for the 10x0 cards requires the new CUDA Toolkit by Nvidia, which isn’t officially released yet - therefore, Blender/Cycles doesn’t yet officially support it either.
If the test kernel linked above doesn't work properly, that's a bug and should be reported. However, it may not yet be as fast as it eventually will be.

Here is what the Octane devs have said about supporting the new cards:

Just an update for everyone: The last two days I spent some time on trying the CUDA toolkit 8 release candidate, but there are multiple issues with it so a release based on this toolkit wouldn’t work. I’m in touch with NVIDIA, so let’s see what they can do.

It’s going to take some time to get the new version of CUDA and the cards working, as they just came out. CUDA (and OpenCL, for that matter) isn’t the priority for Nvidia (or AMD), but they will have DirectX and OpenGL ready before the cards launch. It’s just the nature of the beast. There are far more gamers than there are 3D artists. :)

Where can I download a pre-built Blender version that supports my 1070 card? I know there is an sm_61 kernel to download somewhere, but I don’t know how to compile Blender; I’d like to download a build that already supports it. Can anyone give me a link to a working Blender version?

Hi, a user posted a compiled kernel for the GTX 1070/80.
Copy it next to the other kernels on your system; search for sm_52, for example, to find the folder.
It is experimental, built on Windows and not very fast, but at least you can use your card.

https://developer.blender.org/T48544
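
For example, a rough sketch of the "copy it next to your other kernels" step; both paths are assumptions (a typical Windows install and a placeholder download location), so adjust them to your machine:

```python
# Rough sketch: find the folder with the existing precompiled Cycles kernels
# (e.g. kernel_sm_52.cubin) and copy the downloaded sm_61 kernel beside them.
# Both paths below are assumptions -- point them at your own install/download.
import glob
import os
import shutil

blender_dir = r"C:\Program Files\Blender Foundation\Blender"     # assumed install path
downloaded_kernel = r"C:\Users\me\Downloads\kernel_sm_61.cubin"  # placeholder download

# Locate an existing kernel file to find the right lib folder.
matches = glob.glob(os.path.join(blender_dir, "**", "kernel_sm_52.cubin"), recursive=True)
if not matches:
    raise SystemExit("Could not find the existing Cycles kernels -- check blender_dir")

kernel_dir = os.path.dirname(matches[0])
shutil.copy(downloaded_kernel, kernel_dir)
print("Copied sm_61 kernel into", kernel_dir)
```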

Cheers, mib

Thanks. But why is there no difference AT ALL between my Titan and my 1070 when rendering the default scene with a cube and one sun lamp?

Also, what is the best tile setting when rendering with the Titan and the 1070 together? I ran a test when I got the Titan card 3½ years ago, and 768×768 tiles rendered the fastest.

But what about this new combination?

Hi, what do you mean by “no difference”?
I use the Auto Tile Size addon, and the best setting for my two cards is 2 or 4 tiles; check it out.
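
If you want to set it by hand instead of using the addon, something like this in Blender's Python console splits the frame into a 2×2 grid, i.e. 4 tiles (a rough sketch against the 2.7x API):

```python
# Rough sketch: split the render into a 2x2 grid of tiles by hand.
import bpy

scene = bpy.context.scene
res_x = scene.render.resolution_x * scene.render.resolution_percentage // 100
res_y = scene.render.resolution_y * scene.render.resolution_percentage // 100

# Two tiles across, two tiles down -> 4 tiles in total.
scene.render.tile_x = res_x // 2
scene.render.tile_y = res_y // 2
print("Tile size set to", scene.render.tile_x, "x", scene.render.tile_y)
```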

Cheers, mib

I get weird artifacts in my render with two cards, as opposed to one card:

Two card render (Titan+1070):


One card render (1070 only):


Look at the large green bush. It looks as if it is offset by a few pixels. Why?!

It would be more precise if you did an image difference in an image editor or in Blender’s Compositor (and probably submitted this as a bug report):

http://www.pasteall.org/pic/show.php?id=104697
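
For example, something like this in Blender's Python console gives you actual per-pixel numbers instead of eyeballing the screenshots; the file paths are placeholders for your two renders, and numpy ships with Blender's Python:

```python
# Rough sketch of an image difference done inside Blender: load both renders
# and report how much they disagree per pixel. Paths are placeholders.
import bpy
import numpy as np

img_a = bpy.data.images.load(r"C:\renders\two_cards.png")  # placeholder path
img_b = bpy.data.images.load(r"C:\renders\one_card.png")   # placeholder path

a = np.array(img_a.pixels[:]).reshape(img_a.size[1], img_a.size[0], img_a.channels)
b = np.array(img_b.pixels[:]).reshape(img_b.size[1], img_b.size[0], img_b.channels)

diff = np.abs(a - b)
print("max per-channel difference:", diff.max())
print("mean difference:", diff.mean())
```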

Edit: However, the 1070 is not yet officially supported, afaik. Support is probably in the works, so there’s no need to rush the bug report.

Do you mean no difference in terms of performance? Don’t use a very simple scene like that for benchmarking; the base rendering overhead is likely to dominate there. If possible, you could do us a favor and test the BMW27 benchmark (and maybe some of the other scenes in the benchmark suite) with different tile sizes (like I have done here) and post the results in the GTX 1080/1070 thread.
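
If you do run the benchmark, a rough script like this saves clicking through tile sizes by hand; run it headless with `blender -b bmw27.blend -P tile_test.py` so the UI doesn't get in the way (the tile sizes below are just examples):

```python
# Rough sketch of a tile-size benchmark: render the opened scene once per tile
# size and print the wall-clock time for each run.
import time
import bpy

scene = bpy.context.scene
for tiles in (64, 128, 256, 512):
    scene.render.tile_x = tiles
    scene.render.tile_y = tiles
    start = time.perf_counter()
    bpy.ops.render.render(write_still=False)
    print("tile %d x %d: %.1f s" % (tiles, tiles, time.perf_counter() - start))
```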

Having said that, the Titan uses a high-end chip, about as large as it could be. The 1070 uses a mid-range chip that is about half as large as it could be. The 1070 has slightly more transistors and theoretical FLOPS but also lower memory bandwidth than the Titan, so at least on paper it shouldn’t be a surprise if it performed similarly. Many of the gaming-related optimizations aren’t going to kick in for CUDA, either.
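
For anyone curious where those "on paper" figures come from, here is the back-of-the-envelope arithmetic; the core counts, clocks and memory specs are the published spec-sheet values as far as I recall, so treat them as approximate:

```python
# Back-of-envelope theoretical figures (spec numbers are approximate):
#   FP32 throughput  = 2 ops (FMA) x cores x clock
#   memory bandwidth = bus width / 8 x effective data rate
def tflops(cores, clock_ghz):
    return 2 * cores * clock_ghz / 1000.0

def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

# GTX Titan (GK110): 2688 cores @ ~0.88 GHz boost, 384-bit GDDR5 @ 6 Gbps
print("Titan: %.1f TFLOPS, %.0f GB/s" % (tflops(2688, 0.88), bandwidth_gbs(384, 6)))

# GTX 1070 (GP104): 1920 cores @ ~1.68 GHz boost, 256-bit GDDR5 @ 8 Gbps
print("1070 : %.1f TFLOPS, %.0f GB/s" % (tflops(1920, 1.68), bandwidth_gbs(256, 8)))
```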

It looks as if it is offset by a few pixels. Why?!

There could be an issue in the way ray origins are calculated with different block sizes (pure speculation), or maybe it behaves differently because of the new CUDA SDK being used. Either way, you should report it.