Nvidia unveils new Turing architecture

That is a pretty big assumption. I’m sure Nvidia will provide some troubleshooting and technical support, but I doubt they will put a team of their programmers on it like AMD had to for Cycles’ OpenCL backend. Maybe they will, but don’t be surprised if you have to settle for a really fast CUDA card without RTX support for several years.

Right now the RTX 2080 roughly equals a GTX 1080 Ti for rasterisation, so if you buy a 2070 and Blender never supports RTX for Eevee, you’re stuck.

I’m willing to bet that Nvidia will make sure RTX gets implemented in the major 3D software. They even made the license compatible with Blender’s.

The devs need to make a statement ASAP for the new buyers :confused:

I’d prefer it if they make a solid, well-founded statement and take the time they need to do so.

@LazyDodo do you have any pointers for testing with the CUDA 10 SDK? I’d like to take a look at https://developer.blender.org/T56858 as a first step to getting the 2080 working, but am not sure how to compile a new kernel…

In intern\cycles\CMakeLists.txt there is this chunk of code:

if(WITH_CYCLES_CUDA_BINARIES AND (NOT WITH_CYCLES_CUBIN_COMPILER))
	if(MSVC)
		set(MAX_MSVC 1800)
		if(${CUDA_VERSION} EQUAL "8.0")
			set(MAX_MSVC 1900)
		elseif(${CUDA_VERSION} EQUAL "9.0")
			set(MAX_MSVC 1910)
		elseif(${CUDA_VERSION} EQUAL "9.1")
			set(MAX_MSVC 1911)
		endif()
		if(NOT MSVC_VERSION LESS ${MAX_MSVC} OR CMAKE_C_COMPILER_ID MATCHES "Clang")
			message(STATUS "nvcc not supported for this compiler version, using cycles_cubin_cc instead.")
			set(WITH_CYCLES_CUBIN_COMPILER ON)
		endif()
		unset(MAX_MSVC)
	endif()
endif()

Change that to:

if(WITH_CYCLES_CUDA_BINARIES AND (NOT WITH_CYCLES_CUBIN_COMPILER))
	if(MSVC)
		set(MAX_MSVC 1800)
		if(${CUDA_VERSION} EQUAL "8.0")
			set(MAX_MSVC 1900)
		elseif(${CUDA_VERSION} EQUAL "9.0")
			set(MAX_MSVC 1910)
		elseif(${CUDA_VERSION} EQUAL "9.1")
			set(MAX_MSVC 1911)
		elseif(${CUDA_VERSION} EQUAL "10.0")
			set(MAX_MSVC 1999)
		endif()
		if(NOT MSVC_VERSION LESS ${MAX_MSVC} OR CMAKE_C_COMPILER_ID MATCHES "Clang")
			message(STATUS "nvcc not supported for this compiler version, using cycles_cubin_cc instead.")
			set(WITH_CYCLES_CUBIN_COMPILER ON)
		endif()
		unset(MAX_MSVC)
	endif()
endif()

Then run: make release 2017 x64 nobuild

Then go into your build folder, open up CMakeCache.txt, look for the line that says CYCLES_CUDA_BINARIES_ARCH, and add sm_75 to that list.

Then run rebuild.cmd and hope for the best. If that doesn’t work, drop by on IRC; this forum isn’t really the ideal place to help people with build issues.
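If you prefer to script the CMakeCache.txt step, here is a rough Python sketch. The cache path in the usage example is hypothetical (point it at your actual build folder); the variable name matches the CYCLES_CUDA_BINARIES_ARCH entry mentioned above:

```python
# Sketch: append sm_75 to the CYCLES_CUDA_BINARIES_ARCH entry in CMakeCache.txt.
import re
from pathlib import Path

def add_cuda_arch(cache_path, arch="sm_75"):
    """Add `arch` to the CYCLES_CUDA_BINARIES_ARCH list if it is missing."""
    text = Path(cache_path).read_text()

    def patch(match):
        # CMake cache lists are semicolon-separated, e.g. sm_30;sm_61
        archs = match.group(2).split(";") if match.group(2) else []
        if arch not in archs:
            archs.append(arch)
        return match.group(1) + ";".join(archs)

    new_text = re.sub(
        r"(CYCLES_CUDA_BINARIES_ARCH:STRING=)([^\r\n]*)", patch, text, count=1
    )
    Path(cache_path).write_text(new_text)

# Example (hypothetical path):
# add_cuda_arch("C:/blender-build/CMakeCache.txt")
```

Run it before rebuild.cmd; re-running it is harmless, since the arch is only appended once.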

Here’s Pablo responding to a question about RTX implementation in Blender:

If someone wants to buy a bleeding edge card, they have to bear the consequences of that. The developers shouldn’t have to bend over backwards to support the 1 percenters who can drop thousands of dollars on a card.

If you can afford that much hardware, then buy a license for octane or one of the other renderers that will have RTX support pretty soon.

That’s the thing about bleeding edge hardware: whose blood do you think it is? The consumer’s.

Can someone educate me regarding the new NVLink feature? Does it mean that stacking 2 cards with 8GB will automatically result in 16GB VRAM as far as Cycles or other apps (e.g. Substance) are concerned? Or is this another elusive feature that needs explicit software support? Thanks

Until Nvidia starts piling on the VRAM in their new cards, this would be a way to get around that. The memory available will increase the more cards you add.

However, doing that from generation to generation will become very expensive (since you would at least need the top end Ti model and up to 3 cards to create scenes of a complexity that rival anything done with the CPU).

The RTX 2080 and 2080 Ti are gaming cards; there’s no point for gamers to have twice as much memory. If Nvidia did that, they’d lose money on potential Quadro buyers.

Yes, I’m thinking about switching to Redshift or Octane, but I’m afraid there are some downsides to that…

Redshift is amazing and it’s the one thing I miss from my switch to Blender.

Once Redshift’s Blender plugin is released it’ll make a lot of renderers here happy.

Yes, but what about Redshift and Eevee conversion? What about the brand new 2.8 overlay system on top of Cycles? What about always being a step behind with every new Blender release? What about losing some cool shader editor features? What about your whole library of files made in Cycles now needing a ton of rework? What if Cycles is less laggy? What if there’s no beloved “Ctrl B, Shift Z” and only a dumb render window? I’m quite confused, but let’s not speculate on something that isn’t even out yet.

Well yeah, native renderers will always have the best compatibility with the base software. If you care about those features, it’s best to stick with Cycles and accept whatever limitations come with it (including developer support, as this thread has shown).

One thing to note is that this is especially true for Blender because of the GPL. I have asked the question if it is time for the BF to look at ways to start moving away from it to something more permissive, but the majority of the community are firmly against changing it.

The license is a major blocker to a proper implementation: doing it in a deeply embedded way (i.e. directly accessing Blender’s systems) would trigger the license’s viral nature and force the commercial vendor to release all of their source code to the public (not just the plugin, the whole deal). The only way around that is the unsustainable practice of maintaining a special build of Blender available only to customers (as is being done now with V-Ray and Octane). The problem is that you need a different build for every proprietary plugin you want to use, so Blender will always have somewhat crippled support no matter what the BF does.

It looks like the massive die size Nvidia used for Turing could be leading to yield issues.

If it is yield issues, then the first company to produce a working Infinity Fabric-type system for the GPU will have a good chance of dominating the market for years to come (and AMD is currently the closest to achieving that).

At this point, I wouldn’t even look at a 2xxx card until the dust settles a bit.

RTX 2080 vs GTX 1080 Ti:
bmw27, pavillon: faster
classroom: a tad slower
koro and fishy_cat: much faster

Of course, it is pure CUDA shader work - Blender needs an RTX codepath (RT cores - ray-triangle intersection, Tensor cores - denoising) to be able to take advantage of all the new hardware elements found in Turing.
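For the curious, the “ray-triangle intersection” that RT cores run in fixed-function hardware is essentially the classic Möller–Trumbore test. A minimal Python sketch, purely illustrative (this is not Blender’s or Nvidia’s code):

```python
# Möller-Trumbore ray-triangle intersection, written out in plain Python
# to show the arithmetic that RT cores accelerate in hardware.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle_hit(orig, direction, v0, v1, v2, eps=1e-8):
    """Return the hit distance t along the ray, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv_det       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv_det   # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det      # distance along the ray
    return t if t > eps else None
```

A path tracer performs this test (plus BVH traversal) millions of times per frame, which is exactly why doing it in dedicated silicon instead of CUDA shader code is such a big win.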

I think VRAM stacking is a Quadro-only feature at the moment. At least that seemed to be what was suggested in the LTT video about it.
