Nvidia unveils new Turing architecture

I have a feeling you’ll see this stuff in consumer hardware/software pretty soon; it’ll be the NVIDIA HairWorks of lighting. Developers will do a half-assed job implementing it and gamers will complain about it being slow (or proprietary).

NVIDIA registered the name “GeForce RTX” and the rumor is that both the 2080 and 2070 models will be branded as such, which could mean they throw a couple of tensor cores into those chips too.

I doubt there’s much (if any) actual raytracing-specific hardware in those chips anyway. Even without hardware support, DXR has a software fallback for older cards.
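
For what it’s worth, apps can tell the difference at runtime: D3D12’s feature-query API reports whether the driver exposes DXR natively. A minimal sketch (struct and enum names as in the Windows SDK headers; `SupportsDriverDXR` is just my own helper name):

```cpp
// Minimal sketch (my own, not from the thread): ask D3D12 whether the
// driver exposes DXR at all. On cards without driver support, an app
// would have to use the separate DXR fallback layer instead.
#include <d3d12.h>

bool SupportsDriverDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options, sizeof(options))))
        return false; // older runtime: the options struct isn't even known
    return options.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;
}
```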

Yeah, NVIDIA are douches… if only the market were under as much competitive pressure as the current situation with CPUs…


I don’t know if things will change that quickly.

A lot of gamers act as if Nvidia is the rightful owner of the GPU market and as such all competition must be destroyed. They currently have the advantage of being seen as a ‘lifestyle’ brand, which is also what keeps people going back to Apple stores to upgrade every year.

AMD will struggle to climb back into a competitive position no matter how big a leap their Navi cards turn out to be.

The real question is: will EEVEE support the new ray tracing cores? Assuming there’s a GeForce version, which is likely, along with the AI cores.

Gamers are like us, they just want the best performance. If AMD destroys the next GeForce lineup, gamers will all switch to AMD, no doubt about that.

Look at what happened recently in the CPU market… Intel was also the king of the hill, then AMD hit hard, and now everyone is praising Threadripper and changing “sides” like nothing happened.

Let’s see what happens with the first AMD 7nm chips in 2019.


If you trust NVIDIA’s charts and graphs, while there is a fallback, it performs some six times slower. So there probably is some kind of RT hardware in there. I’m curious whether they just beefed up their CUDA cores to handle it, whether they’re somehow using the tensor cores (does that make any sense at all?), or whether they added the RT functionality as a separate area of the chip.

The Turing cards have extra RT units on the chip.

As far as I can tell, the only API that currently supports raytracing is DirectX Raytracing (DXR). Given that EEVEE is fully OpenGL at this moment, one of two things needs to happen for EEVEE to support this: 1) EEVEE gets ported to DirectX, or 2) NVIDIA makes the raytracing available through some other API that plays well with OpenGL.

I doubt the first option is going to happen; that would make certain EEVEE features Windows-only (and in a community where a large number of users run Linux, the BF would face a firestorm of criticism and accusations of reneging on the push to be platform-agnostic).

Hi!
I was skeptical about the real-world rollout of this new architecture, but things are moving faster than expected.

Ton Roosendaal himself said this on his Twitter:

“NVIDIA released Optix (ray-trace library) as part of drivers install, making it compatible with GPL and Blender. Another great step is NVIDIA releasing material library MDL as BSD. NVIDIA is happy to support us making it all happen. More details in coming months.”

Plus, Vulkan is a candidate for RTX technology, so that’s good news for multiple platforms.

Sources:
https://twitter.com/tonroosendaal
http://on-demand.gputechconf.com/gtc/2018/presentation/s8521-advanced-graphics-extensions-for-vulkan.pdf
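
And if the Vulkan route pans out, it would show up as a regular device extension that apps can probe for at runtime. A minimal sketch, assuming the `VK_NV_ray_tracing` extension name from NVIDIA’s announcements (`HasNVRayTracing` is just an illustrative helper):

```cpp
// Minimal sketch: probe a Vulkan physical device for NVIDIA's ray tracing
// extension. The "VK_NV_ray_tracing" string is taken from NVIDIA's
// announcements; a real app would also enable it at device creation.
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

bool HasNVRayTracing(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());
    for (const auto& ext : exts)
        if (std::strcmp(ext.extensionName, "VK_NV_ray_tracing") == 0)
            return true; // device advertises ray tracing support
    return false;
}
```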


Wow. Does this mean Blender will be able to use the NVIDIA AI denoiser? That would be great.

Just saw a post about this AI denoiser on Right-Click Select, btw…

And NVIDIA, if they revolutionize the CG and video game market, they can’t possibly keep this tech to themselves, right? They have to share it, otherwise it’s not legal? Right? I hope so…

It seems so, yes.

They don’t have to share anything, it’s their private (intellectual) property. Others are free to develop competing solutions; NVIDIA is not the only entity to have applied machine learning to denoising.

However, if NVIDIA is first-to-market with a good-enough solution, they may well take 80-90% of the market, as they have succeeded in doing with CUDA.

There are some monopoly laws that are supposed to prevent total market domination, but they are invoked very rarely.

I thought those laws could help in this case… well then, the evolution of GPUs is going to be directed by NVIDIA’s greed if AMD doesn’t release something badass in 2019.

No. This isn’t actually such a big deal, professional raytracing is a niche, previous hardware raytracing solutions have failed in the market. AMD and NVIDIA could’ve added it years ago if they saw the demand.

Games won’t be authored for raytracing until the consoles support it well enough. Until then it’ll be some “Ultra” setting on PC that most people won’t use because it won’t be that big of a quality difference relative to the cost.

Intel will be entering the desktop GPU market soon. There will be plenty of competition so that NVIDIA won’t be able to rest on its laurels.

Considering that, what’s wrong with NVIDIA being the dominant market player in some area when they have the best product? I don’t see people complaining that AMD is greedy because it wins all the console contracts, or Intel being greedy for having by far the biggest market share in GPUs overall. Why doesn’t NVIDIA get sympathy for being so unsuccessful in the mobile area with their Tegra chips? :frowning:


I wouldn’t worry. AI as a practical technology for the everyman is very young. However good NVIDIA’s initial solution may be, you can count on it that within the year someone will come up with something better, long before the technology makes any serious inroads into average gamers’ homes.

How about using the RT cores to speed up the Cycles renderer? V-Ray, Arnold and even Redshift are using the new architecture for that.

RT is more for Cycles than EEVEE, I think, so yes.

I looked through some of the developer materials; it looks like they are offering integration with the RT cores through 3 APIs:

  1. OptiX
  2. Vulkan extensions
  3. DirectX Raytracing.

2 seems to be a future thing? ’Cause I didn’t see a whole lot on it. 3 is pretty much unusable to us given we’re OpenGL-only. 1 used to be an issue in the past due to the binary DLLs we needed but couldn’t redistribute with Blender, but there are rumors those issues have been addressed.

So 1 could work for Cycles, but it would require quite a bit of rework.
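
For anyone curious what option 1 even looks like on the code side, here’s a minimal sketch, assuming the classic OptiX 5/6-style C host API (this is just context setup, nothing Cycles-specific):

```cpp
// Minimal sketch of option 1, assuming the classic OptiX C host API
// (rtContext* calls as in OptiX 5/6). A real Cycles integration would
// still need ray-generation/hit programs, geometry, and buffers on top.
#include <optix.h>
#include <cstdio>

int main()
{
    RTcontext context = nullptr;
    if (rtContextCreate(&context) != RT_SUCCESS) {
        std::fprintf(stderr, "could not create an OptiX context\n");
        return 1;
    }
    rtContextSetRayTypeCount(context, 1);    // e.g. camera rays only
    rtContextSetEntryPointCount(context, 1); // one ray-generation entry point
    // ... attach PTX programs and scene data, then rtContextLaunch2D(...)
    rtContextDestroy(context);
    return 0;
}
```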