Nvidia unveils new Turing architecture

Unsurprisingly, the new architecture will have dedicated raytracing hardware. The article says their high-end $10,000 Quadros will be able to do 10 Gigarays per second, which, from my back-of-the-napkin calculations, amounts to around 20 rays per pixel per frame at 4K60. Note that’s single rays, not samples in the Cycles sense.
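For the curious, the napkin math (just a sketch, assuming a full 3840×2160 frame at 60 fps and taking the 10 Gigarays/s figure at face value):

```cpp
#include <cstdio>

int main() {
    // Claimed throughput of the high-end Quadro from the announcement.
    const double rays_per_second = 10e9;              // 10 Gigarays/s

    // One 4K frame at 60 fps.
    const double pixels_per_frame = 3840.0 * 2160.0;  // ~8.3 million pixels
    const double frames_per_second = 60.0;

    // Ray budget per pixel per frame.
    const double rays_per_pixel =
        rays_per_second / (pixels_per_frame * frames_per_second);

    printf("%.1f rays per pixel per frame\n", rays_per_pixel);  // ~20.1
    return 0;
}
```

And a real path tracer spends several rays per sample (camera ray, bounces, shadow rays), so that budget shrinks quickly.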

No word on consumer products at this point, but it’s probably a safe bet that those will do less than that.

Discuss. :smiley:

3 Likes

Somebody buy me one so I can port Cycles. :wink:

In other news, the MDL SDK is now under the BSD license, making it possible to integrate it into Cycles: https://github.com/NVIDIA/MDL-SDK

7 Likes

Perhaps Ton can do a pitch for a whole RTX server while working the room at Siggraph. :sunglasses:
Should fit under the stairs… we get Sergey a bike with a generator to power it and keep him fit while he works on depsgraph…

2 Likes

Let’s just hope we will get an RTX architecture for the consumer market…

How could this be supported in the OpenCL renderer?

Depends on how Nvidia exposes that functionality. If it gets an OpenCL extension, one could use that directly. If not, the Vulkan extensions might make it possible to use a hybrid OpenCL/Vulkan kernel, where the main render kernel runs in OpenCL and all intersections go through Vulkan to RTX.

We should wait for more information though. At this point, it’s all just speculation. We’ll see if CUDA 10 gives direct access to those cores or not.

$2,300 for the entry-level RTX Quadro card is very consumer pricing by Nvidia standards, to be honest :slight_smile: They usually milk enterprises a lot harder than that. I don’t think we will see GeForce-level priced RTX cards anytime in the next few years.

1 Like

You think so? I’d wager we’ll get an announcement of consumer RTX cards next week at Gamescom. This stuff will be very useful for gaming, so I’m not even expecting it to be pared down all that much at the consumer high end.

I could see them shipping GeForce cards without (or with disabled**) tensor cores. That way, they can serve the gaming audience power-efficient GPUs with lots of graphics capabilities while preventing them from cannibalising Quadro and Tesla sales.

The main feature of Volta is the tensor cores, which are almost exclusively for machine learning. The Titan V was marketed for machine learning, not for graphics. At the same time, with ray tracing coming to DirectX and Vulkan, the door is open for specialised gaming hardware.

** That way, they could just bin Quadro chips with defects in the tensor cores into the GeForce cards.

Oh, I’m sure there will be a lot of binning going on, especially with that 750+ mm² die. But I don’t think they will disable the tensor cores entirely, since that makes too good of a selling point, and you also want RT hardware in consumer hands so engine developers start seriously developing the software side of the new hybrid raster+RT era in realtime rendering.

It will also be interesting to see how AMD responds. It’s not like they haven’t known about this for a while now.

In the demos, you can see a sort of “painterly flickering”, especially in the car ones. This means it’s most likely being upsampled, antialiased and denoised on the fly. And I’d bet that it’s doing something similar to the OptiX denoiser. So I’d say that those tensor cores are required to run the machine-learning-based algorithms that turn noisy, aliased, realtime raytraced output into something that actually looks acceptable. Therefore, I am not sure about the feasibility of RTX cards with those cores disabled. :slight_smile:

1 Like

I may be wrong, but to my knowledge the tensor cores are useful for the training phase of the AI denoiser, but not so much for applying the trained net. The OptiX denoiser already runs at interactive rates on Pascal hardware, no tensor cores needed.

1 Like

No, you are probably right. I know nothing about the technology. My assumption was that if they are making a raytracing-specific GPU, it would not make much sense to make that same GPU also a machine learning GPU; for that, it would make more sense to build a machine-learning-specific GPU. So my reasoning was that if there are tensor cores on the raytracing-specific card, then they are probably there to aid raytracing in some way. So I am just shooting from the hip :slight_smile:

Or perhaps machine learning workloads are similar in nature to raytracing workloads, so the card could excel at both…?

https://blogs.nvidia.com/blog/2018/08/13/turing-industry-support/

Seems like Blender Cycles is mentioned… For sure the Redshift render engine will use the new tech. Blender Cycles is only mentioned as using CUDA, though, so no idea if we will use OptiX.

Ray traced rendering is a “gaming tech” now too. Let’s just hope they will not give us a low-VRAM GeForce to avoid competing with the RTX 4000 @ 16 GB… This smells bad.

They are responding, to a point at least.

No word from AMD on the gaming front, though they did promise annual gaming products earlier this year.

1 Like

And for 8 GB of VRAM… that’s a joke for ray tracing. I think in 2018, choosing a card with less than 12 GB for rendering is a big mistake… They said they will announce a card with 1080 Ti-like performance for €400… A revamp of the Pro Duo 16 GB with this tech would just be amazing…

For the average artist who doesn’t work for Pixar or Disney, I think the best alternative is this, or hopefully a next-gen GeForce “ray tracing tech” card with at least 12 GB… But then again, they may end up competing with the RTX 5000 with that move and lower the VRAM to a miserable 8 GB, as some leaks say…

Anyone know where we could find ray tracing benchmarks with Quadro cards? They are amazingly hard to find…

I think the future for rendering on the GPU would have to involve faster connections to the motherboard, so the unit can access system RAM without too much of a performance hit. Professional GPUs are getting up to 16 or even 32 gigs, but high-end PCs can have up to 128 gigs.
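Tangentially, CUDA’s managed (unified) memory already lets a kernel work on allocations bigger than the card’s VRAM, with the driver paging data in from system RAM on demand (Pascal or newer, on Linux); the performance hit of that paging is exactly what a faster bus would ease. A rough sketch, not anything Cycles actually does, just to illustrate oversubscription:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Touch every element so the pages really get migrated to the GPU.
__global__ void scale(float *data, size_t n, float factor) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main() {
    // 16 GiB of floats -- more than most cards have on board, but the
    // managed allocator will happily spill into system RAM.
    const size_t n = (size_t)4 * 1024 * 1024 * 1024;

    float *data = nullptr;
    if (cudaMallocManaged(&data, n * sizeof(float)) != cudaSuccess) {
        fprintf(stderr, "managed allocation failed\n");
        return 1;
    }

    for (size_t i = 0; i < n; ++i)   // initialise on the CPU side
        data[i] = 1.0f;

    const unsigned threads = 256;
    const unsigned blocks = (unsigned)((n + threads - 1) / threads);
    scale<<<blocks, threads>>>(data, n, 0.5f);
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);  // 0.5, after pages migrate back
    cudaFree(data);
    return 0;
}
```

It works, but once you oversubscribe, render times are at the mercy of PCIe bandwidth, which is why people keep asking for more VRAM instead.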

RTX sounds like a really good technology that AMD has no equivalent of at the moment, but it really depends on how much Nvidia will gimp the functionality as we go from the top end down to the consumer space.

Pro GPUs like the Quadros have this technology; they can also share their VRAM, so if you have 4x RTX 8000 that’s about 192 GB of VRAM… quite enough. In the near future, VRAM and GPU will not be problematic… Plus, having all the memory on the GPU is cool for multitasking.

As for AMD, it’s all about the price-to-performance ratio… Who cares about having a single $6,000 card that is powerful if you can have four cards for half the price that are even more powerful? That’s why I find it so amazing that there are no industry-grade GPU benchmarks…

And this is exactly why they won’t do any such thing, Ace. It’s just not good business for them. They want to be able to sell you GPUs at entirely ridiculous prices merely because they have a little more VRAM. :slight_smile: