Nvidia unveils new Turing architecture

(Ace Dragon) #123

A well-known PC builder for CG artists (Puget Systems) has published information regarding the ability to combine the memory of multiple cards.

There is a bit of bad news in this: the price of entry to get the memory needed for large scenes is very steep. The feature requires two Quadro cards, two expensive NVLink connectors, and a third card for display output, which disqualifies the vast majority of consumer-grade builds and prices out most hobbyists, perhaps even small studios. For that amount of money, you can get the new 32-core Threadripper with at least 64 GB of RAM to play with.

(captainkirk) #124

Is there any info on whether I can use an RTX card and my 1050 on the same board and render with both at the same time? I assume there won’t be a problem with it?

(burnin) #125

NVIDIA’s nature of things prefers no generational intercourse; it’s kind of a love-hate relationship, but the right combination makes babies. A happy couple: Pascal 1060 + Maxwell Quadro 5000.

That NVLink on GTX cards was a “stick it up your…” was clear from the start - if they say nothing, then it doesn’t exist. Software must be programmed to use it :wink:

(captainkirk) #126

But to run multiple cards, do you need NVLink? Isn’t that basically to make both act as one GPU? I thought that as long as the motherboard supported multiple GPUs it would work. Sorry if these are dumb questions; this is not my area of expertise.

(joahua) #127

It sounds weird if the guy at Puget Systems is indeed right.
This is from Nvidia’s own webpage concerning Turing:

Second-Generation NVIDIA NVLink

Turing TU102 and TU104 GPUs incorporate NVIDIA’s NVLink™ high-speed interconnect to provide dependable, high bandwidth and low latency connectivity between pairs of Turing GPUs. With up to 100GB/sec of bidirectional bandwidth, NVLink makes it possible for customized workloads to efficiently split across two GPUs and share memory capacity.

I would think that Nvidia would get into a lot of trouble with buyers of RTX cards if this weren’t true.

(J_the_Ninja) #128

It’s not necessary for applications like Cycles, which can execute on both cards independently. Each card doesn’t even know the other is there; no linking or compatibility needed.
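To illustrate the idea: this is a toy sketch of that independent-device model, not Cycles’ actual scheduler (device names and the `render_tiles` helper are made up for illustration). Each device simply pulls tiles from a shared work queue and renders on its own, with no link or shared memory between devices.

```python
import queue
import threading

def render_tiles(device_names, num_tiles):
    """Toy model of Cycles-style multi-device rendering:
    each device independently pulls tiles from a shared queue."""
    tiles = queue.Queue()
    for t in range(num_tiles):
        tiles.put(t)

    done = {name: [] for name in device_names}

    def worker(name):
        while True:
            try:
                tile = tiles.get_nowait()
            except queue.Empty:
                return  # no work left for this device
            # "Render" the tile. Each device works alone: no NVLink,
            # no shared memory, no knowledge of the other devices.
            done[name].append(tile)

    threads = [threading.Thread(target=worker, args=(n,)) for n in device_names]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return done

# Hypothetical device list: an RTX card, a GTX 1050, and the CPU.
result = render_tiles(["RTX 2070", "GTX 1050", "CPU"], 32)
```

Because the devices never communicate, mixing generations (or mixing GPUs with the CPU) is unproblematic in this model; the only cost of a slow device is that it finishes fewer tiles.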

(captainkirk) #129

So I can stick any card in along with my 1050, and it will split tiles across both GPUs and my CPU?

(Dorro) #130

Isn’t the point of NVLink to combine the VRAM of multiple cards into a single addressable memory space?

(J_the_Ninja) #131

For those who paid triple for a Quadro, yes. For the rest of us, it’s a new cable for SLI.

(joahua) #132

Anyway, here is a technical talk Nvidia gave concerning Turing at the recent SIGGRAPH:

Real-Time Ray Tracing: Real-Time Global Illumination

(Dodododorian96) #133

Nvidia wants to make money; they don’t care if we can’t use stacked VRAM for rendering, because they have their own specific product lines made for that.

(SterlingRoth) #134

Yes, it should split tiles between your two GPUs and your CPU.

I have a GTX 750 Ti and a Quadro K4200, and they play nicely together.

(captainkirk) #135

Thanks. I was a bit worried that I wouldn’t have any use for my old card. Now I just have to wait and see whether the 2070 is worth getting over the 1080.

(joahua) #136

A bit more about Turing and NVLink. This is from the official V-Ray forum, where Vlado (the main V-Ray developer) chimed in:

The Puget guys are wrong. We have tested NVLink here and it does work and it allows you to increase the memory available for GPU rendering with V-Ray GPU. Of course, the software that you use must be able to take advantage of this specifically - it doesn’t happen automatically for everything. For the moment we are not aware of any software other than V-Ray GPU that can use it, so no wonder the Puget guys couldn’t figure out what to do with it. I’m sure this will change with time.

Here is a link to the forum thread.

(burnin) #137

Sometimes it’s best to contemplate and first be aware – food for thought…

(Grimm) #138

So I just took the plunge and bought an MSI RTX 2070 Armor card. Since I don’t care about games and am just interested in rendering, I’m hoping it will be a good upgrade for me. My old GTX 460 died on me (not supported anymore anyway), and my current GTX 980 is getting a bit long in the tooth. This will double my VRAM and give a significant speed-up in rendering (faster than a GTX 1080); and if the ray-tracing cores turn out well, that will be the icing on the cake. :slight_smile:

The card cost $550.00, which is the same amount I paid for the 980. If this trend continues, by the time I need to upgrade again I will only be able to afford a 60-series card. At least by then it might be faster than a 2070? I don’t expect to have the card in hand until next week; once I get it, I will do some benchmarking on it. Nvidia just released a new Linux driver with support for the 20x0-series cards, so I will need to install that as well.

We will see how well it works out.

(Dodododorian96) #139

I have less and less hope for RTX support in Blender…

(burnin) #140

Because it’s basically useful only for gaming… too many limitations, too much work for that extra which offline renderers already provide.

Don’t ever forget: to investors (capital), ‘machines’ vs. ‘people’ is the name of the game. Machines are way, way cheaper. What’s even more perverse is that people at the bottom of the social ladder nowadays can only compete against machines, and they are losing the battle. So guess which one will get the next job. :robot:

(Stefan Werner) #141

The cards started shipping less than a month ago, and everyone is busy preparing the 2.8 launch. Give it some time. You’re not expecting the developers to drop everything, start working night shifts, and port Cycles to DirectX just to deliver one feature to one small group of users?

(Grimm) #142

That might be the case, but you never know; I think the devs are pretty keen on getting the latest hardware working. So far I’m pretty happy with the performance boost over the 980: even with about the same number of CUDA cores (2048 vs. 2304), it’s about twice as fast at rendering.