CUDA 8 easing the GPU memory limit

With CUDA 8, NVIDIA is making good on an old promise of providing unified virtual memory for the GPU, allowing developers to transparently fall back to system RAM without effort (at least for static data).

In this way, the hard GPU memory limit is essentially lifted.
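For illustration, here is a minimal sketch (not taken from NVIDIA's materials) of what this looks like in code: with `cudaMallocManaged`, an allocation can exceed the GPU's physical memory, and on Pascal the driver migrates pages between system RAM and the GPU on demand. The 8 GB size is just an example chosen to oversubscribe a smaller card.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, size_t n) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    // Example: allocate 8 GB of managed memory, which may exceed the
    // GPU's physical memory. On Pascal, pages migrate on demand instead
    // of the allocation failing outright.
    size_t n = 1ull << 31;  // 2^31 floats = 8 GB
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));

    for (size_t i = 0; i < n; ++i)
        data[i] = 1.0f;                       // pages touched on the CPU first

    scale<<<(unsigned)((n + 255) / 256), 256>>>(data, n);  // migrated to the GPU
    cudaDeviceSynchronize();

    printf("%f\n", data[0]);                  // pages migrate back to the CPU
    cudaFree(data);
    return 0;
}
```

On pre-Pascal hardware the same `cudaMallocManaged` call exists, but the allocation is limited by GPU memory, which is why the oversubscription behavior discussed here is Pascal-only.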

The bad news is, it’s going to be available on the Pascal architecture only. NVIDIA so far has not announced any consumer/workstation products with this architecture, but if you’re currently planning any major GPU purchases, you may want to hold off for a while.

On page 15 of this CUDA presentation, they show an (eventually constant) performance hit of about 40% for a memory footprint beyond 70GB (on a 16GB GPU) on a large-scale fluid simulation.

NVIDIA already announced the Tesla P100, and the GeForce GTX 1080 Ti and Titan based on Pascal, with 16GB and 32GB variants, should be interesting to see in action!

Can you please link to an official source for the 1080Ti?

:confused: Uf… uf… if those who make the product count as official - let me google that for ya: nvidia 1080 ti

Was revealed yesterday by Nvidia CEO at the 2016 GPU Technology Conference!

That is not true. The chip they revealed (GP100) has so far only been announced to appear in the Tesla P100. There was no GeForce GTX 1080 or “new Titan” reveal. There are lots of rumors and purported specs floating around the web, but nothing official.

The GP100 is a very large chip, made with a new manufacturing process. The yield at this point may be too low for a consumer product. It’s not a certainty that any of the next consumer GPUs will use GP100 at all, they may well have different chips based on Pascal or Maxwell refreshes.

Does the GTX 1080 run games at 1080p?

What was announced was a cut-down version of the GPU for the GTX 1070 and 1080, and yes, only the GP100 will have the full thing. The official announcement for these cards will be at Computex 2016!

You do realise I did do that, and could not find an official source… all that linking makes you look like is an idiot.

Nothing is confirmed until that announcement happens… I cannot count the number of times I have seen spec sheets change pre-release. Hence why speculating is pointless and waiting for the announcement is needed.

I guess we’ll see next month then, but it would not be the first time NVIDIA leaks specs at GTC to make the official announcement a month later at Computex!

In any case, I look forward to a 16GB card, assuming that the specs hold!

Yes, and yesterday I was born :yes:… maybe you haven’t learned anything from history, maybe you don’t comprehend the meaning of the words you so blindly pass by.
NVIDIA has a schedule to keep. Public announcements are made on occasion. “Official” is just another word to pleasure you in ‘oh, my God’.

You knew the answer to the question, you read my sentence and you wished to cause me pain by degrading, diminishing my existence. Good for you :stuck_out_tongue:

No, I was wondering if you actually found an official announcement that I didn’t? I don’t automatically assume people are lying on the internet when there is a claim… I just ask for a reference.

Ah, that’s too bad, but the truth is also that GPU RAM is accessed much faster and generally operates faster.
Cards constantly get more RAM as well - since I left the poor GTX 570 behind, I haven’t had any out-of-memory issues with my work.

Very good news, thanks for the info. Can’t wait for official details

Maybe some love should be given by software developers to heterogeneous computing and APU architectures by using OpenCL. Expensive marketing hype is not good for progress.

What are you using now?

Is CUDA 8 compatible with Maxwell, for my GTX 970?

Nope


CUDA 8 is not even released yet. Your card will be supported for sure, but not the new memory management.
That is Pascal-only.

Cheers, mib