Nvidia Volta & Maxwell GPUs

Both of these look exciting for Cycles:

Huang said that the future Volta GPUs would have an aggregate of 1TB/sec (that’s bytes, not bits) of bandwidth into and out of the stacked DRAM, and added that this would be enough to pump an entire Blu-Ray DVD through the memory in 1/50th of a second.
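As a rough sanity check on that claim (my arithmetic, not Huang's): 1 TB/s sustained for 1/50th of a second moves about 20 GB, which is in the same ballpark as a single-layer Blu-ray disc (25 GB), so the numbers roughly hold up.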

The future “Maxwell” GPU, which looks like it will come out around late 2013 or early 2014 if this roadmap is to scale, has its own memory enhancement, which is called unified virtual memory. Simply put, unified virtual memory will let the CPU to which a GPU is attached see into and address the GDDR memory on the GPU card and conversely will allow for the GPU to see into the DDR main memory that is attached to the CPU in a system.
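The programming-model side of this idea looks roughly like CUDA's managed memory (cudaMallocManaged, added in CUDA 6): one pointer that both the CPU and the GPU can dereference, with the runtime migrating the data behind the scenes. The Maxwell feature described above would presumably back this with hardware. A minimal sketch, with a made-up kernel and sizes just for illustration:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel: the GPU scales an array it never explicitly copied over.
__global__ void scale(float *data, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= s;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;
    // One allocation, visible to both the CPU and the GPU;
    // the runtime/driver migrates pages on demand.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;     // written by the CPU
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // read/written by the GPU
    cudaDeviceSynchronize();                        // wait before the CPU reads it back
    printf("data[0] = %f\n", data[0]);              // prints 2.000000
    cudaFree(data);
    return 0;
}
```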

But will Nvidia decide to allow users to enjoy general OpenGL rendering speed that is at least on par with the last few generations (i.e. something that is not surpassed by the old 8800GT I have in my old machine)?

My new ATI card does not seem to have this problem as much (at least I don't have to enable certain options to make high-poly meshes workable).

It also depends on whether they decide to maintain current CUDA speeds on the GeForce line (I'm not sure if they intend to require users to upgrade to a new Titan-esque GTX line to get a good level of performance).

Overall, the specs on these future architectures look pretty good, but I still wouldn't fully discount the possibility of having to spend thousands of dollars if you want full, unbridled performance.

Now I want to hear from the people who said that GPU rendering is not the way to go because of the memory limit. I know there may be other problems, but the speed benefit is right there and it's far faster than the CPU methods. In the next two years we may clear the memory limit and then… who knows… I'm always on the side of GPU rendering, and this reinforces my opinion. The future is bright for GPU rendering :slight_smile:

Memory will remain an issue for GPU rendering unless they put at least 16GB of stacked RAM on the GPU. Unified memory may seem like the solution for the lack of vRAM, but bandwidth will still be an issue. DDR3 offers up to about 20GB/s (dual channel), and DDR4 could hit 40GB/s, while a GPU like the HD 7970 or the Nvidia Titan has roughly 300GB/s to its own memory. That gap could be a huge bottleneck for GPU rendering.
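To put rough numbers on that (these are the usual published peak figures, so treat them as approximate), peak bandwidth is effective transfer rate × bus width:

DDR3-1333, dual channel: 1333 MT/s × 128 bit / 8 ≈ 21 GB/s
DDR4-2666, dual channel: 2666 MT/s × 128 bit / 8 ≈ 43 GB/s
HD 7970 (GDDR5, 384-bit bus, 5.5 GT/s): 5500 × 384 / 8 ≈ 264 GB/s
GTX Titan (GDDR5, 384-bit bus, 6.0 GT/s): 6008 × 384 / 8 ≈ 288 GB/s

So even a fast desktop memory system is around an order of magnitude behind a high-end GPU's local memory, which is exactly the bottleneck being described.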

If the bottleneck is the motherboard, let's get rid of it and just run the video card for our desktop operations. We will store all our data in the air, so there should be no performance issues with that.

What will we do when we get everything pixel-perfect and can match the real world? Maybe we can shift our technology into kite-flying technology!

Maybe we already are pixels?

And on top of that, GPU architectures will have to evolve in a way that makes it a lot easier to program general tasks for them (i.e. one reason why there's no easy way for Cycles to have an optimal hair rendering solution for the GPU compared to the CPU).

There are a number of things in 3D that are still very difficult to get working on the GPU (at least in a way that's a lot faster than the CPU). The big challenge would be this: how do you make programming for the GPU as flexible as programming for the CPU without sacrificing a lot of speed in the process? I've even heard of Otoy running into situations where it wasn't easy to code a feature for Octane because of its pure, GPU-based nature.

I can’t speak for others, but at least for myself: I never meant that GPU rendering would never be the way to go, just that it isn’t the way to go right now.

And it's not just because of memory limitations, it's also because of a lack of standards support. What's nice about CPUs is that, by and large, you can write once and compile-and-run anywhere (i.e. hardware vendor independence). GPU support, on the other hand, tends to be a bit of a nightmare right now, where even within the same vendor you often need hardware-specific code to get it running on all cards.
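A concrete (if hedged) illustration of that "hardware-specific code within one vendor" point, using CUDA: the same source typically branches on __CUDA_ARCH__ and gets compiled once per target architecture (one nvcc -gencode entry per GPU generation you want to support), because instructions like warp shuffles only exist from Kepler (sm_30) onward. The reduction below is just a toy:

```cpp
// Toy warp-level sum, illustrating per-architecture branches within one vendor's API.
__device__ float warp_sum(float v) {
#if __CUDA_ARCH__ >= 300
    // Kepler and newer: warp shuffle instructions, no shared memory needed.
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down(v, offset);
    return v;
#else
    // Fermi and older GPUs have no __shfl_down, so a shared-memory
    // fallback (omitted here) has to be written and maintained instead.
    return v;
#endif
}
```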

The GPU landscape is just too wild right now compared to the CPU landscape, and a lot of development effort that could otherwise be spent on other things has to go into wrangling the wilderness if you're trying to support GPUs widely.

Once things are to a point where you can write code once, and with only minor, easy-to-make tweaks get it to run on a wide range of GPUs from multiple vendors… then it will be time for GPU rendering.

That’s not to say that trying to support GPU rendering is a bad thing. It’s not, it’s a great thing, and will help push the industry towards the above goal. But it’s always going to lag behind CPU support simply because it’s a pain to develop for right now.

Now I want to hear from the people who said that GPU rendering is not the way to go because of the memory limit. I know there may be other problems, but the speed benefit is right there and it's far faster than the CPU methods.

It was always obvious that the memory limit would be alleviated sooner or later. It's also obvious that you need a system for streaming in textures if you want to deal with hundreds of gigabytes of texture data - whether you are on the CPU or not.
I doubt you could honestly find somebody who really made that "it will never work due to memory limits" straw-man argument that you're putting up.

What wasn’t (and still isn’t) so obvious is how the programming model of GPUs will change. Developing complex applications (and by that I don’t mean things like Photoshop filters) for heterogeneous platforms such as OpenCL will likely remain unreliable and unproductive. Platforms like CUDA will progress and improve more quickly, but remain proprietary.

Also: The speed benefit isn’t really that significant for GPGPU rendering. We’re looking at a price/performance difference of maybe 2x or less compared to CPUs. At the same time, we can see the GPU break down at certain tasks like running the hair code (it’s actually slower!). At the moment, it’s unknown whether that problem can be solved - and it will take more engineering effort just to figure it out.

Maybe… http://www.simulation-argument.com/

Is it that complicated to code for the GPU? Ah, darn…

For me and my workflow, the $260 GTX 570 is a blessing and works perfectly. It might not be what others need.

I am curious whether at some point they can also connect the CPU and GPU and fuse them together, as Intel and AMD have been trying to do for some time.

From what I've heard other developers saying, yes. I don't have first-hand experience, though, so take that with a grain of salt.