Slow render, 1080Ti

I replaced a broken Titan (first generation) with a 1080Ti, and I’ve noticed that even with simple materials (a texture into the Bump node’s Height input, with simple glossiness mixed at 0.05 with diffuse), my renders take significantly longer. First of all, “Updating Shaders” is much slower (about twice as slow as with the Titan), and the actual rendering is about a third slower than the Titan at a 256×256 tile size.

Why and what can I do about it? I was told here on Blender Artists that 1080Ti would be faster than my Titan.

The scene is quite simple, just a few polygons. With Titan+1070 it was 8-12 sec depending on the view; with 1080Ti+1070 it is 12-22 sec.

The high-end cards, and especially the Titan dGPUs, have a huge number of CUDA cores and have become very complex in design in recent years. To take advantage of these special features and the insane computational power you need complex scenes (meshes, shaders, effects, etc.). As you stated, your scene is very simple; I’d guess the render is finished before the dGPU even gets up to speed.

I have just finished benchmarking the 1080Ti. Here are the render times for the Blender Institute reference scenes, with Blender v2.79b:
bmw27 linux: 02:13
bmw27 windows: 02:17
classroom linux: 04:29
classroom windows: 04:37

Would it help to render it as an animation? Then the GPU gets up to speed for the next frame, or would it start all over again with the next frame?

I bought this card because I wanted speedy render times for animations. There is no way I can render an entire animation at 2-4 minutes per frame. I’m talking about an animation with a total length of 2-4 minutes; that’s a lot of frames to render, especially at this slow speed! That’s why I kept my scene simple.
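To put that frame count in perspective, some quick back-of-the-envelope math, assuming a typical 24 fps frame rate (the OP doesn’t state one):

```shell
# Frame counts for a 2-4 minute animation at an assumed 24 fps
FPS=24
MIN_FRAMES=$((2 * 60 * FPS))      # frames for a 2-minute animation
MAX_FRAMES=$((4 * 60 * FPS))      # frames for a 4-minute animation
# At ~2 minutes per frame, even the 2-minute cut needs:
HOURS=$((MIN_FRAMES * 2 / 60))    # total render time in hours
echo "${MIN_FRAMES}-${MAX_FRAMES} frames; at 2 min/frame that is ${HOURS}+ hours"
```

That is 2880-5760 frames, so even at the low end of 2 minutes per frame the render farm math gets ugly fast on a single machine.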

Perhaps Eevee is going to save me?

It’s not an old car, it doesn’t take time to “get up to speed”. :wink:

2-4 minutes per frame is pretty good for a single-GPU system. High-quality output can easily be in the 10+ minute range, even with a multi-GPU system.

Eevee isn’t even up to alpha testing yet, so any savings there (and they won’t be significant) will come at some undefined point in the future.

Well, yes, it does! Search for “CUDA GPU warm-up”. If the GPU is not being pushed, the power management will clock down the memory and shader cores to save power while sitting idle. It can take a second or so to get the speeds back up again. Of course, if you are benchmarking for several minutes this time is negligible.
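You can watch this happen yourself: `nvidia-smi` can report the current shader and memory clocks. Polling it while you start a render shows the ramp from the low idle clocks up to the boost clocks (the 1-second interval here is just an example):

```shell
# Poll current SM (shader) and memory clocks plus GPU utilization
# once per second; start a render in another terminal and watch
# the clocks ramp up from their idle values.
nvidia-smi --query-gpu=clocks.sm,clocks.mem,utilization.gpu --format=csv -l 1
```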
Nonetheless, I just wanted to highlight that the OP’s workload is not very indicative of the true performance of these dGPUs.

The scene is quite simple, just a few polygons. With Titan+1070 it was 8-12 sec depending on the view; with 1080Ti+1070 it is 12-22 sec.

The other thing you are probably measuring is the pre-compute and post-compute tasks. Pre-compute sets everything up for the render: building the BVH, loading images, compiling shaders, synchronizing with the GPU. Post-compute covers any compositing setup and finishing tasks. All of these tasks are CPU-bound.

This means that what you are measuring is more than just the GPU speed.

The other thing is: are you rendering via the command line? Otherwise there is a lot more overhead from the Blender interface as well.
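For reference, a background render skips the UI overhead entirely. Something along these lines (the .blend filename, output path, and frame range are placeholders):

```shell
# Render frames 1-250 of the scene with Cycles, no UI.
# -b = background, -E = render engine, -o = output pattern,
# -s/-e = start/end frame, -a = render the animation.
blender -b scene.blend -E CYCLES -o //render/frame_#### -s 1 -e 250 -a
```

Note that `-a` must come last: Blender processes its arguments in order, so options placed after it are ignored for the render.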