GPU upgrade for Eevee viewport speedup?

Hi,

I find myself working a lot with the material preview mode in the viewport to manipulate textured, shaded geometry (no animation). Sadly, for my purposes, solid mode with texture display isn't (yet) cutting it.

So I have a bit of a performance problem running Eevee on my GTX 1070 in the scenes I'm dealing with. I'm wondering whether upgrading to an RTX 2080 Ti would yield a significant boost in viewport speed, or whether this is a situation where too much depends on other parts of the system?

Is going by video game benchmarks a useful metric to pick a card to use with Eevee?

Hi.
The GTX 1070 is a very good graphics card; you shouldn't be having major problems with Eevee. Before spending money on a new graphics card, you should make sure the problem isn't caused by some other component of your machine, by something unrelated to OpenGL, or by a limitation in Blender itself.
The best way to find out is to share a scene where you have problems so other users can analyze it. You should also give more information about your system, such as your OS, amount of RAM, CPU, and screen resolution (1080p, 2K, 4K?).
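
As a minimal sketch, you could paste something like this into Blender's Python console to gather part of that info (Blender version, OS, CPU); RAM and screen resolution would still need to be reported by hand:

```python
# Minimal sketch: print some of the system info requested above.
# Run from Blender's Python console.
import platform
import bpy

print("Blender:", bpy.app.version_string)
print("OS:     ", platform.platform())
print("CPU:    ", platform.processor())
```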

Well, I can't really upgrade the rest of my machine; I'd have to replace it. 4790K @ 4 GHz, 32 GB RAM, the usual.

I can see from benchmarks that the 2080 Ti has about 1.6 times the performance of the GTX 1070. I'd be happy with that kind of speed improvement if it turned out to be the case.

My scenes are static geometry, not very complex but with rather elaborate shaders and large textures.

Here are some Eevee screengrabs; note the scene statistics:


Three or four lights, contact shadows enabled. This is the kind of view I’m working in and I’d really like it to be faster.

If you do indeed upgrade, I'd go with the 2070 Super if I were you. You could get two for around the same amount as a 2080 Ti. I have three EVGA 2070 Super Hybrids. Me likey. But yeah, a 1070 shouldn't be too shabby for Eevee…

But that does not seem like a real upgrade - none of them do? My card is four years old now so it’s well due for a replacement.
I’m not doing GPU rendering, just want the viewport to be as fast as possible.

I’m kicking myself for not having gone with the 1080 Ti back before the Bitcoin craze.

I'd personally say it is a significant upgrade. The OptiX/RTX bells and whistles make the 2070 Super quite formidable. Yes, I GPU render (with E-Cycles), and I need at least three GPUs for my work, but I have to think that if you are working in any kind of professional capacity, the upgrade to RTX is dough well spent. And you would have the ability to actually enjoy some GPU rendering, should the need come up. I haven't looked at the benchmarks lately, but I'm pretty sure the 2070 Super renders more than twice as fast as a 1070 in Cycles/E-Cycles when using OptiX.

Sure, but viewport speed doesn't seem to be affected by the RTX stuff (and I don't know what OptiX is)? That's really all I care about for working on game assets. :slight_smile:

Of course, if someone can point me towards a comparison that shows the less pricey Nvidia RTX cards giving a major jump over the 1070, then I'd be happy to save some cash. Prices for the 2080 Ti are a joke …

I’m not looking for a 10 or 20% performance increase though. At the lousy fps I’m getting that would probably not feel different at all.

It is not necessary, your machine is good enough.

Is this an OpenGL/gaming benchmark? If it is a CUDA/OptiX benchmark, it has no bearing on Eevee. Anyway, I guess the RTX 2080 Ti is also better in OpenGL.

It would really be necessary to see whether Eevee/OpenGL is responsible for the bottleneck. Remember that in Blender 2.8 there are many things that are not yet optimized and that cause slowness in the viewport, such as the Subdivision/Multiresolution modifiers. As I said, it is best that you share a problematic scene for other users to analyze, including RTX 2080 Ti users, so they can see whether they notice improvements.

It is very important that you mention your screen/display resolution here. Eevee/OpenGL behaves like games: the higher the screen resolution, the more graphics capacity and VRAM are required to reach higher fps…

Can't really share scenes, I'm afraid. But suffice to say my issue is not with subdivision modifiers. The only modifiers I'm really relying on are Data Transfer, and they don't seem to slow things down much either way. Lights, however, do have an impact.

Beyond that, I'm on a twin-monitor setup, but the Blender view is running on a single screen at 2560x1440.

The benchmarks I’m referring to are simply gaming FPS scores for various videogames. The stuff they list in video card reviews.

Thanks for your input so far!

Oh, and the view is fast as soon as I switch to Solid mode, so I guess that rules out modifiers on the mesh anyway.

Well, I think you can do a simple test. Open a problematic scene where you notice a slow viewport. Play the animation and, while you orbit the view, measure the fps (shown in the upper-left corner of the viewport).
Then, from the OS display settings, reduce the screen resolution to 1920x1080. Open the same scene and, under the same conditions, measure the fps again while orbiting the view.
If you do not get a significant gain in fps, I do not believe the graphics card is responsible for the slow viewport.
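
As a minimal sketch of how to set that test up from Blender's Python console (assuming Blender 2.8x property names), this flips on the playback FPS readout and switches any open 3D Views to Material Preview shading:

```python
# Minimal sketch, assuming Blender 2.8x: show the FPS counter during
# playback and make sure the 3D View is in Material Preview (Eevee).
import bpy

# FPS readout appears in the top-left corner of the viewport during playback.
bpy.context.preferences.view.show_playback_fps = True

# Switch any open 3D Views to Material Preview shading for the test.
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        area.spaces.active.shading.type = 'MATERIAL'
```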

^ that… plus, Thomas, which Nvidia driver version do you have? If you're using Blender 2.83+, you might want to make sure you're pretty up to date, like 441.xx or better.
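
If you want to double-check which driver Blender is actually using, a quick sketch from the built-in Python console (assuming the bgl wrappers available in 2.8x builds) would be:

```python
# Minimal sketch: print the OpenGL vendor/renderer/version that Blender sees.
# On NVIDIA cards the driver number (e.g. 441.xx) appears in the version string.
import bgl

print("Vendor:  ", bgl.glGetString(bgl.GL_VENDOR))
print("Renderer:", bgl.glGetString(bgl.GL_RENDERER))
print("Version: ", bgl.glGetString(bgl.GL_VERSION))
```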

My driver is … old. Neolithic age or thereabouts. I need to check what happened there because I do remember installing new drivers since 2016. Driver panel says otherwise. Lies!

At any rate, scaling down the viewport or downsizing the Blender window greatly increases viewport performance. Perhaps a magnifying glass would be the appropriate solution then. :slight_smile:

WOW! The hair is great! Did you make it yourself, in Blender?

Thanks, yes that’s my day job. Blender is the ideal choice for it.

How about a couple of questions?

It appears that the hair mesh is made of extruded planes, rather than cubes. Is that correct?

And, I noticed that the face mesh is triangles, rather than quads. Is that your standard way of modeling? For any particular reason?

Thanks for sharing!