Why is Blender's viewport performance so slow (even slower than before)?

Hey everyone!

I haven’t been on here or used Blender for close to 6 years, but I started again recently because of some of the cool changes in 2.8. For the last 6 years I have been a proud Maya user, and at times I missed Blender, but I’ll be honest: I know 2.8 is in its early stages in a lot of ways, but I am kind of shocked at the overall lack of viewport performance. It is not a deal breaker for me, and I know Blender would be “just fine” without me. Not trying to rip on it… it’s just a really amazing tool and the one I really spent the most time with, but I feel like my enjoyment is sorta being throttled.

My main area of concern is high poly counts and playback. In Maya I am working scenes with close to 30,000,000 polygons when subdivided and can play back animations of that in the viewport at close to 80 fps. Meanwhile, Blender 2.8 starts throttling on me at around 600,000 polygons, somehow downgraded from the 3,000,000 I was throttling at in Blender 2.7. I have heard countless times that the infrastructure is just not built for that, blah blah blah… I am sure that plays a part, but the rollback in already comparably poor viewport performance is concerning. I have watched people running scenes in Blender with poly counts in the tens of millions, so I know it can be done, and I know not everyone is using a 64-core Ryzen CPU to get those numbers. I will admit I do not have the most up-to-date hardware, but I have not needed it as a Maya user, where a 4770K, 32 GB of RAM, and a couple of GTX 1080 Ti cards have been enough to seemingly do what Blender would cost me thousands more to do.

I am POSITIVE there has to be more to why blender seems to be so slow for me and I am curious if anyone has any hardware related optimization tips to run more polygons in the viewport.

Also, given the giant plummet in viewport performance from 2.7 to 2.8: does anyone know if this is normal or expected? Is this issue on the roadmap for being resolved, and are these types of issues being considered a “priority,” or are the priorities right now just getting new features in place?

Thanks everyone and it’s good to be back after so long! :blush:

It’s normal. Blender’s current performance isn’t where it needs to be; some of it is actively being worked on. I doubt it reaches other DCCs’ level anytime soon, but at least some cases where it’s almost unusable are getting improved.

Edit: I’d suggest trying out the alpha version of 2.9 and see if there’s anything that improves in your use cases.

Edit 2: subdivision performance was/is broken; the fix for some cases is in 2.9. Reading through this thread: Blender 2.8 Viewport Performance might give you some idea of what some of the performance sinks are.

I appreciate it, and it’s good to know I’m not totally crazy. It just feels like such a performance blow. Generally speaking I am a huge Blender advocate and love talking it up over Maya, but when I have something large and production-level to work on, I roll it right back to Maya, because I can trust, somewhat without fail, that it can take whatever number of subdivisions I throw at it. Even without smoothing, I have run particle systems in Maya with over 2 million hair particles as faces, and it takes a moment to set up, but once it’s up I see absolutely no performance decrease from having an additional 2,000,000 faces in my scene. Meanwhile, Blender doesn’t like 2,000,000 hair particles WITHOUT faces. It is very frustrating. But I am glad it sounds like it is at least noted, normal, and probably something will be done about it. I will be happy if I can just model something with more than 6,000,000 polygons without any seriously notable throttling.

The other big thing I noticed is that Maya takes a bigger hit on complex shapes, while a really smooth primitive makes virtually no impact on performance. If I make a car, the scene might cap out at 30,000,000 polygons before any intolerable lag… while I have benchmarked a basic sphere as high as 120,000,000 and was still able to animate it and play it back in the viewport at 35 fps. Nothing to write home about; it’s a sphere, and I will never need a sphere that is 120 million polygons, but the principle is: complex shapes take the hardest hit, and it doesn’t seem to affect basic shapes as badly. Blender will throttle on a sphere no problem. I bump the subdivisions up when I make the sphere so there are even just 200,000 actual faces and, man, it just doesn’t like it. It doesn’t care at all whether it is a simple shape or a complex shape; faces are all the same to Blender, it seems.

I really hope they get this worked out. I really want to be able to use Blender for bigger projects. I love Maya and would never leave it if it didn’t have its own slew of problems that Blender has already addressed, but as it stands, I have to bounce between the two, like, pick your poison.

For any other modeling application, my hardware meets and exceeds expectations. Sure, a 64-core processor would speed up even Maya or Modo, but I just don’t need it to, because they just work as is. I hate that right now that seems to be the only fix for Blender.

I can work easily in Houdini with models that will practically freeze Blender. Last time I tried to set something a bit heavier up in Blender for baking in another app, it took over an hour for operations that could be done in under a minute if there were no performance issues. Currently I do all my baking setup in Houdini with no problems. From my experience, anything over 1.5 million polygons will start to feel slower. 3 million is still workable, but getting near 9 million is pretty unworkable. It will work okay-ish for the first few steps, but then the old (2.82) undo system will eat up tons of RAM over a few small moves. In Houdini the same data will stay under 8 GB, and basic transforms and edits won’t kick it up to 4 or 5 times that like in Blender. And this is ignoring subdiv, which can’t handle even some 50k meshes with decent topology without freezing.

At least they are aware of some of those issues, but setting 2.79 as the target, which wasn’t that great either, doesn’t feel like the best move.

Yeah, the subdivision modifier is really a kicker right now too. I thought maybe turning it off in the viewport might help, but I have noticed that just having it on at all can really do a number on a scene. I used to complain a lot about how my 5-million-polygon scene would crash in Maya once a day, maybe twice tops, even when everything seemed like it was going smooth and nice. I have learned to accept it, but I really can’t cope with the fact that I pop a subdivision modifier on something with a few hundred faces, then disable it in the viewport, and I’m lucky if it doesn’t crash.

I don’t suppose you happen to know why Blender handles subdivisions of all shapes the same? I could right-click subdivide and add 1,000,000 polygons to a sphere, and it is seemingly just as slow as any other 1,000,000-polygon mesh. One thing that Maya and 3ds Max really excel at, and most other apps too as far as I can see, is that heavy subdivisions on a basic shape have less effect on performance than on complex shapes, but Blender seems to handle them all equally terribly. That is very strange to me, and of course it’s something I hope gets resolved. I don’t know if I am more curious why Blender doesn’t do this or why Maya handles it so well.

I don’t have enough knowledge on the subject to make educated guesses, but the OpenSubdiv library that Blender has used since 2.8 should in theory be pretty fast. So that only leaves the implementation in Blender at fault. There have been multiple improvements to the speed, and fixes for subdiv behavior that was plain broken. Currently subdiv runs only on the CPU, whereas the OpenSubdiv library supports the GPU. So I’d guess whatever optimizations you can use with primitives just aren’t implemented, or not properly implemented yet. Or maybe they’re there, but bottlenecked by some other performance problem.
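One thing worth keeping in mind here is that the raw output size of subdivision doesn’t depend on the shape at all, only on the base face count: each Catmull-Clark level splits every quad into four. A quick back-of-the-envelope sketch (assuming an all-quad mesh; `subdivided_faces` is just an illustrative helper, not anything from Blender’s API):

```python
def subdivided_faces(base_quads: int, levels: int) -> int:
    """Faces after Catmull-Clark subdivision of an all-quad mesh:
    every level splits each quad into 4."""
    return base_quads * 4 ** levels

# A 6-quad cube and any other 6-quad mesh grow identically:
print(subdivided_faces(6, 3))    # 384 faces
print(subdivided_faces(500, 6))  # 2,048,000 faces from just 500 quads
```

So once the surface is fully tessellated, a “simple” sphere costs the viewport exactly as much as an equally dense complex mesh; apps only make primitives feel cheaper when they evaluate the subdivision surface adaptively (e.g. on the GPU) instead of pushing every generated face through the same path.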

It’s possibly a performance issue with Windows for me right now. I have noticed a slow decline for around 6 months now, with my whole computer slowing down and freezing up a bit, but Maya still seems to work fine in the meantime, so I don’t think a thorough debug will help the poly count issue, but I am sure it will help me figure out why Windows is acting up, which may help a bit.

Just tried Blender 2.9. A lot of bugs seem to have been flushed out. It even seems more stable than 2.8 in terms of performance, but for me at least it seems to come with a trade-off. For some reason anything material-related is super problematic. It will be working, working, working, then my fans run at full blast and my computer just DOES NOT like it. Even if nothing is rendering, just changing settings in the shader editor is a nightmare. The problem seems to get worse when there are more than a few textures in a shader. One or two doesn’t seem to be a problem; it runs pretty smooth, but loading complex node trees, I’m shocked. It’s like all of Blender is about to come crumbling down. I really love 2.8/2.9 and I fully recognize these are alpha and beta builds… but man, these don’t have half the stability of 2.7. I know this is because of a lot of the changes I just said I loved, but it is generally tough. I really hope they get this smoothed out quickly.

Eevee’s material code got a refactor recently; it might be something related to that. There’s still tons of stability and performance debt from 2.8 that I hope gets improved.

There are a few things here. One: Blender is laggier than it was because they did a quick fix to make it more stable. This will probably be fixed shortly; 2.9 should be way better. Two: running Blender on Linux is supposed to be faster. Three: for character animation in particular, Maya has GPU acceleration. This gives Maya the fastest character animation playback of any software.

My suggestion would be to turn off the subdivision modifier for the Blender viewport. This is a good idea in most programs to make them run faster. There is a plugin called Level of Detail Manager to make your scene have less geometry. To make the viewport run faster, there is probably also a plugin that hides things beyond a certain distance. It should also be kept in mind that even a 4K picture has only 8,294,400 pixels. If you have 30,000,000 polys in something, that is more than 3 polygons per pixel. Kind of overkill. LOD should definitely help with that without losing any visible quality.
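The pixel math above checks out in a couple of lines (a quick sketch; the function name is just illustrative, and 4K is taken to mean 3840 × 2160 UHD):

```python
def polys_per_pixel(poly_count: int, width: int = 3840, height: int = 2160) -> float:
    # 4K UHD: 3840 * 2160 = 8,294,400 pixels on screen
    return poly_count / (width * height)

print(polys_per_pixel(30_000_000))  # ~3.6 polygons per on-screen pixel
```

Anything much past one polygon per pixel can’t contribute visible detail at that resolution, which is the argument for LOD or baking down dense temporary geometry.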

Edit: LOD is not currently working with 2.8+

I see. That’s good to know, and I’m glad to hear some fixes are on the way. 2.9 is definitely a step in the right direction so far. And the 30,000,000 is a little bit of an exaggeration and certainly rare, but I’ve hit 25+ million. More often than not that is a temporary number for another purpose, later baked down to something lower. For instance, hair particles with textured faces applied at high density and then baked down: 2,000 hairs × 8 by 8 × 30, which I will then bake down into a mesh with only maybe 10,000 faces. It doesn’t usually stay at 30,000,000 for long, but that doesn’t mean I never cross paths with the number. The only exception to that is a 12,000+ pixel wide image for large high-DPI print, but that’s also rare. My point was more just that it can do it. But I gotta say, I don’t get Maya numbers in Modo or C4D either, though they still do better than even Blender 2.7. And I’ll give 2.7 credit where it’s due: I could tolerate up to 8,000,000 polygons in the viewport in 2.7, and that’s around the high average of what I typically need, I would say, so it definitely wasn’t always like this.

I guess in a way it’s reassuring to know that this is all happening because of some kinks in the implementation of things that in general will probably make Blender better…

I’ll look into that plugin. So the viewport is all CPU-intensive, then? Blender isn’t falling back on the GPU to handle large poly counts?

I’m not sure what Blender’s bottleneck is. It could be draw calls, which is a limitation of a GPU’s real-time performance beyond its raw speed.

I should mention the LOD plugin is not currently working with Blender 2.8+.

Oof, OK, I’ll keep an eye on it. It still seems like it will be useful if it gets updated to work in 2.8, so I still appreciate it.

Adding my current experience to this topic. I have a project where I make different low-poly assets. Each asset has its own collection. After finishing each asset I make a new collection and hide the others (Exclude from View Layer). Blender is getting slower after each asset. With only one empty collection in view, creating a new mesh like a cylinder takes seconds before it pops up. New documents work just fine. I have a pretty ripped gaming PC (Ryzen 7, RTX 2080 Super) with 32 GB of RAM and an SSD, so I guess that shouldn’t be the problem. Blender v2.83.2.

Edit: I tried 2.9… the same thing happens.

Yeah, I have never felt this held back in terms of viewport performance before. I know there is a lot that goes into it, and it takes a team of people who know every bit of Blender front to back to really know how to safely “fix” everything, but man, I really just want to be able to have a little more faith in Blender when I use it. Over in the other thread I was talking to a few other Maya users like myself, and we all sorta have varying experiences with Maya and Blender. Similar experiences in Maya, but Blender seems to be super stable for some of us, and for me and a few others it’s really sketchy. I don’t know specifically where the problem lies, but it certainly is raising some questions for me about whether it is a Blender issue specifically or an optimization issue.


Maya is also a much older and riper piece of software than Blender; it’s like a 10-year-old vs a 20-year-old. Yes, I’m buying a new computer (5900X) so I get better performance.

Try Blender 3.0. The viewport is faster than Maya and C4D, similar to Houdini; only 3ds Max is faster.

Thanks for the heads up. I plan to!

What are you talking about? Blender (1994) is older than Maya (1998).