Blender Edit Mode Performance

It’s worth running some tests right now so we can see the difference against the final version.

About Edit Mode:
Surely there are still CPU optimizations missing that will come after the crashes are solved.
I have not read anything about projects to optimize those modes or functions where the CPU is heavily used. So at least for now, what is expected is that 2.8 is no worse than 2.7. Don’t get false hopes about many improvements in this regard until the developers officially mention something about it.


Does anyone know whether raw viewport performance, especially in Edit Mode (orbiting, selecting, moving), will benefit from multicore CPUs? I always thought it would mainly depend on graphics card power…

I do not know.
I forgot to mention before that the graph of CPU usage was taken while moving vertices.

By the way, to run tests, anyone could monitor CPU and GPU usage, keeping the monitoring apps on top while working in Blender.
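If you want to log per-core CPU usage alongside your Blender session rather than eyeballing a monitor app, something like the following works. This is a minimal sketch, not anything Blender-specific: it samples Linux’s `/proc/stat` counters over a short interval (so it assumes Linux; on Windows or macOS you would need a library such as `psutil` instead).

```python
import time

def read_cpu_times():
    """Parse /proc/stat and return {core: (busy, total)} jiffy counts."""
    cores = {}
    with open("/proc/stat") as f:
        for line in f:
            # Per-core lines look like "cpu0 ...", "cpu1 ..."; skip the aggregate "cpu" line.
            if line.startswith("cpu") and line[3:4].isdigit():
                name = line.split()[0]
                fields = [int(x) for x in line.split()[1:]]
                idle = fields[3] + fields[4]  # idle + iowait
                total = sum(fields)
                cores[name] = (total - idle, total)
    return cores

def sample_usage(interval=0.5):
    """Return per-core utilisation (0.0-1.0) over a short interval."""
    before = read_cpu_times()
    time.sleep(interval)
    after = read_cpu_times()
    usage = {}
    for core, (busy1, total1) in after.items():
        busy0, total0 = before[core]
        dt = total1 - total0
        usage[core] = (busy1 - busy0) / dt if dt else 0.0
    return usage

if __name__ == "__main__":
    # Run this in a loop while orbiting/moving vertices in Blender
    # to see which cores are actually loaded.
    for core, u in sorted(sample_usage().items()):
        print(f"{core}: {u:6.1%}")
```

Sampling while moving vertices versus while idle makes it easy to spot whether the load sits on one core (single-threaded code path) or spreads across all of them.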

I don’t think that CPU usage reflects the actual viewport performance.

Ignore the RAM usage; Houdini has the whole scene open.


I do not know about other programs, but Blender developers have explained these things many times:

Depending on the mode (Object, Edit, Sculpt Multires/Sculpt Dyntopo), some things are multi-threaded on the CPU, some other things are single-threaded on the CPU, and other things are GPU tasks.
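To make the single- vs multi-threaded distinction concrete, here is a toy sketch of the same hypothetical per-vertex task (translating vertices, a stand-in for what a mode might do internally; this is not Blender code) written both ways. Note that in CPython, threads won’t actually speed up CPU-bound work because of the GIL; the point here is only the structural difference between the two code paths.

```python
from concurrent.futures import ThreadPoolExecutor

def translate(vertex, offset):
    """Hypothetical per-vertex task: move a vertex by an offset."""
    return tuple(c + o for c, o in zip(vertex, offset))

def translate_single(verts, offset):
    # Single-threaded path: one core walks the whole vertex list.
    return [translate(v, offset) for v in verts]

def translate_multi(verts, offset, workers=4):
    # Multi-threaded path: the same work divided across a thread pool.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda v: translate(v, offset), verts))

verts = [(float(i), 0.0, 0.0) for i in range(1000)]
assert translate_single(verts, (1.0, 0.0, 0.0)) == translate_multi(verts, (1.0, 0.0, 0.0))
```

In a monitoring graph, the first path shows up as one core pinned near 100% while the others idle; the second spreads moderate load across all cores, which matches the shapes described above.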


In theory it should, but there are still a lot of dependencies on the CPU for performing the basics that programming languages need to do.

But there are a ton of things that factor into performance: too many threads, lack of hardware acceleration instructions (see SSE, MMX, etc.), I/O access, memory leaks, and memory management.

In the end it’s the GPU that plays the larger role, but modern CPUs have accelerated quite a lot lately, taking advantage of GPU-like technologies. The question is how much of that Blender can utilize when it still has CUDA issues.

For example, ZBrush has been able to support tons of polygons from its very early days because it was probably the first 3D app to perform RAM compression. It’s a bit like cheating, but it allowed for millions and then billions of polygons. This is only one of countless techniques, and it has nothing to do with how well it takes advantage of the CPU/GPU.

The art of optimization is pretty close to necromancer magic: it’s dark, you have to sacrifice code readability, and it’s a pain to maintain, like zombies that keep coming back from their graves.

Hence why developers avoid it like the plague.


Thanks for the info. I think Max uses much the same kind of “cheating”. Especially with the Quadro maximum-performance driver, the 2009 version was awesomely fast, but when using snapping at extreme zooms it doesn’t snap precisely, no matter the units setup. Now with the Nitrous viewport it also works fast with consumer graphics cards, but snapping is still not as tight as in Blender. Seems like a trade-off…

Someone with a fast multicore CPU could check whether more cores effectively equal a faster viewport, because as you can see from your graphs, neither version uses all the cores.

I think that Max has a problem with its single-precision floating-point snapping algorithm (unless they updated it and it’s still buggy).
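For anyone curious why single precision would break snapping at extreme zooms, here is a small demonstration (hypothetical as far as Max’s internals go; it just shows the float32 behaviour itself). A 32-bit float only carries about 7 significant decimal digits, so a small snap increment that works near the origin simply vanishes far from it.

```python
import struct

def to_float32(x):
    """Round a Python float (64-bit) through single precision,
    as a hypothetical float32 snapping path would store it."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# Near the origin, a 0.001-unit snap offset survives single precision.
print(to_float32(1.001))             # ~1.001, distinguishable from 1.0

# 100,000 units from the origin, float32 spacing is ~0.0078 units,
# so the same 0.001 offset is rounded away entirely.
print(to_float32(100000.0 + 0.001) == 100000.0)  # True: the offset is lost
```

This matches the observation above: snapping looks fine in normal scenes but drifts at extreme zooms or far from the world origin, and no units setup can fix it because the error is in the number format itself.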

I have a strong 14-core i9 and 2× 1080 Ti, but I can’t see any boost/advantage working in Edit Mode at the moment.

Edit: to be precise, I only focused on selection speeds (selecting loop cuts, etc.). I will check transforms later…

But Christmas came early this year: the Cycles GPU/CPU hybrid mode gives me render speeds comparable to V-Ray’s.


All the threads are being used. The height I have marked with the white arrows corresponds to multi-threaded tasks, which are being used at less than 50% of capacity. The higher curves correspond to single-threaded tasks, which run at more than 90% of capacity.

It doesn’t only depend on how good the CPU is; there are also Blender limitations in this regard (as I think has also been explained by developers or some user).


Cycles is indeed very fast; you can set up your lighting and see the results in seconds.

I think I misread what you wrote earlier, since all the parts were multithreaded. Yes, indeed, the problem is in Blender’s architecture: it slows down to a halt no matter the hardware after reaching a certain polycount.

From looking at the other software, though, it seems that they use the GPU more than Blender does, and that’s probably what gives them a boost in editing performance. That said, 2.8 doesn’t have any problem displaying large numbers of polygons in Edit Mode, while 2.79 slows down quite heavily, so it can certainly handle the load better.

Another thread that may be useful in this regard, with interesting comments from Blender developers such as “Psy-Fi”:


Thanks, interesting stuff 👌

The software with the most impressive viewport is 3D-Coat. You can still sculpt 50 million polys without multires with ease; ZBrush can’t handle that.


That Houdini example has the speed I would like to see Blender 2.8 reach in Edit Mode (it is just amazingly fluid there).

But right now the devs are also very busy fixing bugs, crashes, and other issues that are preventing the Spring team from being productive (though some optimizations have already been made).


To use single floating-point precision in Max you need a Quadro.


I’ve tried to replicate the scene with the cube and it’s also slow for me. To be honest, I work in Max and it’s not rare for me to have meshes far beyond 100k polys; sometimes they go up to 750k, and “edit mode” in Max runs them smoothly. I really hope Blender’s Edit Mode will get more performance boosts.


Don’t know if this has already been said in this thread, but do some of you get bad performance with popovers? For example, when I want to subdivide a mesh, if I try to click multiple times on the right arrow to increase the number of subdivisions, it doesn’t register; you have to wait a little between each click. And it doesn’t have anything to do with computing the subdivision, because if I try to double-click the 3D cursor toggle, for example, the same thing happens, and it doesn’t happen in the menus or the Properties editor. Do you have the same thing happening?

Well, when you can see the wireframe (in Edit Mode), you’re right that AO isn’t really necessary, but it does help you get a better sense of your model’s shape compared to having it disabled. When the wireframe is not visible, and especially on hard-surface shapes, AO helps a lot.

I was just worried that they were going to reuse the old AO, which is either grainy or laggy. Eevee’s AO doesn’t seem to suffer from these problems, but the new Workbench AO looks like the old one in terms of render quality. To be honest, I should probably just stop worrying.