I certainly hope that there will be serious improvements, mainly because if the fancy overlays don’t get “under control”, I can’t see Blender in serious game modeling production, which heavily relies on being able to control high-poly meshes in modeling mode. Not to mention VFX; I mean really, Blender should be able to handle any scene with 20 million polygons (raw ones, not talking subd here) on a GTX 1060-equivalent card.
I am sure it will be improved. If the benchmarks some people posted here are to be believed, 2.8 is currently worse than 2.7x in terms of viewport performance, and that’s obviously not acceptable given that 2.7x was already quite slow compared to other software. I don’t think they can afford to regress on that front.
I wouldn’t be surprised to see major performance improvements over the next couple of months. Brecht and Clement have a lot of experience with writing fast code (and Brecht’s expertise will help with more than just OpenGL stuff).
As long as those two are on board and developing in paid positions, there should be no need for great concern right now.
Maybe a tip for the viewport: for those high-res-mesh film scene objects, try to make better use of collections.
Now that more such ‘layers’ can be enabled/hidden, and nested, it should be possible to handle large scenes better. E.g. hide everything and unhide only the few objects that you need, then work on them. Nesting is really great for this.
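To make the tip above concrete, here’s a rough sketch of why hiding collections helps: the viewport simply skips hidden collections when drawing, so their polygons cost nothing per frame. This is a hypothetical illustration in plain Python (the class and numbers are made up for the example), not Blender’s actual drawing code.

```python
from dataclasses import dataclass

@dataclass
class Collection:
    # Stand-in for a Blender collection: a name, a polygon count,
    # and a visibility flag like the eye toggle in the outliner.
    name: str
    poly_count: int
    hide_viewport: bool = False

def polys_drawn(collections):
    # Only visible collections contribute to per-frame draw work.
    return sum(c.poly_count for c in collections if not c.hide_viewport)

scene = [
    Collection("set_dressing", 15_000_000, hide_viewport=True),
    Collection("hero_asset", 500_000),
]
print(polys_drawn(scene))  # 500000 — the hidden 15M polys are skipped
```

With nesting, hiding one parent collection skips everything under it, which is why it scales well for large film scenes.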
Well, the benchmarks are to be believed. I’ve done tests myself, because my main interest in the new Blender is viewport performance and Eevee. And it’s bad. I can’t do any modeling on some of the meshes I’ve transferred from 3ds Max without frustration.
Retopology with a 500k high-poly mesh is much better now, though. Still not the best experience, because if I import a very high-poly model made of more than a few 500k meshes, Blender isn’t happy about it.
I really like Blender, so don’t get me wrong with my comments about performance. I can see it becoming equal to, if not better than, many commercial software packages out there, but it needs to have similar functionality to the rest.
Yes, that is why we now have collections in Blender. It’s how it’s supposed to work in almost every flagship commercial package: you hide something, the scene works faster, and you can easily manage what to hide and what not. But I think Blender still needs a lot of viewport optimizations: viewport mode degradation, fast navigation modes, the ability to turn off AA (because at the moment it’s constantly on), etc…
Can you confirm whether it is the viewport speed (e.g. when you rotate or pan) or whether the performance drops when you modify something? In my case even heavy scenes are snappy when opened, but Global Undo makes them unworkable (the screen gradually starts going white, with a spinner for several seconds). If so, please try disabling it and see if it makes a difference. Here’s an example of 3 minutes of work in a relatively light scene (note the RAM):
OK, that is an insane amount of RAM for a 20k mesh. For me, viewport performance outside of editing models is quite snappy even with a lot of stuff going on. It still can’t handle 20 million well, though. But when going into edit mode… any kind of edit is hell. Maybe I should mention in every post that I am referring to modeling performance, not viewport performance per se. Anyway, I have tried changing Global Undo. No difference on my side.
I do not know the default hotkeys, so it’s difficult for me to test 2.8, but undo changed after 2.79 (probably the same code in 2.8). Viewport speed is snappy, but things become very unresponsive as you manipulate the scene and geometry.
Example from one of the more recent 2.79 builds:
This is a game model. Ignore the edge/face stats; it’s a separate scene in the file. It’s relatively light, however, with only a few HP parts.
Here’s a simple face with a displacement modifier:
To reiterate: while I am unable to test 2.8 with the default hotkeys, I assume its undo code is the same as in the latest 2.79.x builds (nightly). I do not know if this helps anyone, but for myself I have identified Global Undo as the culprit. It makes everything uneditable and slow, as it throws all that data into memory on the smallest of edits (unrelated to actually pressing undo). This makes it very hard to use in production, even with relatively simple assets and scenes. Disabling it, saving preferences, and reopening Blender does the trick. Viewport speed itself is excellent.
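As a rough illustration of the hypothesis above (that a full-snapshot undo makes memory cost scale with mesh size times step count, even for tiny edits), here is a hypothetical sketch in plain Python. This is not Blender’s actual undo code; the class and sizes are made up for the example.

```python
import copy

class GlobalUndoStack:
    # Hypothetical full-snapshot undo: every push stores a complete
    # copy of the scene data, regardless of how small the edit was.
    def __init__(self, max_steps=32):
        self.steps = []
        self.max_steps = max_steps

    def push(self, scene_data):
        # Full deep copy per step: cost ~= mesh size * number of steps.
        self.steps.append(copy.deepcopy(scene_data))
        if len(self.steps) > self.max_steps:
            self.steps.pop(0)  # drop the oldest step past the limit

mesh = [0.0] * 100_000          # stand-in for a dense mesh's vertex data
undo = GlobalUndoStack()
for _ in range(10):
    mesh[0] += 1.0              # a tiny edit...
    undo.push(mesh)             # ...snapshots the entire mesh anyway

# 10 snapshots, each holding a full copy of the 100k-element mesh
print(len(undo.steps), len(undo.steps[0]))
```

This matches the behavior described in the thread: RAM grows with every edit even though nothing was undone, which is why disabling Global Undo helps with dense scenes.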
Still, being able to spin around and edit a film-quality scene at good framerates would be a great thing to have in Blender (which in a sense would simply be catching up the industry leading solutions).
If we want a mass movement of professionals to Blender, then super fast viewports and editing are an absolute must. What the core team can’t do is forgo major optimization and hide behind a workflow built around hiding objects.
I am on the latest 2.7x builds myself (with the undo changes) and, strangely enough, the undo changes seem to have no major ill effect.
On my Ryzen 2700X machine with a GTX 1060 card, I still appear to get good speeds when transforming very high-polygon objects (one million faces or more, with modifiers). In edit mode, selection is still pretty snappy, and the only time things get really slow is when I actually move vertices (but this has been the case with such objects for years).
On a mesh with 8 very dense cubes (together having more than a million verts with no modifiers), rotating the camera lags a little, but not to the point of being unusable, and box selection does not crawl.
Perhaps something is off specifically with 2.8 then?
This is exactly my perspective as well…as I stated earlier…it is pointless to make something shiny if it runs like a turd with a bow on it…
To be fair, there’s a difference between designing for performance and implementing all of the elements of that design. An airplane is designed to fly, but it’s not getting off the ground until you make the wings and bolt them on the body. However, you can still implement everything around those wings so that stuff is ready when they are.
What kind of design mistakes are in the viewport code?
Exactly. One obvious example here is that the shaders are not optimized for performance, but clearly favor the look. This is simply the current implementation. This does not mean that there will never be better performing shaders or that there will never be shaders which were built for performance and not primarily for the look. The design still allows this sort of adjustment in the future.
Well, to be doubly fair… you would also design those elements (wings, engine, etc.) based around the initial principle… whether it be performance or an aesthetically pleasing look… so sorry, but your analogy makes no sense to me.
EDIT: and to be clear I’m not saying I’m against ‘eye candy’, that would be plain stupid…I’m saying there are more important things to be considered beforehand.
I think you’ve missed my point. I’m not talking about eye candy at all.
Let’s try it without metaphors. Blender 2.8 is designed for better performance. The OpenGL upgrade and new drawing behavior is the infrastructure that will make performance increases possible. So although it’s designed for performance, we’re not yet seeing the benefits of that design.
Ironically, Eevee is mostly about ‘eye candy’ and not an increase in performance, mainly because the original viewport’s problem was not a lack of performance but its inability to take advantage, to a large extent, of existing hardware technology, lagging seriously behind other commercial software and of course even Cycles.
This forced people to mostly use Cycles for live previews of materials and other eye candy the viewport could not afford to do.
Of course, eye candy and hardware-accelerated performance go hand in hand. If you want to achieve optical orgasm, the software has to take advantage of hardware acceleration down to the last millimeter.
So the goal with Eevee is not to deliver a faster viewport but a much more photorealistic one, using real-time rendering technology. But in order to do that, the hardware technology that boosts performance has to be taken advantage of.
Unfortunately for Eevee, it is tied to OpenGL, a technology that is slowly coming to its end as the main driving force of open-source graphics, and to an almost decade-old version of it at that, while the new technologies we have available, like Direct3D, Metal and of course the usual suspect, Vulkan, can easily outperform OpenGL ten times or more.
Ironically, Cycles exists in a different galaxy, because unlike Eevee it does take advantage of the latest technology, such as CUDA 9.1, which is not almost a decade old or even a year old, but 52 days old according to Wikipedia.
Of course, an initial goal of Blender was to be able to run on old hardware, which excused the old OpenGL releases and outdated viewport, but 2.8 is a new beast.
Well, first of all, Blender Internal is out, which means Cycles is the only offline render engine, which in turn means that when it comes to rendering, 2.8 is definitely not targeting older cards anymore. Of course Cycles can run without CUDA, but that makes things even worse for older computers, since it means even worse performance for the sake of being able to operate Blender in a barely usable state.
I have to say I’ve personally always found Blender’s approach to performance a bit confusing, to say the least, like it cannot make up its mind whether it wants to be old-hardware friendly or top-performance friendly. Unfortunately, it’s the usual case of not being able to have your cake and eat it too.
Sorry if this is a bit off the ongoing conversation, but has anyone tried the new light cache system from Clement Foucault? Is there any feedback about it?
Eevee is one thing, the workbench engine is another!
Workbench is a sliding scale of visualisation and performance. Textures, shadows, cavity, speculars, etc. can be enabled if needed for the task at hand, or disabled for faster performance.