Hi, yes, originally I had everything hidden in my original project. Then I created a new project (the one I uploaded) to demonstrate the issue, but forgot to make things invisible. You can try it yourself: the performance is the same whether things are visible or hidden. In my case, with an Intel i5 and a GTX 1060, it's 4 FPS.
In case you didn't notice, I updated my post earlier with the solution, which was to lower the Resolution Preview U value on each curve. Originally I thought it was a problem with Blender 2.8, but then I realized the problem was the user.
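For anyone hitting the same thing, that fix can also be scripted instead of done curve by curve. A minimal sketch (the helper name is mine; in Blender's Python API the "Resolution Preview U" field is `Curve.resolution_u`):

```python
# Hedged sketch: lower "Resolution Preview U" on every curve at once.
# `lower_resolution` is a hypothetical helper, written generically so it
# also runs outside Blender; in Blender you'd pass bpy.data.curves.
def lower_resolution(curves, value=6):
    """Set resolution_u (Resolution Preview U) on each curve to `value`."""
    for curve in curves:
        curve.resolution_u = value
    return curves

# Inside Blender's Python console you would apply it to all curves with:
#   import bpy
#   lower_resolution(bpy.data.curves)
```

The default `resolution_u` is 12; how low you can go before curves look faceted depends on the model.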
This is simply laughable. Yes, its viewport and animation playback performance has certainly improved drastically over the years (as has Max, my main program), but Maya is riddled with crazy legacy bloat that can make operating the program feel like being on a knife edge at times. There are so many little buffers, caches, histories, and residuals that have to be cleared, flushed, and deleted, or at any stage the whole program just crashes. To say that "every part of the software is fully optimised" is pure fantasy. Maya probably has the oldest architecture of any of the main DCCs.
Sadly, I have to agree. I like Maya for some of its features, but I experienced so many crashes while rigging characters. I think one of its problems is that everything is over-complicated; even doing simple stuff is tedious because most features are node-based (Hypergraph), making it a pain for simple tasks. And yes, the history system may be good for some tasks, but a pain for others.
Sometimes, then, it is not only a problem of Maya or another big DCC in itself, but also of drivers, hardware, and so on: many OpenGL API versions and variations, etc.
It's hell for the Blender devs, dealing with old and new drivers, and I guess it's also true for closed-source devs: staying compatible across platforms while being linked to external development libraries, interface libraries, and external DLLs.
It is not easy at all to maintain the maximum possible stability.
VSE and ffmpeg performance (mostly investigating how to better optimize them).
In the comments:
This ffmpeg setting will enable hardware acceleration: -hwaccel auto. This one, -tune fastdecode, allows much faster playback in Blender (going from 20 fps to 30 fps on my low-spec computer with an h.264-encoded file). And this one will use all threads: -threads 0. I don't know which of these are enabled in the Blender build of the ffmpeg lib, but I know from testing alternative codecs for proxies that these settings make a difference in encoding/decoding speed (though some of them may be h.264-only).
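Put together, those flags would sit in an ffmpeg invocation roughly like this. A sketch only: the file names are placeholders, and whether each flag helps depends on the ffmpeg build and codec (-tune fastdecode, for instance, is an x264 encoding option).

```python
# Sketch: assemble an ffmpeg command line using the settings mentioned
# above. File names are placeholders, not real project files.
def build_ffmpeg_cmd(src, dst):
    return [
        "ffmpeg",
        "-hwaccel", "auto",      # pick a hardware decoder if one exists
        "-threads", "0",         # 0 = use all available CPU threads
        "-i", src,
        "-c:v", "libx264",
        "-tune", "fastdecode",   # bias the x264 encoder toward cheap decoding
        dst,
    ]

print(" ".join(build_ffmpeg_cmd("input.mp4", "proxy.mp4")))
```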
I suspect that in the coming months we will see interesting leaps forward in performance, piece by piece,
because they will adopt a multi-threading system different from the one currently built into Blender.
Rather than trying to make our own optimized task scheduler implementation, we should switch to TBB. If we use it everywhere, we can have a single thread pool for Cycles, Mantaflow, OpenVDB and the rest of Blender. It could also help improve performance in some areas.
Bastien Montagne's comment:
I hope that this partial switch to tbb does not bite us in some unexpected ways… would have rather switched entirely…
But in general, yes, I think switching to tbb would avoid us spending a lot of time re-inventing the wheel, since there's already a state-of-the-art lib available for that.
I posted the attached wiki, didn’t you see it?
TBB is a parallel-computing (multi-threading) library developed by Intel.
They are currently verifying whether they can use it instead of their built-in library. Since it is already well developed and effective, they would save a lot of time "reinventing the wheel" and could devote themselves directly to optimization and to making the best use of it in their tools.
Well, at the time I wrote the comment, the list consisted above all of many "annotations" concerning Blender performance. The list has continued since then, because it records all the activities being carried out; so for now there has only been bug fixing, and no added features (because of the Blender Conference).
By the looks of it, it should offer some speedup with OpenSubdiv by reducing overhead (which, according to developer discussion, seems to be a major performance pain point).
Bought a new PC: R7 3700X, 64 GB RAM, 2070 Super, new M.2 drive. Performance is still bad, especially when using subdivision surface or undo. It's hard to work on highly detailed models.
In 2.79, modeling (with subdiv) and undo were faster, but object mode with hundreds of meshes was very problematic.
In 2.81, modeling (with subdiv) and undo are really slow, but there is no issue with having hundreds of complex objects in the scene and operating in object mode.
Since Blender wants to be compared to software like Maya or 3ds Max, I think this issue should be treated as the number one thing to fix. I've already heard complaints from friends who tried 2.81, asking me what's wrong.
@edit
I don't want to be impolite. I just want to say that, from a professional perspective, this issue keeps Blender slightly in the background. If more people notice it, they will bounce off the software as fast as they wanted to test it.
I'm really happy with what Blender 2.81 is right now, and I'm looking forward to the future very optimistically. Overall, great work, devs!