Hi, yes, in my original project everything was hidden. Then I created a new project (the one I uploaded) to demonstrate the issue, but forgot to hide things again. You can try it yourself: the performance is the same whether objects are visible or hidden. In my case, with an Intel i5 and a GTX 1060, it's 4 FPS.
In case you didn't notice, I updated my post earlier with the solution, which was to lower the Resolution Preview U value on each curve. Originally I thought it was a problem with Blender 2.8, but then realized the problem was the user.
This is simply laughable. Yes, its viewport and animation playback performance has certainly improved drastically over the years (as has Max, my main program), but Maya is riddled with crazy legacy bloat that can make operating the program feel like walking a knife edge at times. There are so many little buffers, caches, histories, and residuals that have to be cleared, flushed, and deleted, or at any stage the whole program just crashes. To say that 'every part of the software is fully optimised' is pure fantasy. Maya probably has the oldest architecture of any of the main DCCs.
Sadly, I have to agree. I like Maya for some of its features, but I experienced so many crashes while rigging characters. I think one of its problems is that everything is overcomplicated: even simple stuff is tedious, because most features are node-based (Hypergraph), making it a pain for simple tasks. And yes, the history system may be good for some tasks, but a pain for others.
Sometimes, then, it is not only a problem with Maya or the other big DCCs themselves, but also with drivers, hardware, and so on: many OpenGL API versions and variations, etc.
It's sheer hell for the Blender devs, supporting both old and new drivers, and I guess the same is true for closed-source devs: being multi-platform, and being linked to external development libraries, interface libraries, and external DLLs,
it is not easy at all to maintain the maximum possible stability.
VSE and ffmpeg performance (mostly investigating how to make them better optimized).
In the comments:
This ffmpeg setting, -hwaccel auto, enables hardware acceleration; this one, -tune fastdecode, allows much faster playback in Blender (going from 20 fps to 30 fps on my low-spec computer with an H.264-encoded file); and this one, -threads 0, uses all threads. I don't know which of these are enabled in the ffmpeg lib that Blender builds with, but I know from testing alternative codecs for proxies that these settings make a difference in encoding/decoding speed (though some of them may be H.264-only).
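To make the comment concrete, here is a small sketch of how those three flags could be combined into a single ffmpeg command line, e.g. when generating a fast-decoding proxy file. The file names are placeholders, and ffmpeg must be on your PATH for the command to actually run; this only assembles and prints the invocation.

```python
# Assemble the flags mentioned above into one ffmpeg invocation.
# Note: -hwaccel is an input option, so it must come before -i;
# -tune fastdecode is an x264 encoder option, so it comes after.
def build_ffmpeg_cmd(src, dst):
    """Build an ffmpeg command using the speed-oriented flags above."""
    return [
        "ffmpeg",
        "-hwaccel", "auto",     # let ffmpeg pick a hardware decoder if available
        "-i", src,
        "-threads", "0",        # 0 = use all available threads
        "-tune", "fastdecode",  # x264 tuning that favors playback/decode speed
        dst,
    ]

cmd = build_ffmpeg_cmd("input.mp4", "proxy.mp4")
print(" ".join(cmd))
# To actually transcode: import subprocess; subprocess.run(cmd, check=True)
```

Whether each flag helps depends on the codec (as the comment notes, -tune fastdecode is specific to H.264/x264) and on the decoders compiled into the ffmpeg build.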
I suspect that in the coming months we will see, piece by piece, interesting leaps forward in performance, because they will adopt a multi-threading system different from the one currently built into Blender. Brecht's comment:
Rather than trying to make our own optimized task scheduler implementation, we should switch to TBB. If we use it everywhere, we can have a single thread pool for Cycles, Mantaflow, OpenVDB, and the rest of Blender. It could also help improve performance in some areas.
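TBB itself is a C++ library, so the following is only a rough Python analogy of the "single shared thread pool" idea in that comment: every subsystem submits work to one common pool instead of each spinning up its own threads, so they stop oversubscribing the CPU against each other. The subsystem names and task functions are invented for illustration; this is not Blender or TBB code.

```python
# Analogy of a single process-wide worker pool shared by all subsystems
# (Cycles, Mantaflow, OpenVDB, ... in Blender's case).
from concurrent.futures import ThreadPoolExecutor

SHARED_POOL = ThreadPoolExecutor(max_workers=4)  # one pool for everyone

def render_tile(i):   # stand-in for a render-engine task
    return ("tile", i * i)

def sim_step(i):      # stand-in for a physics-sim task
    return ("sim", i + 100)

# Both subsystems queue work on the same pool; the pool arbitrates threads.
render_jobs = [SHARED_POOL.submit(render_tile, i) for i in range(4)]
sim_jobs = [SHARED_POOL.submit(sim_step, i) for i in range(4)]

results = [f.result() for f in render_jobs + sim_jobs]
print(results)
```

The design point is that with one arbiter, a render and a simulation running at the same time share the machine's threads instead of each assuming it owns all of them, which is what a library like TBB provides for C++ at much finer granularity.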
Bastien Montagne's comment:
I hope that this partial switch to TBB does not bite us in some unexpected ways… I would rather have switched entirely…
But in general, yes, I think switching to TBB would save us from spending a lot of time re-inventing the wheel, since there's already a state-of-the-art lib available for that.