Blender's performance has gotten better. Computers have also become faster. I believe Blender needs to implement speed control on some of its functions, such as sculpting. Sculpting is fine with current computing power, but in the future Blender could suffer from being too fast. Many great programs are gone now because of speed problems. They were too fast.
Is this some kind of sarcasm I am simply unable to detect?
I think the concern is that really quick, close-together update ticks could lead to massive instability, which has happened. I'm no engineer, and this is deep-level stuff.
Sculpting has already been capped very recently to save performance.
I see where you are going, but that sculpting cap was to get higher 'smoothness' for the end user, wasn't it? I mean, it's not like it actually prevents crashing (AFAIK); it's just there to avoid wasting computing power, if I'm not mistaken. More computational power wouldn't make things worse in that respect, though, it would just be wasted.
That's just one example of a potential multitude of things. I'm just coming from the logic of old PC games breaking when they run at more than 60 fps.
I know Blender is different, but that experience taught me that problems can arise from these sorts of things.
Old PC games broke at 60+ fps because animations were specifically tied to that frame rate, so if the game ran at 120 fps it would run twice as fast. It wasn't that it was unstable; it was just a design choice that didn't scale with changing fps.
I stand by the mantra that a computer can never be fast enough, although there is now an obvious cap on per-core/thread performance. I'm more interested in the rise of the RISC-V infrastructure and in how widely distributed CPU/GPU on-demand cloud (hate that word) services will be integrated into and influence the development of Blender, especially AI-assisted workflows and the decreasing advantage of large studios that have huge render farms…
I remember years ago, in one program (unrelated to Blender), you could click and hold a vertical separator without moving it to cause 100% CPU usage. Holding it long enough would eventually crash the program, supposedly because the main application was starved of necessary CPU cycles, or some kind of memory leak. In any case, it was a HUGE edge case that would likely not be worth the time to fix.
Your post reminds me of those games whose installers showed slideshows intended as time killers while the game installed very slowly to your hard drive. But with today's SSDs, the slideshows are unreadable because the game installs in less than 10 seconds.
I have spent a lot of time in Unreal Engine fooling around with tick-rate-based code versus time-based code. Sometimes you can produce code that works flawlessly at 60 fps but breaks at 15 fps or 144 fps. Then you have to rewrite your code to be based on real-world time rather than ticks, and this sometimes adds complexity.