Viewport Animation Performance?

Hello :slight_smile:
I have a basic performance question. I have a 3.4GHz Athlon II X3 processor, but animation in the viewport only uses one core. Is there a way to change this? The only solution I know of to increase performance would be to build a computer with an Intel Core i7 overclocked to 4.5GHz or so (and that’s not practical for my needs). I know I could turn on frame skipping, but that doesn’t really increase performance, it just trades one loss for another.

Thanks.

The viewport makes heavy use of OpenGL, which usually runs on your video card, so I don’t think the CPU is that relevant. AMD CPUs also aren’t as strong at multi-threading, but I don’t think that’s reason enough to blame your CPU for the performance.

Hm… Interesting.
I did some tests…
With a 268k-vertex, 450k-polygon character model:
viewport orbiting seems low ~15 fps (negligible CPU use of course)
walk cycle animation is 0.3 fps, and viewport has an equally slow response (one CPU core at 100%)

With the same character reduced to 20k vertices and 36k polygons:
viewport orbiting flawless
same animation is ~10 fps and viewport has an equally moderate response (all CPU cores moderately in use)

The tests clearly verify your claims, which makes things all the more curious, because I experienced contrary results yesterday with around 200k polygons. One CPU core was maxed, the animation ran at about 4 fps, and the viewport was responding with flawless framerates (suggesting the GPU was not stressed). My best guess is that my system was being flaky for a short time, throttling my CPU enough to make it the bottleneck and giving the GPU enough “spare time” between frames to refresh the viewport fluidly.

Anyway, I appreciate your help. :slight_smile:
I’m glad I was able to diagnose the problem (as not persistent). Looks like I may need to consider a new Linux OS, though.

Don’t mix different topics. I know they can look similar, but talking about the viewport is one thing and talking about a high-resolution mesh is another.

Basically, when you move your viewport, OpenGL is involved; when you modify your mesh, your CPU is involved; and when you have a high polycount, your RAM and your memory controller are heavily used.

Open the task manager for your OS, add an object to the scene, subdivide it several times, and you will see (there is a small script after this list that automates the test):

  • the CPU usage goes up only during the subdivide itself; after that it drops back to a low level
  • after the subdivision, more RAM is occupied
  • when you act on your viewport, the data is pulled back out of RAM, which implies heavy use of the memory controller
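
For what it’s worth, here is a minimal sketch of that test as a script, assuming Blender’s bundled Python API (bpy); it only automates the add-and-subdivide step, so you can keep your eyes on the task manager while it runs:

    # rough sketch, assuming Blender's Python API (bpy): add a cube,
    # subdivide it several times, and print the resulting vertex count
    import bpy

    bpy.ops.mesh.primitive_cube_add()        # add a cube to the scene
    obj = bpy.context.active_object

    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    for _ in range(6):                       # the CPU spike happens here
        bpy.ops.mesh.subdivide()             # 6 levels -> ~24k faces on a cube
    bpy.ops.object.mode_set(mode='OBJECT')

    # from here on the extra geometry just sits in RAM
    # until the viewport asks for it again
    print("vertices:", len(obj.data.vertices))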

Oh, okay. Those are dynamics I’ve never had to deal with or even consider, especially RAM speed. I’m using 2x 4GB of DDR3-1600 RAM in dual-channel mode, btw.

In wireframe mode, does the model put significantly less load on the GPU?

It would seem my GPU is strong enough that the memory controller, rather than the GPU, would bottleneck my system in most cases, but that doesn’t seem right. In what cases would my GPU be the bottleneck?

Basically I’m trying to figure out how to test the limitations and capabilities of my computer for each of the three factors so that I could work around them.
I did a few searches on these factors as they relate to 3D modelling, but it looks like I’ll have to do some digging to find what I’m looking for (and tricks to work around them).

You can test the limits of your machine by considering each of the possible uses one by one.
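
For the animation side, here is a rough sketch of how you could put a number on the playback rate instead of eyeballing it, assuming Blender’s Python API and a frame-change handler (run it from the text editor, start playback, and watch the console):

    # rough sketch, assuming Blender's Python API: log the real playback
    # frame rate by timing the gap between consecutive frames
    import bpy
    import time

    _last_time = [None]

    def report_fps(scene, *args):
        now = time.time()
        if _last_time[0] is not None and now > _last_time[0]:
            fps = 1.0 / (now - _last_time[0])
            print("frame %d: %.1f fps" % (scene.frame_current, fps))
        _last_time[0] = now

    bpy.app.handlers.frame_change_post.append(report_fps)
    # start animation playback; the measured fps is printed to the console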

If you need fast access to what is allocated in RAM, you need a fast memory controller and fast RAM modules.

Everything you see, everything that appears on the monitor, is produced by a rendering process, and that process is usually handled by the video card.
Fewer things to render = faster response; points, lines and faces are all things to render.
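
One practical example of that idea, as a sketch that assumes the active object already carries a Subdivision Surface modifier: keep the render-time subdivision high, but give the interactive viewport a much lighter mesh.

    # sketch: lighten the viewport without touching final render quality,
    # assuming the active object has a Subdivision Surface modifier
    import bpy

    obj = bpy.context.active_object
    for mod in obj.modifiers:
        if mod.type == 'SUBSURF':
            mod.render_levels = 3   # levels used for the final render
            mod.levels = 1          # lighter level drawn in the viewport

You can of course do the same thing from the modifier panel; the point is simply that the viewport then only has to draw the lower-level mesh.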

You also have to consider three basic and really important things:

  • your hardware without your software is useless (i.e. don’t think about the hardware without considering the software)
  • the software drives the hardware
  • your hardware configuration defines the “top level” you can reach, but the software usually lowers that level significantly: part of your computing power goes to running the OS and other programs, and with the current quality standard of drivers a good part of it is simply wasted, so in the end it is the software environment that defines your real performance level.

You also have to consider explicit limitations that your OS may impose. For example, under Windows the TCP/IP sockets are limited by default, so your networking experience could be worse than what you may experience under a GNU/Linux distribution; no matter how well written your networking drivers are, and no matter how fast your card is, you can never overcome a limitation imposed by the OS.

In my experience the biggest software-related problems are always the drivers, especially the GPU driver and the chipset driver; the biggest OS-related problems are memory management and how the kernel handles the drivers.

Honestly, I use Ubuntu on my workstation and I experience a better workflow than under Windows, especially when I have to read/write a large amount of data from/to RAM (the Linux kernel has better memory management), and I don’t have an endless number of software components in the way like UAC, background updates, reporting tools, etc.

You have to tune your software setup, and take care of it, to reach a really good level of performance.

You should also consider that different software can do the same thing in different ways, and this can make a huge difference; think about ZBrush sculpt mode vs Blender sculpt mode: ZBrush can easily handle a high-poly mesh where, on the same configuration, Blender can’t reach a decent polygon count.

Try your configuration under Ubuntu and see if you get better behaviour.