I’ve been examining Blender benchmark results lately, and everything is measured in seconds per render (“shorter is better”). For years, I’ve been boiling my renders down to “frames per minute,” and I’m curious if anyone else is doing the same. You find this number by dividing 60 [seconds] by how many seconds it took to render. Thus:
10 seconds to render = 6.00 fpm (fastest)
60 seconds to render = 1.00 fpm
90 seconds to render = 0.67 fpm
120 seconds to render = 0.50 fpm (slowest)
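If you want to do the conversion in bulk, it’s a one-liner. Here’s a quick Python sketch (the `fpm` function name is just my own shorthand):

```python
def fpm(render_seconds):
    """Convert a per-frame render time in seconds to frames per minute."""
    return 60 / render_seconds

# Reproduce the table above:
for secs in (10, 60, 90, 120):
    print(f"{secs:>3} seconds to render = {fpm(secs):.2f} fpm")
```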
In a real-world example, I was looking at some Mac options, using the BMW benchmark as the main reference for CPU rendering. An iMac Pro took 201 seconds, and a Mac Mini took 318 seconds, thus:
iMac Pro 3.0GHz Xeon W 10-Core = 0.30 fpm
Mac Mini 3.2GHz i7 6-Core = 0.19 fpm
In my opinion, “fpm” makes it quicker to see that it would only take two Mac Minis to outpace the iMac Pro, at 0.38 fpm (0.19 + 0.19).
Likewise, a custom PC with a Threadripper 2990WX 32-core delivers a whopping 0.73 fpm.
Again, I can quickly determine that it would take four Minis to match that (0.76 fpm), which is actually fairly close in cost, depending on what else you’d be using those machines for. (That’s four medium-end workstations!)
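The “how many Minis?” question is just a ceiling division on the fpm numbers. A small sketch, using the benchmark times quoted above (the 0.73 fpm Threadripper figure is taken directly from the post rather than recomputed from a seconds value):

```python
import math

def fpm(render_seconds):
    """Convert a per-frame render time in seconds to frames per minute."""
    return 60 / render_seconds

mac_mini_fpm = fpm(318)  # BMW benchmark: 318 s, ~0.19 fpm
target_fpm = 0.73        # Threadripper 2990WX figure from above

# How many Minis does it take to at least match the target?
minis_needed = math.ceil(target_fpm / mac_mini_fpm)
print(minis_needed)  # 4
```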
These are useful metrics for me, especially when rendering animations. I can say, “I need this done in 24 hours; there are 9,000 frames, and 24 hours is 1,440 minutes, so I need to be rendering at least 6.25 frames per minute.” After a few tests I know roughly how fast each computer will render that project in fpm, and by adding those together I get my total fpm. This tells me whether I need to optimize my render to hit the deadline, or if I’m in good shape to reach it in time.
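The deadline check works out to a couple of lines as well. A sketch of the arithmetic, where the per-machine fpm values are just the example figures from this post standing in for real test renders:

```python
def required_fpm(total_frames, deadline_hours):
    """Minimum aggregate frames-per-minute needed to finish on time."""
    return total_frames / (deadline_hours * 60)

# 9,000 frames in 24 hours:
need = required_fpm(9000, 24)
print(need)  # 6.25

# Sum each machine's measured fpm for this project
# (example figures: Threadripper, iMac Pro, Mac Mini):
farm_fpm = sum([0.73, 0.30, 0.19])
print(farm_fpm >= need)  # False -> time to optimize the render
```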