Blender Benchmark - opendata.blender.org


Has this project been abandoned? Nothing has been updated there for a long time and the latest version is still 2.93

It’s not abandoned. The Cycles X changes required more work on it, but they’re working on it. Judging by the commits in this branch, I’d even dare to speculate it’ll roll out in the next week, but so far there’s no official ETA.


Just a question… probably a bit stupid, but… :slight_smile:
I’m considering a new PC build and I’d like to get a 4080, but it’s very expensive, so I’m leaning toward a 4070. Since I’m not sure 12 GB of VRAM will always be enough, I’d like to understand how much of a difference I’ll see rendering on the CPU when the VRAM isn’t enough.
My question is: are the CPU and GPU scores unified? For example, if a CPU has a score of 500 and a GPU a score of 5000, could I say the GPU is 10 times faster than the CPU?

Exactly, as long as the GPU is only using its VRAM.

And as for the 12 GB, it really depends on what scenes you work on.


It can be somewhat dependent on the scene you are rendering, along with various other settings. Either way, once you run out of VRAM, render times will pretty much skyrocket.
In my own recent testing, I only tried the BMW scene on the CPU (a 5900X in my case), and my new 3080 Ti could render the same result about 15x faster.

Yes, obviously as long as VRAM is available…
My question arises because I’m asking myself: I don’t know if, in the future, I’ll have a job that goes over my VRAM. If that happens and I’m forced to use the CPU, how much slower will I be compared to the GPU (assuming, hypothetically, the GPU had the needed VRAM)?
So, if the score results are unified, I can know approximately how much slower my rendering will be on the CPU, and I can compare with the PC I’m using at the moment (both GPU and CPU results).
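If the scores really are comparable across devices, the estimate is just a ratio: render time scales inversely with score. A minimal sketch in Python (all scores and times here are hypothetical placeholders, not real benchmark results):

```python
# Back-of-envelope estimate: if benchmark scores are proportional to
# rendering throughput, render time scales inversely with score.
# All numbers below are hypothetical placeholders.

def estimated_cpu_time(gpu_time_min: float, gpu_score: float,
                       cpu_score: float) -> float:
    """Estimate CPU render time from a known GPU render time and both scores."""
    return gpu_time_min * (gpu_score / cpu_score)

# A frame that takes 2 minutes on a GPU scoring 5000 would take
# about 20 minutes on a CPU scoring 500.
print(estimated_cpu_time(2.0, 5000, 500))  # 20.0
```

It’s only a rough guide, since real scenes don’t always scale exactly with benchmark scores, but it gives an order-of-magnitude answer.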

Thank you for your replies

I mean, it does take a bit to go through 12 GB of VRAM: lots of high-poly objects (think a large environment with no instancing) and/or, more likely, a whole bunch of high-res textures.

While 16, 24 or more is nice, you can always render foreground, mid-ground and background in passes.

Either way, unless it’s a 64+ core CPU versus a low-end GPU, having to render on the CPU will always be orders of magnitude slower. But at the end of the day, only you can know how likely it is you’ll need CPU rendering.

Just to put things into more concrete terms.
Currently I have an i7-6700 and a GTX 1070.
If I compare this CPU and this GPU with the new ones I’m thinking of buying, I’d get about 6x with the CPU and 13x with the GPU. But if I compare my 1070 against the new CPU, I can see that my old GPU is actually a little faster… so I know that, if VRAM ever becomes a problem in the future, rendering on my new CPU will be almost like rendering on my old GPU :slight_smile:

I’ve been monitoring my VRAM on every render. I bought a Mac Studio with 128 GB of unified memory and was curious whether I would ever use more VRAM than I would have had on a 4090, which is 24 GB.

I just rendered an 8K poster with zillions of instances of geometry (Geometry Nodes), lots of 4K textures, and a bit of displacement. I came very close to 24 GB of VRAM (and 42 GB of app RAM). The same render at 4K used about half the VRAM.

Most of my 1920×1080 scenes right now hover around 10 GB of VRAM. I would find 12 GB limiting over the coming 5 years for scenes that use a lot of different PBR materials, like a shop with many products or a wide shot of a city.
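To get a feel for how quickly unique textures add up, you can estimate the in-memory footprint of one image. A back-of-envelope sketch (assuming uncompressed 8-bit RGBA, which is a simplification; the exact storage format depends on the renderer):

```python
# Rough VRAM footprint of one uncompressed texture.
# Assumes 8-bit RGBA, i.e. 4 bytes per pixel, no compression.

def texture_vram_mb(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    """Size of one uncompressed texture in megabytes."""
    return width * height * bytes_per_pixel / (1024 ** 2)

# A single 4K (4096x4096) RGBA texture:
print(texture_vram_mb(4096, 4096))  # 64.0
```

At roughly 64 MB per unique 4K RGBA texture, fewer than 200 of them would already fill a 12 GB card, before counting geometry or render buffers.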


Thank you for sharing your VRAM experiences :slight_smile:
I’ll keep all of this in mind for future reference.