We have a workstation that we have just upgraded from 2x 3090s to 2x 4090s. Our render times have come down, but not by as much as I expected. Could someone help me understand whether our benchmark performance is okay or slow?
Both 4090s have all power connectors attached. I've checked in nvidia-smi and they show the full 600 W power limit available. NVIDIA Studio drivers are up to date. Windows is fully up to date. BIOS is up to date. The Gigabyte 4090 VBIOS appears up to date. Not sure what else to check.
I can share any other information that may be helpful. We're just not sure if the performance improvement over our 2x 3090 setup is what it should be. I'd guess we're about 50% faster, but I was expecting more.
I can see the power draw for both cards, and they only reach ~280 W in the Blender benchmark. I'm still not sure whether it's just one card not working, or both cards working at half capacity.
Do you know if the Blender Benchmark score is correct? I thought 4090s scored over 12,000, unless I have my scores mixed up?
Ah okay, I didn't realise the scores were added together; I thought the peak score was the one recorded. Thank you very much.
I haven’t seen a way to test both together but I can check this evening when the workstation isn’t being used. Thank you very much for your help.
I see the 4090s are drawing ~280 W while rendering an animation. If a 4090 can draw closer to 450 W, is there a way to optimise to get a higher workload on the card (more power equals more work), or does it not work like that?
Rendering in Cycles might not push it to the max power.
There are also voltage limits and software ones. If your cards each score more than 12,000, then they are okay.
What are you rendering with, CUDA or OptiX? Are both GPUs selected in Preferences > System? Is the CPU selected too? I would try a heavy scene to compare 2x 3090 vs 2x 4090.
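If you want to double-check those settings without clicking through the UI, a small script in Blender's Python console can print and set the Cycles devices. This is a sketch against Blender 3.x's `bpy` API and only runs inside Blender (not a standalone interpreter); adjust the backend string to match your setup.

```python
# Run inside Blender (Scripting workspace, or: blender --python this_script.py).
# bpy only exists in Blender's bundled Python.
import bpy

cprefs = bpy.context.preferences.addons["cycles"].preferences
cprefs.compute_device_type = "OPTIX"   # or "CUDA" as a fallback
cprefs.get_devices()                   # refresh the device list

for dev in cprefs.devices:
    # Enable both 4090s; leave the CPU off (it cost ~5% frame time above)
    dev.use = (dev.type == "OPTIX")
    print(dev.name, dev.type, "enabled" if dev.use else "disabled")

# Make sure the scene itself is set to render on the GPU
bpy.context.scene.cycles.device = "GPU"
```

The printout should list both 4090s as enabled; if only one shows up, that narrows the problem down before you start benchmarking.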
Both GPUs (we deselected the CPU, which reduced frame time by about 5%).
I can get details on frame size, samples etc tomorrow.
The render speed seems better now that we've done more comparisons, but I'm still surprised by the power draw. Bullit's reply regarding code and voltage limitations makes sense (I understand it may not be perfectly optimised), but I was perhaps naively assuming they would be working harder.
Keen to learn more about optimisation. I can get more settings tomorrow from my colleague's workstation. They use it; I just helped build it.
Is there a way to share a setting file with you all to show how we have Blender configured?
A few things to look at. First, I don't think I'd use the benchmark tool; just download the scenes and open them in full Blender. That way you can check preferences, enable/disable one card or the other, make sure OptiX is being used, etc.
You will likely have to push up the render resolution of the test scenes; a single 4090 would rip through Classroom at 1080p so fast it would be a usage blip on a graph. Push the samples up as well, just because you can.
Then, for monitoring the GPUs, grab GPU-Z. That way you can see exactly how much power is being drawn by each card, utilization levels, etc. There is no reason Cycles shouldn't push both GPUs to near 100% usage at their rated power, so around 400-450 W.
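If you'd rather not install anything, nvidia-smi can log the same numbers from the command line. A small sketch: the query flags are standard nvidia-smi options, but the parsing helper and its field layout are my own assumption about the CSV output.

```python
import subprocess

# Standard nvidia-smi query: one CSV row per GPU, no header or units
QUERY = [
    "nvidia-smi",
    "--query-gpu=index,name,power.draw,utilization.gpu",
    "--format=csv,noheader,nounits",
]

def parse_smi(csv_text):
    """Parse nvidia-smi CSV rows into (index, name, watts, util_pct) tuples."""
    rows = []
    for line in csv_text.strip().splitlines():
        idx, name, power, util = [field.strip() for field in line.split(",")]
        rows.append((int(idx), name, float(power), int(util)))
    return rows

def sample():
    """Take one reading from all GPUs (requires the NVIDIA driver installed)."""
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
    return parse_smi(out.stdout)
```

Run `sample()` in a loop while rendering: if one card sits near 100% utilization and the other near idle, only one GPU is actually working; if both hover around 60% at ~280 W, something else (scene size, sync overhead) is the bottleneck.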
If things really don’t look right, maybe try one GPU in the system at a time, just to make sure that each GPU is fully working correctly.
Some important things to check first that no one mentioned (I also have 2 GPUs):
Use NVIDIA Studio drivers (you gain 10 to 20% in general rendering speed, at least).
Tile size? Auto tile size is bad when using 2 or more GPUs. Finding the most efficient tile size for your GPUs is an art no one wants to admit; it's scene- and hardware-dependent, and no, auto-tiling is rarely your friend, especially with multiple GPUs.
3D settings in the NVIDIA Control Panel: some settings are counterproductive, and overriding software settings is sometimes good and sometimes bad.
Also check how many screens you have and how they are connected to your GPUs! As crazy as it seems, connecting all your screens to one GPU is not ideal. Windows only uses one GPU for screen rendering, but the connections do matter. I have three screens: one on DisplayPort and one on HDMI on the first GPU, and the third on HDMI on the other GPU. This makes my whole system faster. (When you connect all your screens to the same GPU, you stress that GPU and lose performance, and the other GPU may also not be correctly detected for other tasks.)
Your CPU, BIOS and motherboard: now that you have 2x 4090s, your PCIe lanes, firmware and settings might be bottlenecking your hardware, and it's more noticeable than it was with 2x 3090s.
Good luck
Using multi-GPU systems is, as of this date, still not a very straightforward process.
For rendering animations, I think the fastest option on multi-GPU systems is running parallel Blender instances, each rendering a different range of frames on its own GPU.
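The parallel-instances idea can be scripted. The sketch below splits a frame range across GPUs and builds one background-render command per card; Blender's `-b`/`-s`/`-e`/`-a` flags and the `CUDA_VISIBLE_DEVICES` environment variable are real, but the `.blend` filename and GPU indices are placeholders you'd swap for your own.

```python
def split_frames(start, end, n):
    """Split the inclusive frame range [start, end] into n near-equal chunks."""
    total = end - start + 1
    base, extra = divmod(total, n)
    chunks, s = [], start
    for i in range(n):
        size = base + (1 if i < extra else 0)
        chunks.append((s, s + size - 1))
        s += size
    return chunks

def commands(blend_file, start, end, gpus):
    """One background-render command per GPU. CUDA_VISIBLE_DEVICES pins each
    Blender instance to a single card (honoured by the CUDA/OptiX backends)."""
    cmds = []
    for gpu, (s, e) in zip(gpus, split_frames(start, end, len(gpus))):
        cmds.append(
            f"CUDA_VISIBLE_DEVICES={gpu} blender -b {blend_file} "
            f"-s {s} -e {e} -a"
        )
    return cmds
```

For a 200-frame animation on two cards, `commands("scene.blend", 1, 200, [0, 1])` yields one instance rendering frames 1-100 on GPU 0 and another rendering 101-200 on GPU 1; launch both shells at once and each card stays fully loaded with no multi-GPU sync overhead.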