Anyone know what’s up?
Did I miss a setting, or am I not understanding them? There are no advanced materials in the scene, no modifiers, no animation… just a few imported CAD models and an HDRI sky…
Render with CPU then render with the GPU and compare the timing.
If you click the arrow next to Copy, you can select a different sensor, like Compute_0, which will give you better information.
It’s using 5%. If it’s not a complex scene, it may not need to go over 5%.
Thanks everyone for the information!
Ok, yes, there’s definitely a huge, huge difference… what confused me is that my 3090 usually sounds like an airplane about to take off, but not here.
Yes, thank you, it wasn’t the ones you showed, but it led me to finding it eventually.
It’s not a complex scene, but takes a very long time to render (about 20 minutes at 4K), which is also what confused me (in addition to the 3090 fans not spinning up).
Is there any reason you are using such a small tile size? The Cycles X refactor really reduced the need for small tiles; increasing the tile size back to its 2048 default might improve performance as well.
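(For anyone who wants to script this: the same tile settings can be changed from Blender’s Python console. A minimal sketch, assuming Blender 3.x’s `bpy` API; it only runs inside Blender:)

```python
import bpy  # only available inside Blender

scene = bpy.context.scene
scene.cycles.use_auto_tile = True  # turn tiled rendering on
scene.cycles.tile_size = 2048      # the Cycles X default tile size
```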
I’m actually experimenting with that right now, and disabling tiling is even more confusing, because that stresses the CPU quite a bit, while leaving the GPU still at 9%…
Still, the frame time seems to be about 20 minutes.
Disable the CPU under devices if you are using an old CPU like that. I disable my CPU and only use my 3080 for Cycles, since the 3080 is still much faster than the 12900K I have here. Hybrid rendering is much slower with the current Cycles and recent RTX cards.
And one assumes you have the 3090 selected under OptiX in the preferences with no CPU selected for rendering.
The CPU will just slow it all down and OptiX is WAY faster on RTX cards compared to CUDA.
Yes, the CPU seems already to be disabled:
I know the ocean modifier consumes a lot of CPU in animations, but there’s nothing like that here (and it’s only a single frame), so I’m still really lost as to why the CPU does anything at all during this render… but then again, the CUDA cores seem maxed out, as we’ve now seen, so maybe it’s correct.
You have CUDA selected there; instead, click the OptiX tab and have just the 3090 selected. It will then render twice as fast.
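If you prefer to set this from a script instead of the preferences UI, the device selection can be done the same way. A minimal sketch, assuming Blender 3.x’s `bpy` API (it only runs inside Blender):

```python
import bpy  # only available inside Blender

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"  # switch the backend from CUDA to OptiX
prefs.get_devices()                  # refresh the device list
for dev in prefs.devices:
    dev.use = (dev.type == "OPTIX")  # enable only the RTX GPU, no CPU
```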
Nice, looks like you’re right… but what’s even more confusing now is that it dropped the GPU utilization from 9% to 3%…
Compute_0 and Compute_1, which @SterlingRoth got working, are still idle for me…
Task Manager is rubbish for GPU monitoring, for the most part.
Instead, get GPU-Z (https://www.techpowerup.com/gpuz/); its Sensors tab will give you a much better picture of what is really going on.
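If you have the NVIDIA driver installed, you can also poll utilization from the command line with `nvidia-smi`; a sketch (the query flags below are standard, the one-second polling interval is just a suggestion):

```shell
# Print GPU utilization and VRAM use once per second while rendering
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1
```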
Ok, if that’s more accurate, then I guess we’re good…
Thanks again!