I’m running background batch rendering using CUDA, and my system is almost non-responsive while the rendering is running.
I want to limit the load Blender puts on my system so that I can still do other work.
I tried using nice or cpulimit, but neither seems to help with the GPU load. Obviously they affect the CPU side of the process, while the majority of the work is happening on the GPU.
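For reference, this is the kind of thing I tried (scene.blend stands in for the actual file, and the exact numbers are just examples; both commands only throttle the CPU side):

    nice -n 19 blender -b scene.blend -a
    cpulimit -l 50 -e blender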
The PC I’m using has a built-in Intel GPU, but I don’t know how to use it for display while keeping the Nvidia GPU available for rendering.
Is there a way to prioritize GPU tasks, or to somehow keep my system responsive while the GPU is used for rendering?
I don’t care if the rendering takes twice as long, I just want to get rid of the system lag.
I’d love to be able to limit Blender’s GPU load to 75%, so it leaves 25% of the GPU’s time every second for other stuff (like drawing the system interface) without hiccups.
I know it’d be best to have a separate machine for this kind of server rendering, but that’s not possible right now, and I don’t want to be locked out for 5 minutes every time a rendering job starts.
Hi, it is not possible to limit GPU usage at the moment; Cycles tries to use 100% of the GPU.
I used the Intel HD 4300 from my i5 for some time alongside two Nvidia cards, and it worked out of the box when I connected the display to the board’s video output.
Depending on your board, you may have to switch from PCIe to onboard graphics if “Auto” is not working.
I never got this to work on Linux because the Intel drivers were not supported on openSUSE; I got no hardware acceleration on Intel for Blender.
Batch rendering may work, I never tried it, but Blender itself did not start.
I had a Windows partition at the time and there it was working.
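That said, the usual way on Linux to keep X on the Intel chip while leaving the Nvidia card free for CUDA is an explicit Device section in xorg.conf. A minimal sketch, assuming the Intel DDX driver is installed; the BusID is a typical value you should verify yourself:

    Section "Device"
        Identifier "IntelDisplay"
        Driver     "intel"
        BusID      "PCI:0:2:0"   # verify with: lspci | grep VGA
    EndSection

With the BIOS set to use the onboard output as primary, X then runs on the Intel GPU and Blender can still pick the Nvidia card as the CUDA device.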
Another way is to add a cheap card for display only; I recently bought a used GT 630 2 GB for 30 €, and it is more than enough for Blender.
while true ; do pkill -SIGSTOP blender; sleep 0.15; pkill -SIGCONT blender; sleep 0.05; done
It pauses and unpauses all Blender processes so they can only run 25% of the time (0.05 s running out of every 0.20 s cycle).
I’d also want to make this smarter, so it can apply to only a certain process, and automatically detect whether the user needs the extra GPU time or not (because why make things render 4 times longer when I’m not doing anything?); a rough sketch of that is at the end of this post.
This is more of a Linux than a Blender-related issue, I guess.
As far as I know, cpulimit does basically the same thing, but it has a feedback loop that regulates the paused/unpaused time ratio according to the process’s CPU usage. It didn’t work as expected for Blender’s GPU rendering though, so I’ll try designing an individual solution.
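Something like this might do, as a minimal sketch: it targets a single PID instead of every Blender process, and uses xprintidle (an X utility that prints the current input idle time in milliseconds; the tool and the 5-second threshold are my assumptions) to throttle only while I’m actively using the machine:

    #!/bin/sh
    # Throttle one process (given by PID) only while the user is active.
    # Requires xprintidle, which prints the X session idle time in ms.
    PID="$1"
    while kill -0 "$PID" 2>/dev/null; do        # loop while the process exists
        if [ "$(xprintidle)" -lt 5000 ]; then   # user input within the last 5 s
            kill -STOP "$PID"; sleep 0.15       # paused 75% of each 0.2 s cycle
            kill -CONT "$PID"; sleep 0.05       # running 25% of each cycle
        else
            kill -CONT "$PID"                   # user idle: full rendering speed
            sleep 1
        fi
    done

Started with something like ./throttle.sh "$(pgrep -o blender)" so it picks the oldest matching Blender process.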
I can’t delete this post. Actually I’ve had no luck so far. Rendering with the GUI doesn’t lock up my machine, but rendering from the terminal does, no matter what I do for now.
The problem I see is that no matter what I do with the Blender process, once it sends a task to the GPU, that task executes to completion and blocks the whole GPU until it’s done. Maybe setting up much smaller tiles will help make this time shorter, but it might add a big overhead.
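One thing I might try to test that is forcing much smaller tiles from the command line before the batch render starts, so each CUDA kernel launch finishes sooner and the SIGSTOP has a chance to take effect between tiles. A sketch, assuming a Blender version where Cycles still uses the per-scene tile_x/tile_y settings (scene.blend is a placeholder):

    blender -b scene.blend --python-expr \
        "import bpy; s = bpy.context.scene; s.render.tile_x = 64; s.render.tile_y = 64" -a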
Hi, this was implemented in Blender at some point but reverted because of problems with older cards.
Then forgotten.
I’ll try to poke the developer who was involved in the implementation.
Hmm, maybe it could still be there, with a warning that it may crash Blender or something? In the “Experimental” Cycles mode? I don’t even know if it could be used to prioritize regular GPU usage over the CUDA stuff anyway.