I just created a post on Right Click Select about implementing out-of-core rendering inside of Blender. That means no more VRAM limit: when the render goes over VRAM, the data is stored in system memory.
I know my post is a bit flashy; I just want as many upvotes as possible. I think the VRAM limit is THE biggest flaw of Blender, because it literally renders expensive pieces of hardware (GPUs) useless for big scenes and other VRAM-hungry workflows. I know Redshift is coming to Blender and Octane is already there, but they don't/won't use Cycles materials and probably won't be compatible with Eevee, so it's quite a big turn-off for me.
Please consider upvoting; it will help every Blender user who renders on GPU.
I know this feature is on the "important features that will be implemented soon" list, but it's been three years now. I also know there's a patch out there that tries to do it, but it's not as effective as what Redshift or Octane do.
Cycles has had basic out-of-core support in the nightly builds for quite a while now. The long dev cycle for 2.8 means that there was work done on Cycles almost a year ago that still isn't in a release build. The support isn't as good as Redshift's, since it doesn't support mipmapping/tiling of textures, so there's no way to partially unload a texture, but at least the render doesn't puke because you had 75 MB too much stuff.
The link I posted was the commit where it got checked into master. It's in the nightlies now and will be in 2.80. Like I said, the way it handles out-of-core is pretty simple/clumsy and has a bigger performance penalty than it maybe could have, but it WILL prevent your render from failing because you slightly exceeded VRAM.
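To make the "simple/clumsy" part concrete: since textures can't be tiled or mipmapped, placement is an all-or-nothing decision per buffer. Here's a toy Python sketch of that idea (not actual Cycles code; the class and budget numbers are invented for illustration): whole buffers stay in VRAM until the device budget is exhausted, and anything that doesn't fit spills entirely to host memory.

```python
class OutOfCoreAllocator:
    """Toy model of whole-buffer spill: a buffer lives entirely on the
    device or entirely on the host, with no partial unloading."""

    def __init__(self, device_budget):
        self.device_budget = device_budget  # bytes of simulated VRAM
        self.device_used = 0
        self.placement = {}                 # buffer name -> "device" or "host"

    def alloc(self, name, size):
        # No mipmapping/tiling: if the whole buffer doesn't fit in the
        # remaining VRAM budget, it goes to host memory as one piece.
        if self.device_used + size <= self.device_budget:
            self.device_used += size
            self.placement[name] = "device"
        else:
            self.placement[name] = "host"   # slower access, but the render survives
        return self.placement[name]


alloc = OutOfCoreAllocator(device_budget=8 * 1024**3)  # pretend 8 GiB card
alloc.alloc("geometry", 6 * 1024**3)      # fits -> "device"
alloc.alloc("big_texture", 4 * 1024**3)   # would exceed 8 GiB -> "host"
```

This is why the performance penalty can be large: one texture that's slightly too big gets every access routed over the PCIe bus, instead of keeping its hot portion resident.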
Well, it's actually broken right now; that's its main limitation: it doesn't work. I reported it as a bug a while back but was just ignored, I'm afraid. Which means don't count on it working anytime soon. I hoped that with Brecht full time now some of the bug backlog would get fixed, but no sign of that.
Here's the bug report page if you want to add the bugs you're experiencing; maybe it will bump the report into being looked at.
It would be good to be able to clearly see somehow (a red warning icon, for example) when the RAM fallback is being used during a GPU render. That way, if the scene is barely exceeding VRAM capacity, the user will realize the feature kicked in and can try to reduce VRAM use by modifying the scene.
Or perhaps an option where the user can choose whether to use the feature, enabling or disabling it.
I don't know if I've been clear. For example, without the feature you clearly know when your scene exceeds VRAM capacity, because you get an error message, so you can try to modify the scene.
With the feature, you don't know exactly whether the scene exceeds VRAM capacity. Users might prefer to modify the scene so it fits in VRAM, instead of using the feature and paying the render time penalty.
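The check being asked for above is simple in principle: compare an estimated scene footprint against the card's VRAM before rendering. A hypothetical sketch (the function name and thresholds are invented; Blender doesn't expose this as shown):

```python
def vram_status(scene_bytes, vram_bytes):
    """Hypothetical pre-render check: flag when a GPU render would have
    to rely on the out-of-core fallback to system memory."""
    if scene_bytes <= vram_bytes:
        return "ok"           # everything fits in VRAM
    return "out-of-core"      # show a red warning icon: host RAM will be used


GIB = 1024**3
vram_status(7 * GIB, 8 * GIB)   # fits -> "ok"
vram_status(9 * GIB, 8 * GIB)   # exceeds VRAM -> "out-of-core"
```

Surfacing that status in the UI would give users the choice the posts above describe: shrink the scene to stay fast, or knowingly accept the fallback penalty.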
Hey, don't forget about GPU-based users. Having four GPUs with the promised ~30% performance drop is still better than running a render on a quad-core CPU. Out-of-core rendering is a must-have for any GPU-based workstation.
What a coincidence, I was just reading up on this topic when I stumbled upon this thread. I'm actually quite curious about the current state of out-of-core rendering (2.8 master and also the 2.81 dev branch). I know it's not that important to a lot of people, but it still seems pretty essential to me.