Check out the new addon for Blender called Turbo Tools. Its aim is to add tools that dramatically speed up various areas of Blender.
The first installment gives Blender real-time compositing, caching, resaving file output nodes without re-rendering, automatic render storage and recall, publishing, real-time automation of nodes and interactive performance settings to allow for super fast playback in the compositor (even while tweaking parameters)!
Thanks, the addon is really good and a great idea, but I’ve found some issues that would be good to fix… as it stands it isn’t usable in my case.
If you have a lot of render layers like I do (25 layers), you can run into two problems:
1- You lose the original connections, which are replaced by the cache nodes, and if you want to get them back you have to reconnect everything by hand, which is not good.
2- The cache nodes for all the render layers are created in one place, disorganised.
I think an option to revert to the original connections you had before caching is very important.
And the cache nodes should be positioned automatically next to each render layer.
Hi, thanks, I’m glad it’s proving useful. Version 2 is due imminently by the way, and it has some massive new features not related to the compositor. Thanks for the feedback. You can revert to the original connections by selecting the render layer node and then clicking cache/uncache. If it’s cached, this will remove the render layer cache node (and delete the cache files) and move the connections back (providing the render layer node still has the necessary sockets). If you don’t want to delete the cache files, be sure to back them up in a different directory first.
I’ve just checked the second issue you mentioned, and I can’t reproduce it. All render layer cache nodes are generated directly below the render layer node they belong to. Could you upload a cut-down file that demonstrates the cache nodes being generated in the same place, then send me the link to [email protected] and I’ll look into it.
I’ve also just improved the behaviour of uncaching render layer cache nodes. Rather than deleting the actual cache, it now only removes the cache nodes and moves the links back to the render layer node. It seemed a bit too risky that people could delete a full animation’s worth of rendered frames, and this way the cache can be automatically recreated without the need to re-render, allowing you to still create standard cache nodes upstream and change frames.
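To give a rough idea of what that relink step boils down to (this is just an illustrative bpy sketch with made-up names, not the add-on’s actual code), uncaching routes the cache node’s outgoing links back to the matching render layer sockets and then removes the cache node, leaving the cache files on disk untouched:

```python
# Hypothetical sketch, not the add-on's real implementation.
import bpy

def uncache_render_layer(scene, rl_node_name, cache_node_name):
    tree = scene.node_tree                      # compositor node tree
    rl_node = tree.nodes[rl_node_name]          # original Render Layers node
    cache_node = tree.nodes[cache_node_name]    # cache node that replaced it downstream

    # Re-route every link leaving the cache node back to the render layer
    # output of the same name, provided that socket still exists.
    for link in list(tree.links):
        if link.from_node is cache_node:
            source = rl_node.outputs.get(link.from_socket.name)
            if source is not None:
                tree.links.new(source, link.to_socket)

    tree.nodes.remove(cache_node)               # cache files on disk are left alone
```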
This’ll be in the next major feature release, 2.0, which should arrive in the next few days (possibly today). If you purchased from Gumroad you’ll get an email when it’s released; I think the Blender Market also sends out alerts.
And then I couldn’t get back to the original connections with the original nodes.
I have too many layers (30 now), so I can’t afford to lose the original connections, because sometimes I’m rendering previews at half resolution and other times I want to render at higher resolution. But cache/uncache doesn’t get me back to the initial state of the nodes.
Thank you very much for the update! It seems to have very good fixes, I will check it out!
This looks a bit too good to be true. Has anyone actually tried this? It’s been out a while but there’s not much talk at all considering the claims. I mean, a real-time compositor and renders that are almost a thousand times faster really seems too good to be true!
Hi, it’s actually slower to process than the standard denoisers (approx 10 seconds), but it provides results that could take 2 or 3 times longer to obtain with OIDN/OptiX from the render settings panel, or, if you have very detailed textures, up to 120x longer to achieve with OIDN/OptiX. The 960x figure is how long the kitchen scene takes to get clean without any form of denoising.
I only recommend that Turbo Render be used on complex scenes, because on scenes that can achieve a good result in under 20 seconds, the additional processing will be unnecessary and actually slow down the renders.
Hi sorry, I don’t get notifications on this site. Could you contact me via the support email if you’re still having problems, or is this the old problem with render layer cache nodes appearing in the wrong place? If so that’s fixed now.
Ah I see, so the render itself is the same? This is more of a better-but-slower, yet ultimately more efficient, denoising then?
And the compositor speedup is something else, I guess? Because seeing that working in almost real time looks like magic compared to the horribly slow default compositor. I’m really curious about this, I just don’t have the time to check it out myself right now.
That’s exactly right. The compositor speedup is the real powerhouse that makes it all work. When I initially started writing Turbo Render, I realised that what I wanted to do was impossible with the current compositor, due to there being no way to overwrite the render layer node’s data directly. So I set about coding a full caching suite, which initially was only going to be powerful enough to enable Turbo Render to work, but it turned into a bit of a monster. In the loosest possible terms it’s similar to Nanite in Unreal 5, in so much as it streams render results straight from the cache files on the hard drive, which are generated during rendering, effectively meaning you can play back full animations and never use more than a few GB of RAM at a time. All the other stuff, such as real-time caching, publishing, resaving of file output nodes without re-rendering, the secondary cache for better performance etc., all grew out of what was going to be pretty simple when I started.
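For a rough idea of the streaming side (again just an illustrative bpy sketch under my own assumptions, with hypothetical names, not the actual implementation), pointing a compositor Image node at a cached EXR sequence lets frames be pulled from disk as the frame changes, rather than keeping every render result in RAM:

```python
# Hypothetical sketch, not the add-on's real implementation.
import bpy

def add_cached_sequence_node(scene, first_frame_path, frame_count):
    tree = scene.node_tree
    img = bpy.data.images.load(first_frame_path)   # first frame of the cached sequence
    img.source = 'SEQUENCE'                         # read subsequent frames from disk

    node = tree.nodes.new('CompositorNodeImage')
    node.image = img
    node.frame_duration = frame_count               # how many cached frames to play back
    node.use_auto_refresh = True                    # re-read the file when the frame changes
    return node
```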
I think it’s because I’m a professional 3D artist myself; I just wanted to automate as much of my work as possible so that I could get back to the important stuff, such as dying repeatedly in Battlefield 1.