I was playing around with the new real-time compositor. It seemed to be working really well (except for the not-yet-supported nodes), until I tried rendering the image. Suddenly the compositing was anything but real time. Even rendering the (slightly modified) default cube with some compositing raised the render time from 0.3s (without compositing) to 15.9s, back to the utter slowness it used to be.
I suppose the real-time compositor isn't used for final rendering because of the missing nodes? Is there a way of forcing Blender to use the new compositor for final renders, even though I'd be missing out on some nodes? It feels almost sadistic, giving us a taste of how fast compositing could be, but not letting us use it for final renders.
Have you considered that this has nothing to do with the real-time compositor? That is to say, even if you disable the real-time compositor, rendering the default cube with your compositor nodes will still take 15 seconds. You can very easily confirm this to be true
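For instance, something like this in the scripting workspace should show whether the node tree itself is the bottleneck (a rough sketch, assuming the scene already has its compositor nodes set up):

```python
import time
import bpy

scene = bpy.context.scene

def timed_render():
    t0 = time.perf_counter()
    bpy.ops.render.render(write_still=False)  # same as pressing F12
    return time.perf_counter() - t0

# Render with the compositor node tree applied
scene.render.use_compositing = True
with_comp = timed_render()

# Same render, but skip the compositing step entirely
scene.render.use_compositing = False
without_comp = timed_render()

print(f"with compositing: {with_comp:.2f}s, without: {without_comp:.2f}s")
```

If the two numbers are nearly identical, the old compositor isn't what's eating your 15 seconds.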
i don't think they mean that the realtime compositor is 'slowing down' the final render composite any more than it normally is, it's just that it isn't being used despite being much faster. which makes sense, as they pointed out that it doesn't support all nodes yet. i actually forgot this was the case myself; luckily my composite effects only seem to add a half second or so per frame.
i thought i remembered, from an early test build when the compositor had to be enabled in the experimental preferences, that there was an option to use gpu acceleration for the final composite too, but i don't see it anymore. also i could have been hallucinating.
But it's an old attempt at speeding up the compositor, and it's not related to the viewport compositor.
For now the viewport compositor is only meant to work in the viewport.
Maybe by doing a viewport render you can force it to write the composited result to disk (rough sketch of that below).
So yeah, it's real-time preview, but not real-time rendering, a bit like EEVEE.
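Something along these lines might do it (an untested sketch, it just drives View > Viewport Render Image from Python):

```python
import bpy

scene = bpy.context.scene
scene.render.filepath = "/tmp/viewport_comp.png"

# Equivalent of View > Viewport Render Image in the 3D viewport;
# what gets saved is the viewport result, including the real-time compositor.
bpy.ops.render.opengl(write_still=True)    # single frame
# bpy.ops.render.opengl(animation=True)    # or the whole frame range
```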
Has that OpenCL option ever helped for anyone? I never noticed a real speed increase. I just tested it: it went from 16.49s without OpenCL to 16.39s with OpenCL, well within the margin of error for a single measurement.
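For reference, this is roughly how I compared the two (a sketch from memory; I'm not certain the property is still called use_opencl in current builds):

```python
import time
import bpy

scene = bpy.context.scene
tree = scene.node_tree  # the scene's compositor node tree

def timed_render():
    t0 = time.perf_counter()
    bpy.ops.render.render(write_still=False)
    return time.perf_counter() - t0

# 'use_opencl' is (I believe) the checkbox from the compositor's
# Performance panel; the name may differ between versions.
tree.use_opencl = False
print("without OpenCL:", round(timed_render(), 2), "s")

tree.use_opencl = True
print("with OpenCL:", round(timed_render(), 2), "s")
```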
I'm quite disappointed the real-time compositor only works in the viewport. Having a compositor with a usable speed was the main thing missing from Blender for me, so I was really looking forward to having that real-time compositing in 3.5.
But why have a non-real-time compositor at all if it can be done in real time? I don't see why it seems so normal to you to have both a non-real-time compositor and a real-time compositor.
They need to be implemented in completely different ways.
Drawing the 3D Viewport and drawing a static 2D image in the Image Editor are done in different parts of the code, with different engines.
That sounds redundant, and given that there's a 'Viewport Render Image' (apparently the only way, for now, to use the real-time compositor for real renders?) it also sounds wrong, as that does exactly what you say 'needs to be implemented in completely different ways'?
Edit: 'Viewport Render Image' doesn't work with Cycles apparently? Never knew that.
i'm sure it will get there eventually. the idea of using the viewport render as a workaround occurred to me as well, although when i tried it, it didn't properly handle the framing of the composite effect, showing as a window of composite within the camera frame (not even matching the viewport). might be a bug or limitation.
The image displayed in the 3D viewport in render mode is created and passed directly from Cycles or EEVEE, so the compositor needs to get the pixel/pass data directly from them.
The rendered image in the 2D Image Editor is loaded from a /tmp or \temp directory or from a memory buffer (don't remember exactly), not directly from the rendering engine. So you need to apply all the compositing on top of that data.
I'm afraid I don't understand the real difference. What does 'the compositor needs to get the pixel/pass data directly from them' mean? Why would it matter if it's first stored in memory, and how is it not always first stored in memory? The compositor always needs the full image to work with, as (e.g.) pixels in the top left might be replaced with pixels in the bottom right, no?
When that was released it was indeed providing some speedup, depending on the nodes.
But it's old and I'm not surprised that it's not providing speedups right off the bat. The documentation can probably provide insights.
I find that's a bit unfair of a statement.
And probably we don't have the same definition of compositor.
If we talk about compositing applications, like AE, Nuke, or Fusion, speed isn't really the problem.
Personally I find the individual nodes quite efficient; the main issue is the lack of a cache, which is what lets software like Nuke play back in real time once the cache is filled. A cache also lets you store intermediary renders to speed up the final render.
Since a cache is complex to add, and anyway doesn't fit well with the idea of a compositor inside a 3D app, the idea of a viewport compositor is brilliant. Maybe at some point it will fully replace the current compositor.
But I also suspect a quality loss when using the RT compositor, and it might be better to have something more reliable for final rendering.
Anyway, you might want to look into Natron or Fusion if you need a more advanced and affordable compositing app!
The non-realtime (old) Compositor loads the image from RAM or SSD and performs the comp on the CPU.
And the GPU-accelerated Viewport Compositor performs the comp on the GPU.
There is a plan to improve the performance of the non-realtime Compositor by offloading some computation to the GPU, but the data would need to be passed from RAM/SSD to VRAM anyway. So there will still be differences between comping in the viewport and comping a rendered 2D image or image pass.
The compositor does not need all the data: you can comp on individual passes or even on image chunks, but that has a lot of limitations. Which method is finally chosen depends on many factors, like performance, implementation difficulty, or maintenance cost.
Other factors to consider: a render has more pixels than a viewport preview, and a render composite works with all available data, whereas a viewport composite works only in screen space. Yeah, a screen-space effect is faster than a full composite, that's just the nature of screen-space effects. Not to mention, the render composite has more passes, usually at least Mist, Normal, and Z, none of which the viewport has.
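For what it's worth, those extra passes have to be turned on per view layer before the final composite can use them, something like:

```python
import bpy

# Enable the extra passes on the active view layer so the
# Render Layers node in the compositor exposes them.
view_layer = bpy.context.view_layer
view_layer.use_pass_z = True
view_layer.use_pass_mist = True
view_layer.use_pass_normal = True
```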
Except for the access to more render layers, what extra data does the non-real-time compositor get to work with that the real-time compositor does not? Everything in the compositor happens after rendering and only uses the rendered image(s), no? So isn't that by definition all 'screen space'?
It's just… I'm really not convinced by the arguments made so far. More pixels? Probably not if I'm using a 5K screen (I'm not, but they're not that rare :)) and rendering an animation. The image needs to be loaded into VRAM? That surely doesn't take 15s, so how does that make CPU compositing still needed? Quality loss? That's an argument people made against GPU rendering many years ago, surely that's not an issue anymore, is it? More data to work with? How so, once render layers are supported?
I'm really trying to understand it. I can see no reason why we can't use the real-time compositor for final renders other than that not all nodes are supported yet. And if that really is the only reason, I find it disappointing that we don't have an option to enable it, even though it would be more limited.
The image for the comp is not loaded into VRAM in the case of the old Compositor. I already told you that GPU acceleration in the non-realtime Compositor is planned.
Old (non-realtime) Compositor = CPU = slow
New (GPU based) Viewport Compositor = GPU = fast
As for the difference between your rendering times (0.3s and 15.9s): the final F12 render does a lot more pre-processing before rendering starts, so there will always be some overhead. Also, you didn't give any render settings that might affect the performance.
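Even just pasting the output of something like this would help narrow it down (rough sketch):

```python
import bpy

scene = bpy.context.scene
r = scene.render

# The settings most likely to matter for compositing time
print("engine:          ", r.engine)
print("resolution:      ", r.resolution_x, "x", r.resolution_y,
      "@", r.resolution_percentage, "%")
print("use_compositing: ", r.use_compositing)
print("use_nodes:       ", scene.use_nodes)
if scene.node_tree:
    print("compositor nodes:", [n.type for n in scene.node_tree.nodes])
```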
Yes, you said so, and I'm glad to hear that, but I just don't see why it needs to be a separate system.
Render settings and pre-processing before the render starts don't seem relevant if over 98% of the total render time is used for compositing; am I missing something that might cause such a huge difference? I was called out for being unfair when I said the current compositor is unusably slow, and that might have been a bit harsh, but >15s for some very simple compositing on a 1920x1080 image, which gives me (nearly?) identical results to about half a second using 'Viewport Render Image' (which does use the real-time compositor), is just… not a good speed. I understand the current compositor isn't fast but has more node types to work with, but the difference is just unbelievable. Or put more positively: Blender clearly can do what I want, it's a shame I have to jump through such hoops to do it.
I'm not trying to be difficult, I'm trying to understand why.