Real-time compositor for final render

I was playing around with the new real-time compositor. It seemed to be working really well (except for the not-yet-supported nodes), until I tried rendering the image. Suddenly the compositing was everything but real time. Even rendering the (slightly modified) default cube with some compositing raised the render time from 0.3 s (without compositing) to 15.9 s, back to the utter slowness it used to be :frowning:

I suppose the real-time compositor isn't used for final rendering because of the missing nodes? Is there a way of forcing Blender to use the new compositor for final renders, even though I'd be missing out on some nodes? It feels almost sadistic to give us a taste of how fast compositing could be, but not let us use it for final renders :frowning:

1 Like

Have you considered that this has nothing to do with the real-time compositor? That is to say, even if you disable the real-time compositor, rendering the default cube with your compositor nodes will still take 15 seconds. You can very easily confirm this yourself.
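For example, a minimal timing sketch (assuming the scene already has its compositor nodes set up; run it from Blender's Text Editor or Python console):

```python
# Minimal sketch: time a full render with compositing disabled vs. enabled.
import time
import bpy

scene = bpy.context.scene
for use_comp in (False, True):
    scene.render.use_compositing = use_comp
    t0 = time.time()
    bpy.ops.render.render(write_still=False)
    print(f"use_compositing={use_comp}: {time.time() - t0:.2f}s")
```

If both runs take ~15 s, the time is going into the render itself; if only the second one does, it really is the compositing step.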

I don't think they mean that the realtime compositor is "slowing down" the final render composite any more than it normally is; it's just that it isn't being used, despite being much faster. Which makes sense, as they pointed out that it doesn't support all nodes yet. I actually forgot this was the case myself; luckily my composite effects only seem to add half a second or so per frame.

I thought I remembered, from an early test build when the compositor had to be enabled in the experimental preferences, that there was an option to use GPU acceleration for the final composite too, but I don't see it anymore. Also, I could have been hallucinating. :upside_down_face:

2 Likes

You can use OpenCL to calculate the comp:
(screenshot of the OpenCL option)

But it's an old attempt at speeding up the compositor, and it's not related to the viewport compositor.
For now the viewport compositor is only meant to work in the viewport.
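If you want to flip that OpenCL option from Python, a small sketch (the property belongs to the older compositor code and may be absent from newer builds, hence the guard):

```python
# Sketch: enable the legacy OpenCL option on the compositor node tree,
# guarding against builds where the property no longer exists.
import bpy

tree = bpy.context.scene.node_tree  # the scene's compositor node tree
if hasattr(tree, "use_opencl"):
    tree.use_opencl = True
```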

Maybe by doing a viewport preview you can force it to render to disk.

So yeah, it's a realtime preview, but not realtime rendering, a bit like EEVEE.

Has that OpenCL option ever helped anyone? I never noticed a real speed increase. I just tested it: the render went from 16.49 s without OpenCL to 16.39 s with OpenCL, well within the margin of error for a single measurement :slight_smile:

I'm quite disappointed that the real-time compositor only works in the viewport. A compositor with usable speed was the main thing missing from Blender for me, so I was really looking forward to having real-time compositing in 3.5 :frowning:

Where else should it work if not in the viewport?

1 Like

In the final render, that's… the title of my post?

1 Like

That's what the non-realtime Compositor is for…

1 Like

But why have a non-real-time compositor if it can be done in real time? I don't see why it seems so normal to you to have both a non-real-time compositor and a real-time compositor.

They need to be implemented in a completely different manner.
3D viewport drawing and drawing a static 2D image in the Image Editor are done in different parts of the code, with different engines.

1 Like

That sounds redundant, and given that there's a "Viewport Render Image" (apparently the only way, for now, to use the real-time compositor for real renders?), it also sounds wrong, as that does exactly what you say "needs to be implemented in a completely different manner"?

Edit: "Viewport Render Image" doesn't work with Cycles, apparently? I never knew that.

I'm sure it will get there eventually. The idea of using the viewport render as a workaround occurred to me as well, although when I tried it, it didn't properly handle the framing of the composited effect, showing it as a window of composite within the camera frame (not even matching the viewport). Might be a bug or a limitation.
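For anyone who wants to try the same workaround scripted, this is roughly it (a sketch; the operator may need an open 3D Viewport to pick up the view):

```python
# Sketch: a "viewport render", which goes through the draw engine and
# therefore the realtime compositor (when enabled in the viewport).
# write_still=True saves the result to the scene's output path.
import bpy

bpy.ops.render.opengl(view_context=True, write_still=True)
```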

1 Like

The image displayed in the 3D viewport in rendered mode is created and passed directly from Cycles or EEVEE, so the compositor needs to get the pixel/pass data directly from them.

The rendered image in the 2D Image Editor is loaded from a /tmp or \temp directory, or from a memory buffer (I don't remember exactly), and not directly from the rendering engine. So you need to apply all the compositing on top of that data.
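To make that concrete: the final composite always starts from a Render Layers node reading that stored render result. A minimal sketch of such a setup (the node identifiers are the standard bpy ones):

```python
# Minimal sketch: Render Layers -> Blur -> Composite. The Render Layers
# node reads the stored render result, not the live engine output.
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")
blur = tree.nodes.new("CompositorNodeBlur")
comp = tree.nodes.new("CompositorNodeComposite")
tree.links.new(rl.outputs["Image"], blur.inputs["Image"])
tree.links.new(blur.outputs["Image"], comp.inputs["Image"])
```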

1 Like

I'm afraid I don't understand the real difference. What does "the compositor needs to get the pixel/pass data directly from them" mean? Why would it matter whether it's first stored in memory, and how is it not always first stored in memory? The compositor always needs the full image to work with, as (e.g.) pixels in the top left might be replaced with pixels from the bottom right, no?

When that was released, it did indeed provide some speedup depending on the nodes.
But it's old, and I'm not surprised that it's not providing speedups right off the bat. The documentation can probably provide some insight.

I find that's a bit of an unfair statement :smiley:
And we probably don't have the same definition of a compositor.
If we're talking about compositing applications, like AE, Nuke, or Fusion, speed isn't really the problem.
Personally I find the individual nodes quite efficient; the main issue is the lack of a cache, which is what lets software like Nuke play back in real time once the cache is filled. A cache also allows storing intermediary renders to speed up the rendering.

Since a cache is complex to add, and in any case doesn't fit well with the idea of a compositor inside a 3D app, the idea of a viewport compositor is brilliant. Maybe at some point it will fully replace the current compositor.
But I also suspect a quality loss when using the RT compositor, and it might be better to have something more reliable for final rendering.

Anyway, you might want to look into Natron or Fusion if you need a more advanced and affordable compositing app!

1 Like

The non-realtime (old) Compositor loads the image from RAM or SSD and performs the comp on the CPU.
The GPU-accelerated Viewport Compositor performs the comp on the GPU.

There is a plan to improve the performance of the non-realtime Compositor by offloading some computation to the GPU, but the data would still need to be passed from RAM/SSD to VRAM. So there will still be differences between comping in the viewport and comping a rendered 2D image or image pass.

The compositor does not need all the data. You can comp on individual passes or even on image chunks, but that has a lot of limitations. Which method is finally chosen depends on many factors, like performance, implementation difficulty, and maintenance cost.
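A toy illustration of the chunked idea (not Blender's actual implementation): per-pixel operations can run tile by tile, while anything with spatial extent, like a blur, would need overlapping tiles; that's one of those limitations:

```python
# Toy illustration of comping on image chunks (not Blender's code).
# Works for per-pixel ops; ops with spatial extent need tile overlap.
import numpy as np

def comp_in_chunks(img, op, tile=256):
    out = np.empty_like(img)
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            out[y:y + tile, x:x + tile] = op(img[y:y + tile, x:x + tile])
    return out

frame = np.random.rand(1080, 1920, 4).astype(np.float32)  # stand-in RGBA pass
brighter = comp_in_chunks(frame, lambda t: np.clip(t * 1.2, 0.0, 1.0))
```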

2 Likes

Other factors to consider: a render has more pixels than a viewport preview, and a render composite works with all available data, whereas a viewport composite works only in screen space. A screen-space effect is faster than a full composite; that's just the nature of screen-space effects. Not to mention, the render composite has more passes, usually at least Mist, Normal, and Z, none of which the viewport has.
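(For reference, those passes have to be enabled per view layer before the compositor sees them; a quick sketch:)

```python
# Sketch: enable the extra passes on the active view layer so they show
# up as outputs on the Render Layers node in the compositor.
import bpy

vl = bpy.context.view_layer
vl.use_pass_mist = True
vl.use_pass_normal = True
vl.use_pass_z = True
```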

Other than the access to more render layers, what more data does the non-real-time compositor get to work with that the real-time compositor does not? Everything in the compositor happens after rendering and only uses the rendered image(s), no? So isn't that by definition all "screen space"?

It's just… I'm really not convinced by any of the arguments made so far. More pixels? Probably not if I'm using a 5K screen (I'm not, but they're not that rare :)) and rendering an animation. The image needs to be loaded into VRAM? That surely doesn't take 15 s, so how does that make CPU compositing still necessary? Quality loss? That's an argument people made against GPU rendering many years ago; surely that's not an issue anymore, is it? More data to work with? How so, once render layers are supported?
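On the VRAM point, a quick back-of-envelope check (my own assumed numbers, not a benchmark):

```python
# Rough estimate: a 1920x1080 RGBA float32 buffer copied over a
# conservative ~8 GB/s PCIe link (assumed figure).
width, height, channels, bytes_per = 1920, 1080, 4, 4
size_mb = width * height * channels * bytes_per / 1e6  # ~33 MB
pcie_gb_per_s = 8
# MB divided by GB/s conveniently comes out in milliseconds.
print(f"{size_mb:.0f} MB -> ~{size_mb / pcie_gb_per_s:.1f} ms transfer")
```

Milliseconds, not seconds, so the transfer alone can't explain a 15 s composite.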

I'm really trying to understand this. I can see no reason why we can't use the real-time compositor for final renders, other than that not all nodes are supported yet. And if that really is the only reason, I find it disappointing that we don't get an option to enable it, even if it would be more limited.

The image for the comp is not loaded into VRAM in the case of the old Compositor. I already told you that GPU acceleration in the non-realtime Compositor is planned.

Old (non-realtime) Compositor = CPU = slow
New (GPU-based) Viewport Compositor = GPU = fast

As for the difference between your render times (0.3 s and 15.9 s): the final F12 render does a lot more pre-processing before rendering starts, so there will always be some overhead. Also, you didn't give any render settings that might affect the performance.

1 Like

Yes, you said so, and I'm glad to hear it, but I just don't see why it needs to be a separate system.

Render settings and pre-processing before the render starts don't seem relevant if over 98% of the total render time is spent on compositing; am I missing something that might cause such a huge difference? I was called out for being unfair when I said the current compositor is unusably slow, and that might have been a bit harsh. But >15 s for some very simple compositing on a 1920x1080 image, giving me (nearly?) identical results to the roughly half a second of "Viewport Render Image" (which does use the real-time compositor), is just… not a good speed. I understand the current compositor isn't fast but has more node types to work with; still, the difference is just unbelievable. Or, put more positively: Blender clearly can do what I want, and it's a shame I have to jump through such hoops to do it.

I'm not trying to be difficult, I'm trying to understand why.