- What your Compositing setup looks like (screenshot)
- Which render passes did you enable?
- Did you check your CPU utilization during compositing of the final render? Are all cores doing work, or only one?
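(If it helps: here's a quick way to dump the enabled passes from Blender's Python console. Just a minimal sketch; the exact set of `use_pass_*` toggles depends on the render engine.)

```python
import bpy

# Print every render-pass toggle on the active view layer.
# The available use_pass_* properties vary with the render engine.
view_layer = bpy.context.view_layer
for prop in view_layer.bl_rna.properties:
    if prop.identifier.startswith("use_pass_"):
        print(prop.identifier, "=", getattr(view_layer, prop.identifier))
```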
- Streaks and ghosting to create some lens flares. Apparently it's a WHOLE lot faster with e.g. just a blur effect, so I suppose I'm using nodes that are extremely slow? And those will probably benefit the most from the planned GPU acceleration? Then all will be well, except for my lack of understanding of the need to have two separate systems
- Only "combined"
- None of the cores went to 100%, and seemingly only 2, maybe 3 logical cores were really used while compositing.
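(For reference, a rough way to log per-core load while the composite runs. This is a sketch that assumes the third-party psutil package, which is not bundled with Blender, and it should run in a separate Python process so Blender is free to render.)

```python
import psutil  # third-party; assumed installed, not bundled with Blender

# Sample per-core utilization once per second for ~15 seconds
# while Blender is compositing in another process.
for _ in range(15):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    busy = sum(1 for pct in per_core if pct > 50.0)
    print(f"{busy} logical cores above 50%:", per_core)
```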
If the majority of that 15 s is spent on comp as you say, that looks like it is because the old Compositor is using the CPU.
I assume you render on a GPU? Please specify.
This is how data is transferred during rendering as far as I understand it (anyone correct me if I'm wrong):
F12 render:
- Scene pre-processing on CPU.
- Scene data is sent to VRAM.
- GPU renders the scene.
- Rendered Image (or passes) is saved to a buffer (RAM).
- Compositor reads image/passes from RAM.
- CPU comps the image and stores it in RAM.
- Image is sent back to the GPU for display.
Viewport rendering:
- If you have Rendered viewport shading enabled, your scene is already prepared and present in VRAM.
- GPU renders the scene and stores result in VRAM.
- Viewport Compositor reads the image/passes from VRAM.
- GPU comps the image and stores it in VRAM.
- Image is displayed on screen by the drawing engine.
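To pin down the "I assume you render on a GPU?" question above, the engine and (for Cycles) the compute device can be queried from the Python console. A minimal sketch using standard bpy properties:

```python
import bpy

scene = bpy.context.scene
print("Render engine:", scene.render.engine)  # e.g. 'BLENDER_EEVEE' or 'CYCLES'

# Cycles only: the device the scene requests, and what the user
# preferences allow ('NONE' means CPU only; otherwise CUDA/OptiX/HIP/...).
if scene.render.engine == 'CYCLES':
    print("Scene device:", scene.cycles.device)
    prefs = bpy.context.preferences.addons['cycles'].preferences
    print("Compute device type:", prefs.compute_device_type)
```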
If you have a comp node that is expensive to calculate (like glare), it will be orders of magnitude slower on a CPU compared to a GPU. And a Ryzen 5 1600 is not exactly a speed demon.
Add to that the fact that with the CPU-based compositor the data needs to leave GPU memory, and that also takes time when you have a high-resolution render with lots of passes.
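As a back-of-the-envelope illustration (assumed numbers, not measurements), here is what just the VRAM-to-RAM copy of a multi-pass 4K frame could look like:

```python
# Rough estimate of the data a CPU compositor must pull out of VRAM.
# All figures below are assumptions for illustration only.
width, height = 3840, 2160      # 4K UHD frame
channels = 4                    # RGBA
bytes_per_channel = 4           # 32-bit float
passes = 8                      # combined plus a handful of extra passes

total_bytes = width * height * channels * bytes_per_channel * passes
print(f"{total_bytes / 1024**2:.0f} MiB per frame")   # ~1013 MiB

pcie_bytes_per_s = 8 * 1024**3  # assumed effective PCIe throughput, 8 GiB/s
print(f"~{total_bytes / pcie_bytes_per_s * 1000:.0f} ms just for the copy")
```

Even under those generous assumptions the copy alone costs on the order of a tenth of a second per frame, and the CPU-side filtering on top of that is where the rest of the time goes.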
Maybe have an additional look at some definition of real-time (for example Wikipedia):
Real-time or real time describes various operations in computing or other processes that must guarantee response times within a specified time (deadline), usually a relatively short time. A real-time process is generally one that happens in defined time steps of maximum duration and fast enough to affect the environment in which it occurs, such as inputs to a computing system.
So compositing is not in real time (for example: your machine would have to have the complete video material already in RAM to do so…), but:
- modern GPU compositing is very fast
- traditional CPU compositing has more filters implemented (so far)
… so therefore the different implementation/usage of the compositing rendering (not the 3D image rendering)…
As you can see, the development of Blender tries to use the next best thing… and maybe you are expecting a bit too much(?)… Like others also often say: for compositing there is other software better suited (for now)… because it has a longer, more full-fledged history of doing it…
So the (almost) real-time compositing is (for the time being) there to make the compositing experience more convenient for the user (and maybe it will be all GPU in the future…).
I'm using Eevee; I assume that always uses the GPU? I did use the OpenCL option for the compositor, but I believe I read somewhere long ago that it doesn't work for all nodes?
Is the rendered image (for a normal render) kept in VRAM while rendering? I suppose not, but if it is, I don't see why the extra step of saving it to RAM and then reloading it into VRAM would be needed.
Thing is: I always thought that the bloom effect in Eevee was a kind of post processing similar to what the compositor would do. Was I wrong in thinking that?
The glare node indeed seems to be the big problem here. Like I said: a simple blur is a lot faster.
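For anyone who wants to reproduce that comparison, here is a rough A/B sketch for Blender's Python console (node type names are the standard bpy identifiers; note it clears the compositor tree, so use a throwaway copy of the file):

```python
import time
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

def time_composite(node_type):
    # WARNING: destructive; wires Render Layers -> test node -> Composite.
    tree.nodes.clear()
    rlayers = tree.nodes.new('CompositorNodeRLayers')
    test = tree.nodes.new(node_type)
    out = tree.nodes.new('CompositorNodeComposite')
    tree.links.new(rlayers.outputs['Image'], test.inputs[0])
    tree.links.new(test.outputs[0], out.inputs['Image'])
    start = time.time()
    bpy.ops.render.render()  # blocking F12 render + composite
    return time.time() - start

# Each timing includes the scene render itself, so the *difference*
# between the two results approximates the cost of the node.
print("Glare:", time_composite('CompositorNodeGlare'))
print("Blur: ", time_composite('CompositorNodeBlur'))
```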
Don't insult my Ryzen 5, it's gotten very sensitive over the years
Ok, so now I partly understand why the old compositor is so slow, but still not why I don't have the option of using the new compositor for final renders. As long as some nodes don't work with the new system, it is a good idea to still have the old compositor handle everything for final renders, by default anyway, but it would be very nice to have the option to use the newer, much faster implementation.
In fact, I would have expected the devs to have exchanged the old slow nodes for the new fast nodes. But I suppose that wouldn't work because of how the old compositor is incompatible with the new implementation of the nodes?
Either way, thanks for your patience, I understand I was starting to sound like I was just being difficult, and for that I am sorry.
Real time is very relative, and I never liked using that name in this thread, but I believe that's what the developers call it?
I don't mind waiting an extra second per frame for some complicated compositing, but it is frustrating to have to wait 15 extra seconds for something you've just witnessed being done in (near) real time.
Maybe have a look at the source:
As a first step, this new back-end will be used to power the Viewport Compositor, a new shading option that applies the result of the Compositor Editor node-tree directly in the 3D Viewport. Artists will not have to wait for a full render to start compositing, allowing for faster and more interactive iterations on one's projects.
In the long term, the goal is for it to power the existing Compositor Editor.
Yes, I missed that, thanks for pointing it out to me!
"Real time" is both a blessing and a curse. A 3D game has to use it to get the job done, and therefore has to accept whatever compromises the technology may bring. (Hoping that the game player will never notice, as he is busy fighting off hordes of nasty aliens…) But if you are "simply compositing," it doesn't actually have to be "real time." You just want it to take "fifteen seconds" instead of "fifteen minutes." If the algorithms used by real-time games are "good enough for you," then congratulations: you just saved time. But the audience is never going to know, nor care, how long it took you to do it.
@Okidoki If only this was the first reply, it would have saved so much time!
@Gaeriel While the above post says that yes, the goal is to have the real-time compositor power the final render as well, to directly respond to your question as to why this isn't yet the case, the answer is:
Ran out of Time.
The following comment's excerpts are specific to the glare node, but I feel they can be expanded to what Omar is doing in general.
In the context of the current real time compositor project, where we are aiming for a v3.5 release in weeks, there just isn't the space and time to do the aforementioned process properly. So it is clear to me that we need to get the real time compositor in a good state first in order to spend as much time as we need on developing the compositing workflow of the future.
While the real time compositor project on the surface seems like a project to get a fast GPU-accelerated compositor, to me, perhaps more importantly, it is a project to get the compositor in a better shape maintainability-wise. All the old convoluted code is now, to the best of my abilities, reverse engineered, organized, and documented to ease future development. So this is just a slow start for the journey of creating the compositor of the future, so ride along and bear with me.
Sorry for not seeing and reading everything instantly.