Does anybody know if Eevee speed gains on Mac are expected with the Metal port / Eevee Next? I read that PC beta testers were seeing higher Eevee frame rates, and I was curious whether the hugely parallelised Apple Silicon integrated GPUs should expect similar gains from the new architecture.
Ooooh, very promising! I don’t mind the delay due to full calendars, since I’m waiting for the new Mac Pro anyway. They’ll miss 3.4, but it might make it into 3.5.
I doubt that similar gains will come to my ageing Intel iMac Pro, although it might still be faster. I’ve been rendering client work with Eevee and I’m already very happy with the time per frame.
I’ve just noticed that Eevee Next is no longer in the Blender 3.4 Beta and has moved on to the Blender 3.5 Alpha… 3.5 is a great number… it reminds me of the big advancement that came with version 2.5. Eevee Next in 3.5!
It would be nice if we got some new effect nodes for the realtime compositor. Just porting the old ones to keep compatibility with some 2.79 projects is a bad idea.
The Glare node was never as nice-looking and versatile as the Glow effect in After Effects, for example…
There are so many post-processing effects coming over from the gaming realm;
they could release a whole effects library based on shaders. It would be glorious!
Also, the modifications from Goo Engine could be transferred to master; they basically implemented their own post-processing effect shader module already.
That’s how they implemented it in Goo Engine: they build the 2D shader with Blender shader nodes and then look through a plane with that shader applied.
I was thinking more of fixed effects, though, like bloom or AO, but there are so many available. Like a built-in ReShade module (maybe even with a programming interface so we can paste our own code)…
But now that you say it, of course it would be nice if they ported the shader nodes to the compositor too.
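For anyone curious what a “fixed effect” like bloom actually does under the hood, here is a minimal, hypothetical NumPy sketch (not Blender’s actual implementation): isolate pixels above a brightness threshold, blur them, and additively blend the result back over the image. The function name and parameters are illustrative assumptions, and the box blur stands in for the Gaussian blur a real implementation would use.

```python
import numpy as np

def bloom(image, threshold=0.8, radius=4, strength=0.5):
    """Naive bloom: isolate bright pixels, blur them, add them back."""
    # Keep only pixels brighter than the threshold.
    bright = np.where(image > threshold, image, 0.0)
    # Cheap separable box blur as a stand-in for a Gaussian blur.
    blurred = bright
    for axis in (0, 1):
        acc = np.zeros_like(blurred)
        for offset in range(-radius, radius + 1):
            acc += np.roll(blurred, offset, axis=axis)
        blurred = acc / (2 * radius + 1)
    # Additive blend, clamped to the displayable [0, 1] range.
    return np.clip(image + strength * blurred, 0.0, 1.0)

# A dark frame with a single bright pixel: bloom spreads the highlight
# into the neighbouring pixels, producing the familiar glow.
frame = np.zeros((16, 16))
frame[8, 8] = 1.0
result = bloom(frame)
```

A GPU version would be the same three passes (threshold, blur, blend) as fragment shaders, which is why these effects are cheap enough for realtime use.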
In case you’re not around devtalk: they expect 3.5 to have the initial stable release of the realtime compositor.
Also, on why they spend time porting old compositor nodes instead of implementing new and better methods:
I understand, and I want that myself as I mentioned before. But this is also the reason why I implemented the old methods, as I will outline below.
You see, the process of developing a new Glare node would probably start with elaborate discussions on what glare methods to implement, what the user experience will be, which methods are computationally feasible, and numerous other considerations. Those discussions and investigations would take a long time because users like yourself care deeply about the new Glare and would like the design to be perfect and solid. So it will not be as easy as choosing some state-of-the-art glare methods and implementing them.
In the context of the current real time compositor project, where we are aiming for a v3.5 release in weeks, there just isn’t the space and time to do the aforementioned process properly. So it is clear to me that we need to get the real time compositor in a good state first in order to spend as much time as we need on developing the compositing workflow of the future.
Furthermore, deprecating the old Glare is an unclear goal. It is unclear whether it should be replaced, augmented, improved, or just left as is, and this is subject to further discussion as well. Whatever the decision, implementing the old Glare seems inevitable, and not doing it now would leave the real time compositor in an incomplete state.
While the real time compositor project on the surface seems like a project to get a fast GPU-accelerated compositor, to me, perhaps more importantly, it is a project to get the compositor in better shape maintainability-wise. All the old convoluted code is now—to the best of my abilities—reverse-engineered, organized, and documented to ease future development. So this is just a slow start to the journey of creating the compositor of the future, so ride along and bear with me.
Yeah maybe at some point we’ll get to that !
I think what the BF is doing, even if it’s a bit frustrating, is clever.
Right now we have the real time compositor, which is going to be great, but it’s unclear how much it can handle, like on a regular comp shot with a bunch of effects. In that case it’s always possible to fall back to CPU rendering and use the viewport just to preview some parts, so you get the best of both worlds.
Now, we have some nodes that are CPU-only (that’s the case because it’s WIP), but if some other nodes were GPU-only, it would IMO make things a bit worse, especially when you want things to be reliable.
In some ways, having Blender be a bit more hacky, with little super-options like that, would be awesome; on the other hand, I can see how it could quickly become an issue.
The realtime compositor is no longer experimental and will land in 3.5 stable.
The first milestone is finished with regards to implementing the most essential nodes for single-pass compositing. It is also now documented in the manual, and no major issues are known.
It also got a limited compositing region that allows it to use constant-sized textures regardless of the viewport size. It’s only active when the camera has a passepartout of 1.
Input operations now have their domain in the compositing region, so users can create masks that have a constant size regardless of the viewport size, shift, and zoom.
I downloaded Blender 3.4 and enabled Developer Extras in the Preferences. I can then see the Experimental section, but I do not see an option to select Eevee Next. Where are you seeing this?