Removed by Popular Demand
Still not working on ARM-based Macs.
Updated the 1.13 Windows build to 3.0 alpha. Includes the fix for the firefly issue. Going over the test files, it should fix most of the issues with white pixels.
Will update Linux builds a bit later.
Edit: Linux Build updated to 3.0, although with no CUDA currently.
Update about 1.14:
The results with denoising are of mixed quality, so I might look into redoing it. Denoising per render sample, instead of accumulating and then denoising all the samples, leads to increased splotchiness, since the blur samples have huge variance. It can work, but it needs more blur samples than is performance friendly. Also, guided bilateral filtering with depth and normals can introduce some noticeable artifacts. Accumulating and then denoising once, when the maximum defined sample count is reached, is the preferred approach, but it needs some slightly bigger changes.
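The variance argument above can be sketched numerically: a single render sample is a very noisy radiance estimate, while averaging N samples before filtering shrinks the variance by roughly 1/N. This is a toy sketch with made-up numbers, not the branch's actual code:

```python
import random

random.seed(0)

def noisy_sample():
    # Toy one-sample radiance estimate: true value 0.5, heavy noise.
    return 0.5 + random.gauss(0.0, 0.4)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# What a per-sample denoiser has to work with: raw single-sample estimates.
singles = [noisy_sample() for _ in range(10000)]

# What an accumulate-then-denoise pass would see: 16 samples averaged first.
accumulated = [sum(noisy_sample() for _ in range(16)) / 16 for _ in range(10000)]

print(variance(singles))      # ~0.16
print(variance(accumulated))  # ~0.01, roughly 1/16 of the above
```

With the input variance sixteen times smaller, the same blur kernel has far less splotchiness to smear around, which is why denoising once per batch can beat denoising every sample.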
I attempted a quick Windows build with POM, but the patch doesn’t apply cleanly. I might revisit this in more detail when I have some free time.
Since I’m back on Windows again (yeah, I’m flaky), I’ll give it a quick test.
Preview of 1.14 is up on Drive. Includes denoising and mostly minor tweaks.
Only a Linux build is available for now, and half res is completely broken at the moment.
Denoising is unoptimized, so it can slow viewport down quite a lot currently.
Will do a Windows build later.
Edit: Windows build also uploaded.
best thing after blender is your blender build
… and life is too short 4 cycles
thank u man
- High Bit Depth Crash
- Half Res Trace gives me an artifact
- I set 128 samples, I think it’s better than 64
- also changed the denoise parameter for a better result
Anyway, good job.
It states: not found
Yeah, half res is fundamentally broken at the moment in 1.14.
I couldn’t replicate the crash with high bit depth shadows.
The depth weight’s internal multiplier seems to be way too high in some cases. It was initially set that high to minimize bleeding from one object to another when they have similar view normals. Decreasing the depth weight can improve the denoising in some scenes.
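As a rough illustration of the depth-weight issue (the weight function, multiplier values, and parameter names here are hypothetical, not the branch's actual filter):

```python
import math

def bilateral_weight(depth_diff, normal_dot, depth_mult, normal_power=8.0):
    # Depth term rejects samples at different depths (limits bleeding
    # between objects); normal term rejects differing surface orientations.
    w_depth = math.exp(-abs(depth_diff) * depth_mult)
    w_normal = max(normal_dot, 0.0) ** normal_power
    return w_depth * w_normal

# On a slanted surface, neighboring pixels have small depth differences even
# though they belong to the same object; a very high multiplier rejects them:
print(bilateral_weight(0.05, 1.0, depth_mult=32.0))  # ~0.20
print(bilateral_weight(0.05, 1.0, depth_mult=4.0))   # ~0.82
```

With the high multiplier, most neighbors contribute almost nothing, so the filter barely averages and the noise stays, which matches the observation that lowering the depth weight helps in some scenes.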
Higher denoise samples work a lot better, but with a GTX 1060 the viewport isn’t very responsive anymore.
The UI doesn’t show it, but the actual blur sample count is ((2 * Sample Count) + 1) * 2, so the actual cost goes up pretty fast.
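Plugging UI values into that formula shows how quickly the tap count grows:

```python
def actual_blur_samples(ui_samples):
    # Actual filter taps per the formula above: ((2 * N) + 1) * 2.
    return ((2 * ui_samples) + 1) * 2

print(actual_blur_samples(10))  # 42
print(actual_blur_samples(16))  # 66
```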
16 render samples - 10 (UI) filter samples:
16 render samples - no filtering:
The biggest issue is that it would make a lot more sense to accumulate some amount of samples and then denoise the accumulated result in one pass. I’m not sure how to handle implementing it at the moment, but I’m considering accumulating 4-16 samples, denoising them once, and only then writing to the viewport. That should help keep the performance impact on the viewport a lot smaller, with a potentially better quality result, while still giving some kind of preview without waiting for the max sample count to be reached.
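A sketch of that accumulate-then-denoise idea (the batch size of 8 and the helper functions are made up for illustration, not taken from the branch):

```python
import random

random.seed(1)

def sample():
    # Toy render sample: true value 0.5 plus noise.
    return 0.5 + random.gauss(0.0, 0.3)

def denoise(value):
    # Stand-in for the (expensive) bilateral filter.
    return value

def render_loop(max_samples, denoise_every=8):
    # Accumulate render samples and only run the filter on the running
    # average every `denoise_every` samples, instead of after every sample.
    accum = 0.0
    filter_runs = 0
    for s in range(1, max_samples + 1):
        accum += sample()
        if s % denoise_every == 0 or s == max_samples:
            denoise(accum / s)  # only this result would hit the viewport
            filter_runs += 1
    return filter_runs

# 64 render samples trigger the filter 8 times instead of 64.
print(render_loop(64))  # 8
```

The viewport still updates periodically, but the filter cost drops by the batch size, and each filter invocation sees a lower-variance input.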
Currently it’s denoising a mess like this every render sample:
1 render sample:
And denoising that isn’t working that great.
1 render sample - 10 filter samples
There are ways to make the filter significantly faster, but one of the main optimizations introduces some smaller artifacts, so reducing how often the filter runs per frame seems like a better initial improvement.
Something else I want to look into when I get the chance is taking the rays that don’t hit anything and adding user-defined sky and ground colors to those rays based on ray direction, so there’s some fallback. Ideally it would later sample from the probes / world. It would also need some way to dampen the default indirect diffuse world lighting, since its occlusion without baked probes is only GTAO, and that doesn’t handle larger-scale occlusion at all with default settings.
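A minimal sketch of that miss-ray fallback (the color values and the linear blend are assumptions, not the planned implementation):

```python
def miss_color(ray_dir_z, sky=(0.6, 0.7, 0.9), ground=(0.25, 0.2, 0.18)):
    # Rays that hit nothing get a user-defined color, blended by the
    # vertical component of the ray direction (ray_dir_z in [-1, 1]).
    t = ray_dir_z * 0.5 + 0.5  # remap to [0, 1]: down -> ground, up -> sky
    return tuple(g + (s - g) * t for s, g in zip(sky, ground))

print(miss_color(1.0))   # straight up: the sky color
print(miss_color(-1.0))  # straight down: the ground color
```

Sampling the actual world / probes instead of two flat colors would slot in at the same point: wherever the screen-space trace reports a miss.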
I think there might be a blog post about Eevee next week (according to D. Felinto’s comment on the 2021 roadmap blog post), so it will be interesting to see whether there’s a point in supporting this branch in the future.
I might be able to set up a Mac build on my end sometime next weekend. No promises on the exact time though.
Let’s cross our fingers.
Not holding out much hope for ARM-based Macs, but I did make another build today based on master, so it’s now versioned 3.0.
It’s working on Intel/AMD Radeon Macs.
Very early first tests of improving the world lighting.
Currently SSGI is just added on top of the default diffuse probe lighting. World lighting in Eevee doesn’t cast shadows and is only slightly occluded by AO, so adding extra SSGI light on top of an already too-bright result leads to visible energy conservation issues, mainly showing up as glowing intersections.
So I’m testing injecting world lighting into SSGI rays that don’t hit anything in screen space, giving some approximation of large-scale occlusion in screen space. I’m not sure yet if I want to try to completely replace the default diffuse lighting or derive an additional AO factor from this to multiply the default world lighting pass with.
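The derived-AO-factor option could look something like this (purely illustrative; the per-pixel ray counts are made up):

```python
def world_visibility(miss_count, total_rays):
    # Fraction of screen-space SSGI rays that escape without hitting
    # geometry. Multiplying the default world-lighting pass by this factor
    # would occlude it at large scale, instead of adding SSGI light on top
    # of an already unoccluded result.
    return miss_count / total_rays

print(world_visibility(3, 16))   # mostly blocked pixel -> 0.1875
print(world_visibility(14, 16))  # mostly open pixel -> 0.875
```

The appeal of this route is energy conservation: the world light is scaled down where rays are blocked rather than having extra bounce light piled on, which is exactly the glowing-intersection problem described above.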
Examples with only world lighting and no additional light sources:
SSGI has no subsurface scattering, just the default BSDF color input. Also, the occlusion is somewhat better on the specular / glossy component compared to default diffuse, because screen space reflections replace cubemaps when possible, but some light leaking is still visible.
I’m using a linear gradient in one direction in these tests to have some directionality to the lighting, but it’s mostly comparable to overcast lighting.
Notice large-scale objects having actual occlusion. GTAO handles finer details very well, but it tends to darken excessively if set to a larger size.
There are also some areas that don’t get occluded as well as they should (last example: around the first floor, where the lion’s head sculpture is). Some rays that shouldn’t add light at all are currently included, but their contribution seems small enough that it doesn’t drastically change the results.
If you look at the 3rd image of the last figure, the default + AO has a better “bevel shadow” compared to the 4th image. Could you add an AO with a small distance to recreate an alternative to the bevel node (which works only in Cycles)?
Very interesting tests, been waiting for a good solution to this! How does Unreal Engine do it? Are they relying on light maps, or…? I haven’t seen exactly how their new real-time thingy works…
In UE4 their SSGI comes with its own AO that replaces the default one. It’s pretty soft and larger scale compared to the old SSAO, so it’s likely that they get their AO from the SSGI tracing; the overall look fits. For world lighting, the closest thing in UE4 to Eevee’s world probe lighting is their Skylight. It’s baked if it’s static, but if it’s movable there’s an option to occlude it with distance fields.
“Screen-space traces handle tiny details, mesh signed distance field traces handle medium-scale light transfer and voxel traces handle large scale light transfer.” - seems like generally same approach, but with GI and voxel tracing and integrated into one system.
Overall it’s an area I’ve been frustrated with in rasterization. Since the introduction of PBR and image-based lighting, AO technically shouldn’t be included in the albedo, but realtime rendering fails to occlude in a lot of cases. You can’t really bake lighting on characters, so we’re still stuck faking AO with extra transparent geometry or with extra fake AO in shaders. At least with Eevee the soft shadows work well enough that you can skip ambient lighting and fake it with a huge number of lights that are shadowed correctly, but that’s only feasible with turntable-style renders.
You mean essentially an extra AO pass that’s only used for the AO node? The AO node works in Eevee, but its settings are tied to the scene’s AO render settings, so it has limited usability without rendering an extra AO pass.
https://forum.unity.com/threads/next-gen-screen-space-edge-bevel.913931/ is the closest implementation I could find for faking bevels in rasterization. I don’t have a good enough idea of Eevee’s rendering pipeline to speculate how feasible it would be to implement, but it would have to run before any other steps are calculated, so the mask of edges to be filtered with AO would have to be carried over from the previous frame / render sample. Overall it seems quite involved to pull off in Eevee.
Some of these tests are blowing my mind. I will keep repeating this, but how BF is not putting you on payroll and equipping you with a small team to support you is beyond me. It seems to me that the work you’re doing on EEVEE is revolutionary and mind-blowing.
The problem with that unity bevel shader is that it only works with deferred rendering and Eevee uses forward rendering.
Maybe this can help?:
Mobile-friendly Bevel Shader (Unity) (quizcanners.com)
Or Eevee edge shader?:
Charly Barrera (gumroad.com)
I recently learned about forward vs deferred rendering; I thought Eevee was deferred? Is that why we can see each pass, and why the old DOF had those artifacts? I wonder how hard it would be to have both, since Unity has them and it doesn’t seem to be a problem for them…
Is this working with nvidia GTX 1080 cuda cards? (no optix)
It works on my old 970, so yeah.