Nanite-like rendering: the end of GPU memory limits?

The main critique, as far as I can see, is aimed at people who praise these techniques beyond what they actually are, and it does so by pointing out their weaknesses. Understanding what cannot be done is just as important as knowing what can be done.


If I understand this correctly, Nanite stores geometry more like textures, with something analogous to mipmaps and texture compression. The idea behind texture compression is that it is cheap enough to be decompressed on the fly while drawing, so nothing has to be (completely) expanded in RAM. That also explains the limitations it has: only static geometry,…
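Just to make the “mipmaps for geometry” analogy concrete, here is a rough, hedged Python sketch of picking a level of detail per cluster from its size on screen, much like a mip level is picked for a texture; only that level would then need to be streamed in and decompressed. All names and numbers are made up; this is not Nanite’s actual data layout or selection logic.

```python
import math

def pick_lod(cluster_screen_height_px, triangles_per_level):
    """Return the finest level whose triangles are still about a pixel or larger."""
    for level, tri_count in enumerate(triangles_per_level):  # ordered fine -> coarse
        tris_across = math.sqrt(tri_count)                   # rough triangle count along one axis
        pixels_per_tri = cluster_screen_height_px / max(tris_across, 1.0)
        if pixels_per_tri >= 1.0:
            return level     # enough detail; finer levels would go sub-pixel
    return len(triangles_per_level) - 1                      # tiny on screen: coarsest level

# Example: one mesh cluster with 4 pre-simplified levels (like mip levels).
levels = [65536, 16384, 4096, 1024]   # triangles per level, fine -> coarse
print(pick_lod(900.0, levels))        # fills the screen -> level 0 (finest)
print(pick_lod(40.0, levels))         # small on screen  -> level 3 (coarsest)
```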

100% agreed on that.
For what it’s worth, about half of the videos linked above from their channel is spent talking about the constraints: when it works, when it breaks, what to expect, and how it isn’t really a magic bullet. A very honest approach.

But this is just the natural advancement of technology:
For example, Blender has these light probes that flood-fill objects for GI (forgive me, I’m still quite the beginner at Blender, to be honest), which are quite limited but useful. Lumen’s real-time GI (reflections included) then combines screen-space tracing, tracing against cached off-screen geometry, global distance field scene tracing, and probe filling across the scene. They took what works for each specific case and mixed it all together, providing outstanding results for more generic use cases and eliminating as much of the hassle for users as possible with a more automagical, out-of-the-box approach.
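To illustrate that “mix what works for each case” idea, here is a toy, hedged sketch of such a fallback chain: ask the most accurate representation first and fall back to coarser ones when it can’t answer. The data and function names are invented for illustration; this is not Lumen’s actual code or its exact ordering.

```python
def trace_gi_ray(ray_id, screen_hits, mesh_sdf_hits, global_sdf_hits, probes):
    """Resolve one GI ray using the first representation that can answer it."""
    for source in (screen_hits, mesh_sdf_hits, global_sdf_hits):
        if ray_id in source:
            return source[ray_id]                      # hit found at this level of detail
    return probes.get(ray_id, (0.05, 0.05, 0.08))      # last resort: interpolated probe/sky light

# Toy data: which rays each representation can resolve (made-up RGB values).
screen_hits     = {0: (1.0, 0.9, 0.8)}   # geometry visible on screen
mesh_sdf_hits   = {1: (0.4, 0.4, 0.5)}   # off-screen but covered by per-mesh distance fields
global_sdf_hits = {2: (0.2, 0.2, 0.2)}   # distant geometry in the coarse merged scene
probes          = {3: (0.3, 0.5, 0.9)}   # everything else: probe lighting

for ray in range(5):
    print(ray, trace_gi_ray(ray, screen_hits, mesh_sdf_hits, global_sdf_hits, probes))
```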

Nevertheless, these conversations here have been quite the motivation to learn what’s so good and so bad about these techniques. They definitely have both sides.

Yes! Quite accurate concept-wise, like id’s megatexturing approach in Rage, though apparently it doesn’t translate directly. If you feel like digging into what’s going on behind the scenes, they explain a bit of this in the Nanite video and why it’s a much harder problem than textures (mostly because textures are ‘filterable’ while geometry is not so much); it’s quite interesting, plus it shows A LOT of when, how and where it still breaks.
They try to stay at a one-triangle-per-pixel metric, but the hardware is actually bad at this, so they wrote a software rasterizer for those cases which, if I understand correctly, is several times faster than the hardware rasterizer, running inside “mesh shaders” or “primitive shaders”… which I don’t know exactly what they are; they seem different from compute, geometry or tessellation shaders.
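As a rough, hedged illustration of that split (and nothing more), here is a toy Python sketch that routes geometry clusters to a software or a hardware rasterization path based on estimated triangle size on screen. The threshold, data format and names are all made up; this is not Epic’s actual heuristic.

```python
SMALL_TRI_THRESHOLD_PX = 2.0   # assumed cutoff for illustration, not a real engine constant

def route_clusters(clusters):
    """Split clusters between the software and hardware rasterization paths."""
    software, hardware = [], []
    for name, avg_tri_edge_px in clusters:
        if avg_tri_edge_px <= SMALL_TRI_THRESHOLD_PX:
            software.append(name)    # pixel-sized triangles: fixed-function raster is inefficient
        else:
            hardware.append(name)    # larger triangles: the hardware rasterizer wins
    return software, hardware

clusters = [("statue_head", 0.8), ("statue_base", 6.0), ("rubble", 1.3), ("wall", 24.0)]
sw, hw = route_clusters(clusters)
print("software rasterizer:", sw)
print("hardware rasterizer:", hw)
```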
Geometry can move, rotate and scale; what it can’t really do is deform… that seems to be the main limitation.
All geometry, no matter the number of instances, lights, etc., is a single draw call. The only thing that currently increases the draw calls is the number of materials visible on screen for those meshes. Many lights and many ‘shadow maps’ are also a single draw call (somehow).
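And a tiny, hedged sketch of the “draw calls scale with visible materials” point: the geometry goes through one big pass, then the shading work is bucketed per visible material. The data layout here is invented for illustration and is not how the engine actually batches.

```python
from collections import defaultdict

# Made-up list of visible clusters and the material each one uses.
visible_clusters = [
    ("rock_01", "M_Rock"), ("rock_02", "M_Rock"),
    ("statue", "M_Marble"), ("pillar_01", "M_Marble"), ("door", "M_Wood"),
]

buckets = defaultdict(list)
for cluster, material in visible_clusters:
    buckets[material].append(cluster)

print("geometry pass: 1 draw for", len(visible_clusters), "clusters")
for material, clusters in buckets.items():
    print(f"material pass: 1 draw for {material} covering {clusters}")
```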

I don’t know if it is a good fit for Blender, but it’s amazing tech nevertheless.


When the people who worked or are working on the technical side of that kind of project are allowed to talk, you pretty much always get a very honest view. However, this sort of insight usually only becomes visible once the hype of the initial huge marketing campaign is over.

Again, no one here claimed that this wasn’t an amazing achievement. Yes, it works in many cases out of the box, but there are still many restrictions as soon as you start to look more closely.

Having worked with many people who love to join hype trains, I know they would easily pick this sort of solution because it works for pretty much everything in our upcoming project, simply ignoring that it doesn’t work for some cases which might be very important or which will very likely require some flexibility later on.


I keep saying that if Dreams had a PC version with mouse and keyboard support, along with tablet support, it would be really amazing. It’s such an amazing piece of technology that it’s almost a sin it’s stuck on the PS4, probably waiting for Sony to kill the online servers like they just did with the LittleBigPlanet servers.

Especially since the GI method we’re talking about, Unreal Lumen, is entirely based on ray tracing.
There is very detailed information out there about how it works. It’s not rasterized at all.

It IS rasterized, and then it has a radiance cache ONLY for the GI, highly interpolated as a cache/buffer.
The first shading pass is a RASTERIZER (OpenGL, DirectX, Vulkan, console APIs, etc.); it even has all those runtime calls. Only the GI is irradiance-cached, and it’s a combination of SDF and screen-space techniques, BTW.

Ray tracing/path tracing means that you solve every ray per pixel, including primary shading. The closest to that right now is Brigade from OTOY, which is of course full of noise. Here I am excluding older hybrid cinema renderers like REYES/RT, for example PRMan.
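To make that distinction concrete, here is a heavily simplified, hedged toy contrasting the two: a hybrid frame where primary visibility comes from rasterization and only the indirect light is read from an interpolated cache, versus a path-traced frame where every sample is a full per-pixel light path. Every function and number is invented for illustration; neither side resembles real engine code.

```python
def rasterize_and_shade(p):
    """Stand-in for the raster + material pass resolving primary visibility."""
    return 0.5

def trace_full_path(p, seed):
    """Stand-in for one full light path per sample (hence the per-sample noise)."""
    return 0.4 + 0.2 * ((p * 31 + seed * 17) % 7) / 7.0

def hybrid_frame(pixels, radiance_cache):
    # Primary visibility and direct shading come from the rasterizer;
    # only indirect light is fetched from a cached, interpolated buffer.
    return [rasterize_and_shade(p) + radiance_cache.get(p, 0.1) for p in pixels]

def path_traced_frame(pixels, samples_per_pixel=4):
    # Every ray, including primary visibility, is solved per pixel per sample.
    return [sum(trace_full_path(p, s) for s in range(samples_per_pixel)) / samples_per_pixel
            for p in pixels]

print(hybrid_frame(range(4), {0: 0.2, 2: 0.3}))
print(path_traced_frame(range(4)))
```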



Once we have real GPU displacement in Eevee, I think I could do almost anything you want at 60 fps.
