Nanite-like rendering, end of GPU Memory limits?

I just thought about one thing: if Cycles microdisplacement could somehow be applied to ALL meshes at render time, per pixel/triangle, like in Unreal’s Nanite, would that basically make everything beyond 8 GB of GPU RAM obsolete, like… forever? OK, maybe volumetrics would still eat memory, but everything else?

Would I be right?
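For context, here’s roughly how Cycles’ existing microdisplacement (adaptive subdivision) is enabled today, one object at a time. A minimal bpy sketch, assuming current property names (they may differ between Blender versions):

```python
import bpy

scene = bpy.context.scene
scene.cycles.feature_set = 'EXPERIMENTAL'  # adaptive subdivision is still experimental
scene.cycles.dicing_rate = 1.0             # target edge size in pixels when dicing

obj = bpy.context.active_object
obj.cycles.use_adaptive_subdivision = True   # dice this object at render time
obj.modifiers.new("Subdivision", 'SUBSURF')  # tessellation happens per render,
                                             # scaled by size on screen
```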

Nanite is rasterisation, not ray tracing. The big difference is that a rasteriser at no point in time needs to keep all geometry in memory, so it can tessellate to practically infinite detail. Ray tracing in most cases needs to have all geometry in memory and cannot efficiently tessellate on demand.
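To make the difference concrete, here’s a toy Python sketch (no real engine API, the numbers are made up) of the two peak-memory profiles:

```python
def tessellate(detail: int) -> list:
    return [0] * detail  # stand-in for a patch's triangles

def rasterize(patch_details: list) -> int:
    peak = 0
    for d in patch_details:          # patches are streamed one at a time
        tris = tessellate(d)         # view-dependent detail, chosen per patch
        peak = max(peak, len(tris))  # draw, then discard
    return peak                      # peak memory ~ one patch

def ray_trace(patch_details: list) -> int:
    scene = []
    for d in patch_details:          # a BVH build needs everything resident,
        scene += tessellate(d)       # because any ray can hit any triangle
    return len(scene)                # peak memory ~ the whole scene

details = [1_000] * 10_000
print(rasterize(details))  # 1000      (one patch at a time)
print(ray_trace(details))  # 10000000  (all ten million triangles at once)
```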


But there’s real-time ray tracing in Unreal, so how does it work with Nanite? Does it treat an object made with Nanite as normal geometry that is simpler in shape, an approximation of the Nanite object?

Or what about Eevee, which rasterizes the 3D scene? It can be a hybrid renderer with ray-traced features. But could it have something similar to Nanite?

Could you live with the restrictions of Nanite? They’re spelled out in detail on their website, and include things such as supporting only opaque objects, with no transparency.

And yes, they are ray tracing only against the coarse mesh, not the detailed one. It’s all right there in their documentation.


Maybe I’m missing something, but I thought it was textures that blow through your VRAM. Has geometry ever caused an issue with VRAM limits?

Well, the amount of detail is tied to resolution, since pixel-sized triangles are the smallest triangles (using default settings), so going from 1080p to 4K will also allow 4× more triangles to be rendered. With future higher resolutions this might go even further…
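The arithmetic behind that 4×, as a quick sanity check:

```python
# Assumption: roughly one triangle per pixel at Nanite's default settings.
pixels_1080p = 1920 * 1080   # 2,073,600
pixels_4k    = 3840 * 2160   # 8,294,400
print(pixels_4k / pixels_1080p)  # 4.0 -> about 4x the on-screen triangle budget
```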

I would honestly hope next-gen GPUs will ship with 12–16 GB by default, and in theory that could be a pretty future-proof amount, it seems.

No. You are wrong. Chuk_Chunk is closer to the right track.

If you’re working in the hundreds of millions of polygons, then 8 GB of VRAM can get tight real quick.
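A rough back-of-envelope with assumed per-triangle costs (real renderers will differ) shows why:

```python
# Rough GPU memory estimate for ray tracing a large mesh. The sizes are
# assumptions: 12-byte positions, 4-byte indices, ~one 32-byte BVH node
# per two triangles, and no normals, UVs, or textures counted yet.
def raytrace_memory_gib(num_tris: int) -> float:
    verts = num_tris * 0.5        # a closed mesh has ~0.5 vertices per triangle
    total_bytes = (
        verts * 12                # vertex positions
        + num_tris * 3 * 4        # index buffer
        + (num_tris / 2) * 32     # BVH nodes (rough)
    )
    return total_bytes / 2**30

print(f"{raytrace_memory_gib(300_000_000):.1f} GiB")  # ~9.5 GiB for 300M triangles
```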


Afaik, ray tracing against the SDF representation happens only in software ray tracing mode. In hardware mode, they ray trace against the full meshes.

And yes, Nanite is currently a bit limited, especially in terms of the shaders you can use with it. But the great thing is that you can simply turn anything that works with Nanite into Nanite meshes, and you can still use non-Nanite meshes in the same scene/level. It’s not all or nothing.


Recent articles from tech sites (regarding the GPU shortage and related price hikes) actually suggest a couple of years where the peak of next-gen graphics stagnates, because few will even be able to afford to run the latest and greatest otherwise.

As of now, our best bet for the near term is actually the advances being made in integrated graphics (either Intel Iris Pro or the new iGPUs coming with Zen 4), Intel’s new Xe GPUs, or Vulkan leading to some major optimizations. It is not a great state of affairs, but pro-level graphics rendering in apps like Blender and Unreal may remain quite expensive even beyond next year (especially as recent Covid mutations threaten to tie up supply chains once again).

Now I know of Apple’s M1, but its GPU is no RTX 3080.


From their website:
"The following rendering features are not currently supported:
[…]

  • Raytracing against the fully detailed Nanite mesh
    • Ray Tracing features are supported but rays intersect the coarse representation (called a proxy mesh) instead of the fully detailed Nanite mesh"

Users never read documentation.


I thought the big takeaway with Nanite is the auto LOD setup.

    • Level of Detail (LOD) is automatically handled and no longer requires manual setup for individual mesh’s LODs

If Blender could provide a modifier that creates multiple LODs automatically, it should be able to use that for both the viewport and the Cycles render engine. Multiple LODs at lower resolution seem to take up about 30% additional storage memory. I don’t know the exact science behind using multiple lower-resolution meshes vs. just a single high-resolution mesh in memory, but I’m sure that if you are not that close to the camera, the lower-resolution mesh will win out in memory usage.
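That ~30% figure matches the geometric series you get if each LOD level holds a quarter of the triangles of the one above it, the same reasoning as texture mipmaps:

```python
# LOD chain storage: each level halves linear detail, so ~1/4 the triangles.
base = 1_000_000
chain = [base // 4**k for k in range(5)]  # LOD0..LOD4
print(chain)                  # [1000000, 250000, 62500, 15625, 3906]
print(sum(chain) / base - 1)  # ~0.33 -> about a third extra storage
```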

ZBrush has Decimation Master, which works pretty well, and 3ds Max has ProOptimizer. It seems Unreal 5 has a slightly more reliable method for creating LODs at multiple levels automatically. Or they are just creating a low-resolution mesh, baking a displacement map from the high-resolution mesh, then freezing it at multiple levels to avoid real-time processing hits and streaming in the correct LOD mesh. Honestly, I think this will eventually come to render engines. They are using familiar existing technology and auto-setting things up under a fancy marketing name. Not complaining; it’s a welcome next move to push others to the next level.
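For illustration, here’s a hypothetical distance-based LOD pick (not Unreal’s actual heuristic): step to a coarser level each time the object’s projected size on screen halves:

```python
import math

def pick_lod(num_lods: int, object_radius: float, distance: float,
             fov_deg: float = 60.0, screen_height_px: int = 1080) -> int:
    # approximate projected size of the object in pixels
    px = object_radius / (distance * math.tan(math.radians(fov_deg) / 2)) \
         * screen_height_px
    # each LOD step halves linear detail, so step down one level
    # every time the projected size halves
    lod = max(0, int(math.log2(max(screen_height_px / max(px, 1e-6), 1.0))))
    return min(lod, num_lods - 1)

print(pick_lod(5, object_radius=1.0, distance=2.0))   # near -> 0 (full detail)
print(pick_lod(5, object_radius=1.0, distance=50.0))  # far  -> 4 (coarsest)
```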

I think you mixed up Lumen and Nanite. Nanite cannot cast ray-traced shadows from the Nanite mesh, and this is mostly visible as self-shadowing artifacts.

It’s not a project-wide setting; you can use Nanite on any object that doesn’t rely on currently unsupported features. It is also a simple checkbox in the mesh settings, so there is no setup time either.

Generally you don’t need a lot of polygons on objects that use opacity (something like a densely perforated curtain, etc.), and in other cases, like tree leaves, you can use actual geometry instead of an opacity map at no performance cost!

I actually tried using Nanite on a heavy tree model, even though that is a really bad use case, since it means lots of tiny unconnected meshes (the leaves) and lots of overlap when you start packing trees together. It still provided a massive boost… This is not a smart use for game development, for sure, but for isolated scenes and archviz purposes you can get away with a lot more.

So the benefits heavily outweigh the limitations, especially for people who don’t need to optimize their projects for low-end hardware but instead want to use heavy CAD data or scanned meshes for interactive presentations and rendering.

However, this might not be possible for Cycles, but rather maybe for Eevee in the future. Even though Eevee is rasterizing, with the addition of ray tracing and tech like Lumen and Nanite, it would seem that path tracing will soon be hitting diminishing returns.

The cases where you’d switch from something like Unreal to offline path tracing are slowly disappearing. When it comes to real-time feedback, you can see that people will gladly sacrifice a bit of quality and precision for the ability to work and render in real time. There are even people trying to use Eevee for interior renderings, when Eevee doesn’t come close to the features of Unreal’s rendering.

So I’m pretty sure that having ray tracing, real-time GI, and a system like Nanite in Eevee, even with some limitations, would push Cycles into being used only in corner cases where Eevee fails. And this would also bring over a lot of people from Unreal who don’t need all of the features of a game engine but want the rendering.

For those interested, slides from the SIGGRAPH 2021 course:
“A Deep Dive into Nanite Virtualized Geometry”


What Nanite does is almost exactly what production rendering did before path tracing. RenderMan before RIS was pretty much exactly that: virtually unlimited geometry resolution with REYES, extremely fast motion blur, view-dependent tessellation and ray tracing against lower-resolution proxies, GI using fast approximations and interpolation. Path tracing was slower. Path tracing didn’t do micropolygons nearly as well. Yet everyone switched to path tracing, because it was better on the bottom line. Render time isn’t everything.


I think brute-forcing through large data sets will always triumph over complex solutions that demand either a lot of development time or a lot of artist hours to optimize (often both), because paying a human to do something is always more expensive (and less effective) than having a computer (or several) deal with it.
Computers don’t need sleep and downtime.
If I had to do very complex scenes and had to choose between a UE5 and a Clarisse workflow, I would choose the latter: dealing with a path tracer that scales properly is a lot less stressful than dealing with a real-time engine that can break under certain circumstances and needs custom solutions for specific use cases and lots of workarounds (not to mention the difference in quality).


You seem to be right, but at the same time, they do such a good job of it that I can’t really tell the Nanite mesh in the reflections (even mirror reflections) is just a proxy.

There are some differences, but those are more a consequence of there being no GI in the secondary reflected bounces than of a simplified or wrong-looking mesh.

Also, both Lumen and Nanite have limitations, but Epic has stressed many times, in fact almost every time they have talked about either of them, that this is an early access alpha and that they are working on removing or reducing the limitations. So it doesn’t make much sense to judge their viability at this point in time.

Some sort of ‘irradiance cache’ for normals perhaps?

I wonder if Unreal’s code is making use of all the core types in RTX cards here (including the CUDA and Tensor cores). I guess we won’t know how they did this until the engine is ready for prime time.