Unreal Engine 5 - New Game Development Features and Workflows

Not point clouds. It’s called signed distance fields and I don’t think it inspired Nanite (I saw one of the developers mention a piece of research as inspiration that had nothing to do with SDF, but I could be wrong). Unreal has had that tech for shadows for a number of years, and Lumen now uses it for global illumination for static meshes as well (dynamic meshes are apparently lit with SSGI, like that Blender add-on).

EDIT: Some people are saying that Dreams does indeed “rasterize” its SDFs to point clouds… however, again, I saw a Nanite developer mention triangles approaching 1 pixel in size (all on Twitter), but maybe it has point clouds somewhere as well, then.

EDIT: Ok, holy shit… in Dreams, SDF is just the storage medium, and near geometry is rendered with point clouds and far geometry with voxels…
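For anyone who hasn’t met the term: a signed distance field is just a function (often baked into a volume texture) that returns, for any point in space, the distance to the nearest surface, negative inside. Here’s a minimal sketch of the idea in C++, purely illustrative, not Epic’s or Media Molecule’s actual implementation:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// A signed distance function for a sphere: negative inside,
// zero on the surface, positive outside. Engines store fields
// like this in 3D textures and ray-march them for shadows/GI.
float SphereSDF(const Vec3& p, const Vec3& center, float radius)
{
    float dx = p.x - center.x;
    float dy = p.y - center.y;
    float dz = p.z - center.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) - radius;
}

// Sphere tracing: step along a ray by the distance the SDF
// reports. The step is always safe, because by definition
// nothing can be closer than that distance.
bool RayHitsSurface(Vec3 origin, const Vec3& dir, const Vec3& center,
                    float radius, int maxSteps = 64, float eps = 1e-3f)
{
    float t = 0.0f;
    for (int i = 0; i < maxSteps; ++i)
    {
        Vec3 p{ origin.x + dir.x * t,
                origin.y + dir.y * t,
                origin.z + dir.z * t };
        float d = SphereSDF(p, center, radius);
        if (d < eps) return true; // close enough: we hit the surface
        t += d;                   // safe step: d is the min distance
    }
    return false;
}
```

That “safe step” trick is what makes SDFs so cheap to trace for soft shadows and rough GI.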

I was just saying that Dreams on PS4 inspired Nanite’s main developer, not that he copied the same tech; he was working on this kind of feature a long time before Dreams existed.

Back on track.
Nanite requires powerful hardware, but its cost stays constant no matter how many polygons you throw at it.

I was able to load a 10-million-polygon photoscan of Ziggy.
Using UE5 Nanite meshes, I was able to load 1,000 instances of it at 60 fps before I got bored. That’s 10 billion polygons, and it didn’t even blink. It could have handled a lot more than this.

https://twitter.com/IonizedGames/status/1397636481913610241

Can’t imagine how well it will run once it gets more optimized and mature after release and a few major version updates.

I only have 32 GB of system RAM; is that enough to run UE5?

https://twitter.com/EpicShaders/status/1398112901664215040

Unreal 5 runs fine on 8 or 16 GB depending on what you want to do.
For the big demo, I don’t know; read the demo description to find the minimum requirements.


Nice lighting.
Do you have a wireframe view?

The wireframe view is broken in UE5 on my end.

Nanite is yet another GPU-driven renderer using the two-pass culling approach we presented at SIGGRAPH 2015. Media Molecule Dreams is also using that same approach (independent research).

This approach provides super precise culling without any artifacts from using stale last frame data. And there’s no need for artists to tag occluders or create low poly occluders. It’s 100% pixel precise with no added cost.

https://mobile.twitter.com/SebAaltonen/status/1399350400822743047
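For those who haven’t seen the SIGGRAPH material, the two-pass idea is roughly the following. This is a CPU-side sketch of the control flow only; in Nanite and Dreams it runs on the GPU against a hierarchical Z-buffer (HZB), and all the names here are made up for illustration:

```cpp
#include <vector>

struct Object { bool visibleLastFrame = false; };

// Hypothetical stand-ins for what are really GPU passes.
void Draw(Object&) { /* submit the draw call (stub) */ }
void BuildDepthPyramid() { /* build HZB mips from the depth buffer (stub) */ }
bool OccludedByDepthPyramid(const Object&) { return false; /* depth test (stub) */ }

void RenderFrame(std::vector<Object>& objects)
{
    // Pass 1: re-draw last frame's visible set, usually a very
    // good guess for this frame, and it fills the depth buffer.
    for (Object& obj : objects)
        if (obj.visibleLastFrame)
            Draw(obj);

    // Build a hierarchical Z pyramid from the depth rendered so far.
    BuildDepthPyramid();

    // Pass 2: test every object against the pyramid. Anything
    // visible that pass 1 skipped gets drawn now, so the final
    // frame is exact: no popping from one-frame-stale visibility.
    for (Object& obj : objects)
    {
        const bool visible = !OccludedByDepthPyramid(obj);
        if (visible && !obj.visibleLastFrame)
            Draw(obj);
        obj.visibleLastFrame = visible; // seed next frame's pass 1
    }
}
```

The key point is that last frame’s visibility is only used as a guess for what to draw first; the pass 2 test runs against this frame’s actual depth, which is why there are no stale-data artifacts.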


Great overview and testing


Looks good, but too many duplicated meshes.

Some more in-depth info about Nanite.


Thanks for the video.

I asked earlier in this thread whether they already have tools to let artists choose what the highest level of detail should be for the final game. This is mentioned at 26:41 (and also earlier). Those tools are not yet part of the early access but will be available later on.
This is going to be incredible, in my opinion. Artists are going to be creating assets without worrying about performance or disk space when it comes to texture sizes and triangle counts. The decision of how much detail is needed can be postponed and won’t require reworking the assets. This is likely going to be a huge improvement for pretty much any game.


Yeah, having the ability to just freely create the best art possible and scale it back later according to performance budget and platform will be a huge improvement to the content creation pipeline.

Imagine the possibilities for doing a remaster (if that will even be needed with this tech) of a game made with that pipeline: they could come back years later, change a few parameters, and all assets would be bumped to a higher fidelity.


It must be tested in a real production environment, but you’re right, it will be a kind of revolution.
Anyway, it’s at the alpha stage; let’s see what we get in the next step.


https://youtu.be/TMorJX3Nj6U?t=3481

  • Still has UVs and tiling detail maps
  • Only replaces meshes, not textures, not materials, not tools

Nanite meshes will still use UVs and materials; vertex color is not supported, and vertex-based materials are perhaps not possible, and in any case no better than textures.

The gains from Nanite are:

  • no object normal map bakes
  • skipping LODs
  • displaying movie-quality static assets

Let’s wait for the release and check whether the quality and performance gains beat similar visuals made with 4K or 8K textures and baked normal maps.

Unless Epic has support for UDIM textures up their sleeve, expecting people to make 8K or even 16K textures (to get color detail as dense as the polygons) may be a hard sell, considering it brings back concerns about available texture space and how UV islands pack together.

On top of that, it will further increase the difficulty of iterating on an asset after the initial pass is done. What I do not like seeing is the industry being stubborn about the old, rigid ways of locking in a step as ‘done’ before the next one begins. By now, we should have the hardware for next-gen flexible workflows that allow you to go back and forth as needed without causing too much destruction (case in point, the Dyntopo rewrite project will reportedly preserve data while maintaining the creative advantage).

I know it’s not related to UE5, but thanks for posting this. This led me down the Dreams rabbit hole, and now my mind is blown by it. It’s such an amazing piece of software that I’m flabbergasted more people don’t know or care about it.

UE supports UDIMs, and there is also virtual texture streaming, so not only can you use UDIMs, you can combine them with very high-res textures. Unreal is definitely becoming the offline rendering killer.
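For reference, UDIM support in Unreal rides on the virtual texture system; if I recall correctly, you enable it in Project Settings or straight in DefaultEngine.ini, and tile sets following the standard name.1001.png naming convention then import as a single UDIM virtual texture (double-check the docs, as the exact settings may shift between versions):

```ini
; DefaultEngine.ini: enable virtual texture support so UDIM
; tile sets can import as one streamed virtual texture.
[/Script/Engine.RendererSettings]
r.VirtualTextures=True
```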

They are also working on modeling tools, and although those are crap now, one day you might not even need an additional DCC to make stuff in Unreal.

If you go to their YouTube channel, there is a recent video about Nanite in which they go through some numbers and comparisons.

They did mention that high-res geometry should have a smaller memory and disk footprint than high-res normal maps. This applies to the cooked/deployed game; the project files still have to contain the source material, which can easily be 5x bigger than the final product.

As for performance: even if it’s the same, or a drop of 3-5 fps, it is still worth it because you don’t have to build LODs and bake normals!

That said, for games at least, you still need to consider disk space. Even with good compression techniques, it might not be a good idea to have 5 million triangles on a small rock somewhere, the same way it wouldn’t make sense to put an 8K texture on it.


I have a question regarding Nanite:
Nanite always streams/crunches your high-detail meshes into a polygon soup whose density is consistent and tied to pixel density on screen.
If there were a polygon for every pixel, that would be ~8.3 million polygons at 4K resolution (3840 × 2160).
Now my question is: can you, as a developer, DECREASE this maximum polygon resolution on screen, disregarding the polygon/pixel density?
Like, can you cut it down to half or even less?
The reason I am asking: if you don’t want to pursue photorealism but a stylized art style, I see huge benefits in using Nanite in a more efficient and economical way.
If you don’t need that density, you could basically double the performance while cutting down on disk space and modelling complexity.
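As far as I can tell from poking at the early access build, yes: Nanite’s refinement target is exposed as a console variable, so you can tell it to stop at a coarser density than roughly one triangle per pixel. Hedging here because the name and default may well change before release, but currently something like this works, either typed in the console (as `r.Nanite.MaxPixelsPerEdge 4`) or set in DefaultEngine.ini:

```ini
; DefaultEngine.ini: raise the screen-space error target so Nanite
; stops refining earlier. Default is 1 (about one triangle per pixel);
; higher values mean coarser triangles on screen.
[SystemSettings]
r.Nanite.MaxPixelsPerEdge=4
```

Whether that translates into the savings you’d hope for is another question; profiling on target hardware is the only real answer.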
