Unreal 5

This is a surprising and amazing trailer of their next-gen graphics, showing the new Lumen and Nanite features.

It lets people use million-polygon meshes directly from ZBrush in game, without needing to make LODs or normal maps, running with fully real-time global illumination.

Some details from the official site:

The PS5’s next-gen bandwidth and architecture should help drive future PC motherboard design.

Some feature details:

Thanks! Not vector displacement or tessellation. That was my plan for this problem for many years but it’s not general enough. Stay tuned for more info.

https://twitter.com/BrianKaris/status/1260660677036957697

Can’t share technical details at this time, but this tech is meant to ship games not tech demos, so download sizes are also something we care very much about.

https://twitter.com/gwihlidal/status/1260597164318711808

To me it looks more like displacement maps for micro-polygon details.

." Jerome Platteaux, Epic’s special projects art director, told Digital Foundry. He says that each asset has 8K texture for base colour, another 8K texture for metalness/roughness and a final 8K texture for the normal map. But this isn’t a traditional normal map used to approximate higher detail, but rather a tiling texture for surface details.

“For example, the statue of the warrior that you can see in the temple is made of eight pieces (head, torso, arms, legs, etc). Each piece has a set of three textures (base colour, metalness/roughness, and normal maps for tiny scratches). So, we end up with eight sets of 8K textures, for a total of 24 8K textures for one statue alone,” he adds.
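
As a rough back-of-envelope (my own assumed numbers, not figures from the article), here is why that texture budget pushes you towards streaming/virtual texturing. I’m assuming BC7-class compression at about 1 byte per texel and a full mip chain; the real project certainly streams rather than keeping everything resident:

```cpp
#include <cstdio>

// Rough estimate (my assumptions, not from the article): 8 statue pieces x 3
// maps each = 24 textures at 8192x8192. With BC7-class block compression
// (~1 byte per texel) plus ~33% for the mip chain, this shows why you would
// never keep it all resident and would stream/virtualise instead.
int main()
{
    const long long pieces        = 8;
    const long long mapsPerPiece  = 3;                  // base colour, metal/rough, normal
    const long long texels        = 8192LL * 8192LL;    // one 8K texture
    const double    bytesPerTexel = 1.0;                // assumed BC7-class compression
    const double    mipOverhead   = 4.0 / 3.0;          // full mip chain ~ +33%

    const double totalBytes =
        pieces * mapsPerPiece * texels * bytesPerTexel * mipOverhead;
    std::printf("%lld textures, ~%.1f GiB compressed if fully resident\n",
                pieces * mapsPerPiece, totalBytes / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```

(That works out to roughly 2 GiB for a single statue, before any streaming.)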

And virtual textures for shadows:

“Really, the core method here, and the reason there is such a jump in shadow fidelity, is virtual shadow maps. This is basically virtual textures but for shadow maps. Nanite enables a number of things we simply couldn’t do before, such as rendering into virtualised shadow maps very efficiently. We pick the resolution of the virtual shadow map for each pixel such that the texels are pixel-sized, so roughly one texel per pixel, and thus razor sharp shadows. This effectively gives us 16K shadow maps for every light in the demo where previously we’d use maybe 2K at most. High resolution is great, but we want physically plausible soft shadows, so we extended some of our previous work on denoising ray-traced shadows to filter shadow map shadows and give us those nice penumbras.”
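
If I read the “one texel per pixel” idea right, it boils down to choosing a shadow map resolution (or mip/clip level) per shaded pixel from that pixel’s world-space footprint. A minimal sketch of that selection, with a fixed 16K virtual resolution and all names made up by me (this is not Epic’s code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical sketch: pick a virtual shadow map mip level so that one shadow
// texel covers roughly the same world-space size as the screen pixel being
// shaded ("one texel per pixel").
//
// Assumptions (mine, for illustration only):
//  - the virtual shadow map is 16384 texels wide at mip 0, halving per mip,
//  - worldPerScreenPixel is the world-space width of the shaded pixel,
//    estimated from its depth and the camera projection,
//  - lightFrustumWidth is the world-space width the shadow map covers.
int ChooseShadowMipLevel(float worldPerScreenPixel, float lightFrustumWidth)
{
    const int   kVirtualResolution = 16384;   // "16K shadow maps for every light"
    const float texelWorldSizeMip0 = lightFrustumWidth / kVirtualResolution;

    // How many times can we halve the resolution before one texel becomes
    // larger than the pixel's world-space footprint?
    const float ratio = worldPerScreenPixel / texelWorldSizeMip0;
    int mip = static_cast<int>(std::floor(std::log2(std::max(ratio, 1.0f))));

    const int kMaxMip = static_cast<int>(std::log2(kVirtualResolution)); // down to 1x1
    return std::clamp(mip, 0, kMaxMip);
}

int main()
{
    // A nearby pixel (small world footprint) wants a fine mip;
    // a distant pixel (large footprint) can use a coarse one.
    std::printf("near pixel -> mip %d\n", ChooseShadowMipLevel(0.002f, 30.0f));
    std::printf("far  pixel -> mip %d\n", ChooseShadowMipLevel(0.25f,  30.0f));
    return 0;
}
```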

Some compute shaders, perhaps:

“The vast majority of triangles are software rasterised using hyper-optimised compute shaders specifically designed for the advantages we can exploit,” explains Brian Karis. “As a result, we’ve been able to leave hardware rasterisers in the dust at this specific task. Software rasterisation is a core component of Nanite that allows it to achieve what it does. We can’t beat hardware rasterisers in all cases though so we’ll use hardware when we’ve determined it’s the faster path. On PlayStation 5 we use primitive shaders for that path which is considerably faster than using the old pipeline we had before with vertex shaders.”
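
For the curious, here is a tiny sketch of what software rasterising pixel-sized triangles means in practice. The size cutoff and the simple edge-function test are my own illustration (and plain C++ rather than a compute shader); this is not Nanite’s actual code or heuristics:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// A screen-space triangle with a single (flat) depth, for simplicity.
// A real rasteriser would interpolate depth and attributes per pixel.
struct ScreenTri { float x0, y0, x1, y1, x2, y2; float depth; };

// Signed edge function: its sign tells which side of edge a->b the point p is on.
static float EdgeFn(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Walk the triangle's screen bounding box and depth-test each covered pixel.
// Returns true if the triangle was handled by this "software" path; larger
// triangles are left for the hardware rasteriser (arbitrary 16-pixel cutoff).
bool RasterizeSmallTriangle(const ScreenTri& t,
                            std::vector<float>& depthBuffer,
                            int width, int height)
{
    const int minX = std::max(0,          (int)std::floor(std::min({t.x0, t.x1, t.x2})));
    const int maxX = std::min(width  - 1, (int)std::ceil (std::max({t.x0, t.x1, t.x2})));
    const int minY = std::max(0,          (int)std::floor(std::min({t.y0, t.y1, t.y2})));
    const int maxY = std::min(height - 1, (int)std::ceil (std::max({t.y0, t.y1, t.y2})));

    const int kMaxSoftwareSize = 16;            // made-up threshold
    if (maxX - minX > kMaxSoftwareSize || maxY - minY > kMaxSoftwareSize)
        return false;                           // let the hardware path take it

    const float area = EdgeFn(t.x0, t.y0, t.x1, t.y1, t.x2, t.y2);
    if (area == 0.0f)
        return true;                            // degenerate triangle, nothing to fill
    const float sign = (area > 0.0f) ? 1.0f : -1.0f;   // winding-independent inside test

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
        {
            const float px = x + 0.5f, py = y + 0.5f;
            const bool inside =
                sign * EdgeFn(t.x0, t.y0, t.x1, t.y1, px, py) >= 0.0f &&
                sign * EdgeFn(t.x1, t.y1, t.x2, t.y2, px, py) >= 0.0f &&
                sign * EdgeFn(t.x2, t.y2, t.x0, t.y0, px, py) >= 0.0f;
            if (inside && t.depth < depthBuffer[y * width + x])
                depthBuffer[y * width + x] = t.depth;   // simple depth test + write
        }
    return true;
}

int main()
{
    const int W = 64, H = 64;
    std::vector<float> depth(W * H, 1.0f);      // far plane = 1.0

    const ScreenTri tri{10.0f, 10.0f, 13.0f, 10.0f, 10.0f, 13.0f, 0.5f};
    std::printf("software path used: %s\n",
                RasterizeSmallTriangle(tri, depth, W, H) ? "yes" : "no");
    return 0;
}
```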

Another big change is also the new licensing.
And the multiplayer services, now totally free for any Unreal user.

9 Likes

Crazy, isn’t it? It’s going to change a lot of things when it releases!

Blender devs need to hurry up on a few key things for Blender, like high-poly mesh viewing and whatnot, before this gets here though.

Need more info on Lumen also.

4 Likes

I just watched the video, read the blog post and I can’t believe my eyes :exploding_head:

Utterly amazing. The games made for this will be… well unreal.

3 Likes

and Epic :rofl:

6 Likes

I’m guessing you will need the future RTX 3080 GPU or above to run that scene at the settings that make it look best, unless Epic finds a way to scale Lumen and Nanite down to mid-range or even lower-end hardware.

It looks like Unity Technologies will be in big trouble at that point, since Epic tends to get their tech to a fully polished state before even thinking of moving on. The only saving grace for them is if UE5 is discovered to be as bug-ridden as their current engine once users start testing every corner of the code.

Both have their advantages.
Unreal has always been state of the art with C++ speed, while Unity has caught up very well on graphics if you take a look at their latest tech demo, and it stays easy to use with easy C#.

A 3D engine does not make a good game for you.
Even with the latest Unity features, I doubt many indies would be able to make a complete game that reaches the level of Unity’s latest graphics demo :joy:

With the PS5 it’s a lot more about the very high transfer rate of the special high-speed SSD and the high-bandwidth communication across the motherboard.
There is also a lot of work in the 3D engine: Epic designed Nanite and Lumen very specifically for these new high-volume, high-speed transfer standards.

It should lead the next PC motherboards to be designed completely differently; it’s not only a matter of high-end graphics cards.

Also, we will get new next-gen features like new progressive real-time LOD systems, I guess.
And the virtualized micropolygon geometry is also a specific 3D engine feature tailored for such high-bandwidth technology; the high-density, million-polygon rendering is a really huge step forward.

So you get a glimpse of what next-gen triple-A games will be.

Holy crap! If it’s as good as in the demo it will make work related to heavy CAD models so much easier.

Isn’t this what Euclideon were trying to sell with their ‘unlimited detail’? :smiley:

This puts many rendering engines in an awkward position; of course VFX companies will choose this technology… the end of an era.

1 Like

Unreal and Unity have already been chosen by some entertainment animation studios, because they don’t have to wait hours to render a frame and they still get enough quality for a TV audience.
But that does not replace high-end rendering software for movies or adverts, for example, which can do more complex effects that aren’t doable in real time.

We will be able to drop ZBrush models or photogrammetry models directly into the editor to make scenes; that will be a big game changer for game design.
But the memory cost of textures will remain, as well as the tedious UV mapping work :joy:
(Someone should make a new AI automatic UV mapping tool with clever, great UVs, better than angle-based automatic UV mapping.)

But the demo remains impressive, displaying colored triangles as small as a pixel:
UE5triangles

1 Like

It looks like Unity Technologies will be in big trouble at that point since Epic tends to get their tech.

They’re in trouble, but not because of any outside force. As long as they keep deprecating tentpole features while pushing incomplete “new” replacements on their customers, they’ll die of their own heat death.

6 Likes

The best looking Unity game I can think of is Escape from Tarkov. I play it. And on any rig, no matter how powerful, there are constant graphical hitches. The dev team is brilliant, and yet they can’t stop the hitching.

Name another amazing looking Unity game that got a lot of attention? I don’t think Unity is at the same level as UE. They haven’t even figured out Tarkov level games. I’ve played dozens of UE games that look better than Tarkov, and run like butter.

I don’t think you’re going to need a graphics card or rig beyond consumer level at this very moment. I have a rig based around an RTX 2070, and I bet you dollars to donuts my PC is still going to be more powerful than a PS5. I use SSDs. I have RTX tech. I have plenty of RAM. I’m only pushing 1080p at 144 Hz. I think gamers that are trying to push 4K at 200+ Hz will have an issue… but not me.

If this tech demo is running in real time on PS5 tech, without graphical hitching and looking like at least 60 fps… well, I’m guessing the higher-end PCs of now will run these kinds of games. It’s engineering genius in software, not just hardware.

3 Likes

‘The Mandalorian’… cough… :wink:

And the move to real-time rendering has already been going on for a while. Lots of companies are already looking into it. But this looks impressive; let’s see how it ends up in mid-2021 for the regular Unreal user.

1 Like

I’m already aware that some have already chosen real time, like Disney for Star Wars, but I really think most VFX can be made with this new technology, and soon many will probably drop their path tracers…




For me it’s no longer just for games; there are only a few things it can’t do, and they already have many tricks for many problems in UE4, so soon that gap will be filled.

This is absolutely awesome. I love his comments around the statue halfway through: no baking normal maps, no authored LODs. F***king finally. And the final sequence demonstrating “fast travel” through that environment… wow.

I wonder if they’ll finish their OpenSubdiv support too? I think right now it’s disabled by default, but it allows you to just load in your cage base object and it will subdivide to the final asset for use in game.

Of course, DCCs won’t necessarily be able to do this. They need to keep a lot of additional things around in memory and elsewhere to support editing, but I think UE5 will continue the trend of doing more, if not all, of the scene layout, sequencing, and shots in the engine rather than using DCCs for that.

Additionally, DCCs often have unfortunate hardware support criteria, Blender’s being nearly the worst, supporting the lowest common denominator out there. Maybe this will change now :slight_smile:

But you’ll still have texture work and UV mapping to do, because there are still materials and textures.
Sure, real-time progressive LOD and no normal maps will be great.
I guess some studios will keep using normal maps or LODs for performance in some areas, but at least the choice to go full polygons will be available.

This will change with next-gen games, but only the latest consoles will display such high-detail content at first. PCs will get communication and bandwidth hardware redesigns inspired by the latest consoles, but a lot later.

Sure, no LODs and no normal maps will more and more become the new standard for next gen, and will make real-time content creation so much simpler on what will be the next-gen PC motherboards and components.

I think with new hardware other rendering techniques could see more use.
For example point cloud rendering, if the storage size could be lowered, with the gaps in the point cloud filled using AI, similar to what DLSS 2 is able to do.

What do you think about CAD modelling in the new UE5? I think it will be much more popular than it is now.

Oh my heavens, this is unbelievable. In a weird way it kind of sucks, there’s definitely no ignoring it though.

Couldn’t vertex painting be used instead? I didn’t read the tech details, but now, if the polycount is high enough… Sure, I don’t see the Substance workflow disappearing so easily…