Hold on to your papers!
I had a look at the paper. As far as I can see, there are no denoising improvements coming from this paper; it is all about improved sampling. Thanks to that, the denoiser's job gets simpler.
It is quite impressive!
Kinda makes Unreal 5 look like PS1 now…
To be sure, all of it will be yours for the low price of at least 3000 dollars for the GPUs alone (to maximize smoothness and detail, I imagine you will want a dual RTX 4090 configuration). Don’t forget the cost of upgrading your house to deliver the power that will be needed.
Though chances are an engine with this tech, combined with VR, could lead to titles the average person can play in an arcade (essentially reviving that business model).
Do you think the most reasonable choice is to switch entirely to a new GPU architecture for good?
Look at what happened with CPUs over the last 5 years, for example: within the current generation of technology, and without any disruptive new designs, it was possible to squeeze the best possible performance out of the restrictions and capabilities of the architecture.
Imagine how cool it would be to get the ARM equivalent of GPUs soon…
The huge difference is that this is only a research achievement. They found algorithmic improvements which should help with performance, no matter what kind of hardware is being used.
PS1 graphics existed in actual games.
When it comes to UE5 graphics, many people think of the graphics of the technical demos. It is highly unlikely that this quality is going to be achieved in actual games in the coming years.
This is a research project that improves a technique which is not yet established in games. In fact, the earlier versions of it are so slow that most people I have seen who could run it somewhat smoothly still turn it off.
Probably not. People play Half-Life: Alyx in arcades, and it is not because of the graphics. It is VR together with physics. It is more immersive when everything is working. Those graphics tech demos often show a very static world.
In my opinion, both the hardware and the game engine technology for excellent quality graphics have existed for a long time.
The major improvements needed to do UE5 tech demo things are:
- HDR background for lighting and reflections
It requires a camera, software to stitch the background photos and merge them into an HDR image, and path tracing/radiosity to calculate lightmaps (a minimal sketch of the HDR merge step follows after this list).
- Real scanned objects
Photogrammetry, laser scanning… You need tools to do that.
Some of the techniques are very old. There is a lag before the knowledge spreads and the technology becomes usable in a cheap workflow.
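Just to illustrate the HDR merge step mentioned in the first list item, here is a minimal sketch using OpenCV's Debevec calibration/merge classes. The file names and exposure times are hypothetical placeholders; a real backdrop capture would involve many more brackets plus panorama stitching before the result is usable as a lighting environment.

```python
# Minimal sketch (assumed file names and exposure times): merge bracketed photos
# of a backdrop into a single HDR image usable as an environment/lighting map.
import cv2
import numpy as np

# Bracketed shots of the same background at different shutter speeds (hypothetical files).
files = ["backdrop_1-250.jpg", "backdrop_1-60.jpg", "backdrop_1-15.jpg"]
exposure_times = np.array([1 / 250, 1 / 60, 1 / 15], dtype=np.float32)

images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge the exposures into an HDR radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, exposure_times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times, response)

# Radiance .hdr output (supported by typical OpenCV builds) can be loaded
# as a world/environment texture in a renderer.
cv2.imwrite("backdrop.hdr", hdr)
```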
This very famous “UE5 subway demo” (as TheCherno reviewed in his video) was running at 5 FPS in real time, which is good in terms of immediate-feedback rendering, but a far cry from an actual game.
It's also because of the rarity of VR headsets; that kinda makes people want to try it.
Only 29% of gamers have a VR system.
This number may seem small in comparison to other gadgets owned by gamers, but this percentage represents approximately 55 million people.
https://findly.in/virtual-reality-statistics/
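As a rough back-of-the-envelope check, using only the two figures quoted above, those numbers imply a surveyed gamer population of roughly 190 million:

```python
# Back-of-the-envelope check of the figures from the linked statistic.
vr_owners = 55_000_000  # ~55 million people with a VR system
vr_share = 0.29         # 29% of gamers

implied_gamers = vr_owners / vr_share
print(f"Implied gamer population in that survey: ~{implied_gamers / 1e6:.0f} million")  # ~190 million
```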
GPUs already have their own architectures. That’s what “RDNA 1/2/…”, “Polaris”, “Terascale”, “Ampere”, “Turing”, and so on are. AMD have already promised a further >50% performance-per-watt uplift for RDNA3 - making it a cumulative >125% uplift from the original RDNA, since IIRC they already did 50% for RDNA2? What would be more beneficial for this is if their and Nvidia’s raytracing got substantially improved - and I’m sure it will be. Currently Nvidia’s RT gives 2x the performance for what I believe is similar power draw in Cycles, never mind other renderers like Octane. And then there are those tensor cores that make AI denoising much faster…
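To make the cumulative-uplift arithmetic explicit (these are AMD's claimed and promised percentages, not measurements):

```python
# The two generational claims compound multiplicatively (marketing claims, not measurements).
rdna2_uplift = 1.50  # claimed +50% perf/watt over RDNA 1
rdna3_uplift = 1.50  # promised >+50% perf/watt over RDNA 2

cumulative = rdna2_uplift * rdna3_uplift  # 2.25x the original RDNA
print(f"Cumulative uplift vs RDNA 1: +{(cumulative - 1) * 100:.0f}%")  # +125%
```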
If you’re comparing it to how Apple’s ARM CPUs stack up against regular CPUs, then I would definitely not want a GPU with questionable support for many programs, surrounded by misinformation pretending it’s faster across the board when in reality its main upside is great laptop battery life. It is not always faster even for laptops, it is possibly a waste for desktops, and it is tied to a company that would have me replace my whole overpriced PC if my GPU got too hot for a while inside their box that is intentionally designed to run hot.
On Opendata, the M1 Ultra is 3% faster than the 12900K (1% if we include 3.1.0), and on Passmark, which you took a picture of, it’s 11% slower. In STP, the M1 Ultra is 8% slower than the 12900K. That graph Apple made seems like pretty clear misinformation to me - where are they getting 40% faster performance vs the 12900K from? Hell, I don’t think either of those I linked used DDR5 anyway! The ol’ original M1 was faster than some competing laptop CPUs in single-threaded performance, but slower in multi-threaded. Maybe Passmark is too synthetic, but Cinebench and Cycles are pretty on-point benchmarks for a Blender forum.
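For clarity on how those “X% faster/slower” figures are derived from raw scores, here is the simple ratio involved; the scores below are hypothetical placeholders, not the actual Opendata or Passmark numbers:

```python
# How a "% faster / % slower" comparison is derived from raw benchmark scores.
# The scores below are hypothetical placeholders, not real Opendata or Passmark results.
def relative_speed(score_a: float, score_b: float) -> str:
    """Report how score_a compares to score_b (higher score = faster)."""
    delta = (score_a / score_b - 1) * 100
    return f"{abs(delta):.0f}% {'faster' if delta >= 0 else 'slower'}"

m1_ultra_score = 103.0   # hypothetical
i9_12900k_score = 100.0  # hypothetical

print("M1 Ultra vs 12900K:", relative_speed(m1_ultra_score, i9_12900k_score))  # "3% faster"
```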