Nvidia marketing strikes again? DLSS might be a detail killer

Nvidia has released what was supposed to be a revolutionary upscaling technology, DLSS, which uses machine learning to produce the best possible upscaled image. But does the final result in a fully dynamic, uncontrolled environment look as good as Nvidia’s carefully prepared demos?
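
For context on what kind of thing is being compared here: DLSS itself is proprietary, but the rough idea behind any learned upscaler is a network trained to map low-resolution frames to higher-resolution ones. A minimal, purely illustrative sketch in PyTorch (this is not Nvidia’s implementation, just the general shape of the technique):

```python
# Minimal sketch of a learned upscaler (NOT Nvidia's DLSS; just the general idea).
# A small CNN maps a low-resolution frame to a 2x higher-resolution one.
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            # Produce scale^2 * 3 channels, then rearrange them into a larger image.
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, low_res):
        return self.body(low_res)

model = ToyUpscaler()
low_res = torch.rand(1, 3, 540, 960)   # e.g. a 960x540 frame
high_res = model(low_res)              # -> shape (1, 3, 1080, 1920)
print(high_res.shape)
```

The quality of such a network depends entirely on how well the training data covers what the game will actually throw at it, which is exactly where the comparisons get interesting.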

In the comparisons, certain spots do seem to indicate the tech works as advertised, but a number of areas packed with texture or geometry detail end up blurred or even smeared. This could also serve as a warning to those who think machine learning is going to bring about the end of general-purpose smart algorithms built on a statistical approach: it is another sign that “intelligent” machines can be rather naive in their thinking when something does not closely match the training data.
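
The training-data mismatch point is easy to demonstrate with a toy example: a model fitted on one range of inputs can look fine there and fall apart the moment you hand it inputs it never saw. A hypothetical sketch (nothing to do with DLSS specifically):

```python
# Toy illustration of the "training data mismatch" point: a model that looks
# great on data similar to what it was trained on can fall apart on inputs
# that do not resemble the training set.
import numpy as np

rng = np.random.default_rng(0)

# Train on x in [0, 3]
x_train = rng.uniform(0, 3, 200)
y_train = np.sin(x_train)
coeffs = np.polyfit(x_train, y_train, deg=6)   # fit a degree-6 polynomial

# Evaluate in-distribution vs. out-of-distribution
x_in = np.linspace(0, 3, 100)
x_out = np.linspace(5, 8, 100)
err_in = np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)).mean()
err_out = np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)).mean()
print(f"mean error in-distribution:     {err_in:.4f}")
print(f"mean error out-of-distribution: {err_out:.1f}")
```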

Then there’s the final bit they found a little confusing: whether Nvidia has essentially handicapped DLSS at sub-4K resolutions on its flagship card, despite people paying over 1000 bucks for it. That is, it removes the option to use DLSS at 1080p as a way to get full RTX graphics at 60 FPS.

Could this possibly get Turing to the point where even the Vega VII begins to look like a decent deal?

Nvidia responds, appearing to acknowledge the image quality issues with DLSS and promising to improve it.
https://www.guru3d.com/news-story/nvidia-promises-to-improve-image-quality-issues-with-dlss,3.html

When you look at just how intensive the training must be for the algorithm to produce even passable results, it really dashes the idea that indie developers and artists will be able to use it to produce gorgeous high-resolution visuals, animation, and art in a short period of time. This could also be a major reason why Epic, which is usually on the ball with the latest technology, has given no indication that it plans to implement DLSS in Unreal Engine 4.

I have to confess I completely bought the marketing. I thought this was going to be better than upscaling from a slightly higher resolution. It’s not. Oh well… :laughing:

At least it’s the best price/performance for AI projects, which I guess is why it was made in the first place. The FP16 support is nice to have. They’re just selling it to gamers with the RTX gimmick, when the real market is AI people.
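
For anyone curious what “FP16 support” actually buys you on the AI side: half-precision math is what Turing’s tensor cores accelerate, and using it mostly comes down to asking for float16 tensors. A minimal sketch, assuming a CUDA-capable card and PyTorch installed:

```python
# FP16 ("half precision") is mainly interesting for AI workloads, where the
# GPU can run float16 math much faster than float32.
import torch

if torch.cuda.is_available():
    a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
    c = a @ b                    # matrix multiply in half precision
    print(c.dtype, c.shape)      # torch.float16, (4096, 4096)
else:
    print("No CUDA device available; the FP16 speedup applies on the GPU.")
```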

I don’t know what’s up with this constant anti-Nvidia-big-corporation, pro-AMD-open-source-hippie tone in these posts.

Why are these always posts celebrating some unconfirmed failure?

DLSS is actually damn impressive considering it’s the very first iteration. It’s the first implementation of this kind of use case, and things are rarely perfect in their first version. Actually, it’s impressive that the first version is usable for regular gameplay at all; not perfect, but still. It’s almost certain it will keep improving over time. The very first practical use of machine learning in games, and already so beneficial. Why would you hate on that?

I mean, the Vega VII is the same price and more or less the same performance as the RTX 2080. With the Vega VII, you have no option to use RTX or DLSS at all. With the RTX 2080 you do, and even if you choose not to, you still have a GPU of pretty much the same speed as the Vega VII, but with much better drivers and, in the case of our community, a much better Cycles rendering experience.
