Nvidia has rolled out what was billed as a revolutionary upscaling technology, DLSS, which uses machine learning to reconstruct a sharper image from a lower-resolution render. But does the end result in a fully dynamic, uncontrolled game environment look as good as Nvidia’s carefully curated demos?
In the comparisons, certain spots do suggest the tech works as advertised, but many areas packed with texture or geometry detail end up blurred or even smeared. This should also serve as a caution to anyone who thinks machine learning will sweep aside well-designed general-purpose algorithms built on a statistical approach: it is another reminder that these “intelligent” systems can be remarkably naive whenever the input does not closely match their training data.
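That failure mode is not specific to DLSS. As a minimal sketch (not Nvidia’s network or training setup, just a generic illustration), the toy model below is fit on inputs drawn from one range and then evaluated both inside and outside that range; the error typically jumps sharply on the unfamiliar inputs, which is the same out-of-distribution weakness described above. The function, ranges, and network size are all arbitrary choices for the example.

```python
# Illustrative sketch only: a small regressor fit to one input distribution
# tends to degrade sharply on inputs outside that distribution.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Training data: the model only ever sees x in [0, 2].
x_train = rng.uniform(0.0, 2.0, size=(500, 1))
y_train = np.sin(3.0 * x_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(x_train, y_train)

# In-distribution vs. out-of-distribution test points.
x_in = rng.uniform(0.0, 2.0, size=(200, 1))   # familiar range
x_out = rng.uniform(3.0, 5.0, size=(200, 1))  # never seen during training

err_in = np.mean((model.predict(x_in) - np.sin(3.0 * x_in).ravel()) ** 2)
err_out = np.mean((model.predict(x_out) - np.sin(3.0 * x_out).ravel()) ** 2)

print(f"MSE on familiar inputs:   {err_in:.4f}")   # typically small
print(f"MSE on unfamiliar inputs: {err_out:.4f}")  # typically much larger
```

The details differ enormously between a toy regressor and a production upscaler, but the underlying issue is the same: the model has no principled behavior for content it was never trained on, only whatever its learned weights happen to produce.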
Then there is the final point the reviewers found a little confusing: whether Nvidia has effectively handicapped DLSS at sub-4K resolutions on its flagship card, even for buyers spending over $1,000 on it. In practice, that removes the option of running DLSS at 1080p as a way to hit full RTX graphics at 60 FPS.
Could this possibly get Turing to the point where even the Vega VII begins to look like a decent deal?