Curious to see how they are doing that without dedicated ray tracing hardware?
I mean, you don’t need dedicated ray tracing hardware to do raytracing.
For real-time ray tracing you do; otherwise it is way too slow.
Just looking at the results of their deep learning denoiser, it is quite obvious that Intel is going in hard on this GPU market.
It's going to be a nightmare for devs if they need to support each GPU brand's API…
No, it is the opposite… these are tools related to OpenGL, and some functions are already operational in Mesa on Linux.
For example…
OpenSWR is now fully integrated into Mesa and provides a SWR renderer that supports much of the OpenGL 3.3 Core and OpenGL 3.0 Compatibility contexts. Standard Mesa environment variables provide the ability to run-time switch between OpenSWR and llvmpipe software renderers.
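As a rough illustration of that run-time switch: GALLIUM_DRIVER is the Mesa environment variable in question, with "swr" selecting OpenSWR and "llvmpipe" the default software rasterizer. A minimal sketch of a host program picking one (the program itself is hypothetical; normally you would just set the variable in the shell):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // GALLIUM_DRIVER must be set before the GL context is created,
    // so do it first thing (or in the shell: GALLIUM_DRIVER=swr ./myapp).
    setenv("GALLIUM_DRIVER", "swr", 1);
    printf("Requested Gallium driver: %s\n", getenv("GALLIUM_DRIVER"));
    // ... create the GL context and render as usual ...
    return 0;
}
```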
Well, yes and no. That speed in RTX games doesn't come only from the dedicated hardware; they use all kinds of tricks and hacks to reach the required speed. Those are also possible on non-RT cards; in fact, there are ray tracing demos on non-RT hardware that look really good. But I don't think that's important, because as far as I understand it, the API doesn't guarantee any specific performance; it just runs on whatever hardware is present, which doesn't say anything about Intel's hardware.
Yes, but what if each vendor ties its API to its own ray tracing hardware? Some solutions might be more open, but other vendors' ray tracing hardware might not be as well supported (for example, a future Intel publishing a great real-time RT API that works better on Intel hardware and sabotages the performance of other brands' GPUs, etc.).
Yes. Still in the mist, isn't it?
I have a hard time believing that real-time ray tracing can run on non-specialized hardware. It took NVIDIA four Titan V "non-specialized" cards to run a real-time ray tracing demo that now runs on a single piece of "specialized" hardware without a problem. This makes no sense.
No, these APIs accelerate rendering on existing CPUs…
Then they will probably create dedicated, optimized GPUs.
Yes, that is what I'm talking about. In the future it's going to be a ray tracing tech war.
But it is not an API problem; each driver will map the API onto its hardware in its own way…
At the user and dev level, nothing changes.
Imagine Cycles rendering with 2-3 samples + the Intel denoiser + some optimization tricks, and you'll see that it starts to make sense…
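A minimal sketch of that last step, along the lines of Intel's Open Image Denoise C API (the "RT" filter name comes from the OIDN docs; the buffer pointers and dimensions are placeholders for the application's own data):

```c
#include <stdio.h>
#include <stdbool.h>
#include <OpenImageDenoise/oidn.h>

// Denoise a noisy low-sample float3 render instead of tracing more samples.
// colorPtr/outputPtr and width/height are placeholders supplied by the app.
void denoise(float *colorPtr, float *outputPtr, int width, int height) {
    OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_DEFAULT);
    oidnCommitDevice(device);

    OIDNFilter filter = oidnNewFilter(device, "RT");  // generic ray tracing filter
    oidnSetSharedFilterImage(filter, "color", colorPtr, OIDN_FORMAT_FLOAT3,
                             width, height, 0, 0, 0);
    oidnSetSharedFilterImage(filter, "output", outputPtr, OIDN_FORMAT_FLOAT3,
                             width, height, 0, 0, 0);
    oidnSetFilter1b(filter, "hdr", true);  // treat the input as HDR radiance
    oidnCommitFilter(filter);
    oidnExecuteFilter(filter);

    const char *errorMessage;
    if (oidnGetDeviceError(device, &errorMessage) != OIDN_ERROR_NONE)
        fprintf(stderr, "OIDN error: %s\n", errorMessage);

    oidnReleaseFilter(filter);
    oidnReleaseDevice(device);
}
```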
Isn't the Intel denoiser quite slow compared to the NVIDIA denoiser, which can denoise 60 images per second? NVIDIA cards have dedicated cores to handle all the AI-related denoising work in real time. Those kinds of solutions will also be adopted in future hardware; I'm sure the future Intel GPU will bring ultra-fast denoising too.
Probably not so slow…
Probably the CPUs are already quite powerful, and with some rework of the low-level code, interesting results have been achieved.
Parallelization, multithreading, deep learning techniques…
All of this at the software level; it has surely opened roads that were previously unimaginable.
Once the way is found, we'll see how deep the white rabbit's burrow goes, Alice.
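To make "parallelization, multithreading" concrete, here is a minimal sketch of the kind of data-parallel loop a CPU renderer spreads across cores with OpenMP (trace_pixel is a hypothetical stand-in, not from any real renderer; build with gcc -fopenmp):

```c
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

// Hypothetical per-pixel work standing in for a ray tracer's inner loop.
static float trace_pixel(int x, int y) {
    return (float)(x ^ y) / 255.0f;  // placeholder shading
}

int main(void) {
    const int width = 1920, height = 1080;
    float *image = malloc(sizeof(float) * width * height);

    // Each row is an independent work item, so the OpenMP runtime can
    // spread rows across all available CPU cores.
    #pragma omp parallel for schedule(dynamic, 4)
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            image[y * width + x] = trace_pixel(x, y);

    printf("rendered on up to %d threads\n", omp_get_max_threads());
    free(image);
    return 0;
}
```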
Hi, I guess they are preparing the market for the upcoming discrete GPU in 2020.
Cheers, mib
Neural networks are considerably slower on the CPU. That's why Intel has also planned a GPU implementation.
But I am also pretty confident that they are working on hardware solutions.
