The main difficulty I have encountered with OptiX is its tendency to blur texture details away. I am going to take a closer look at aliasing in future experiments to get a better feel for it.
This denoiser, as well as NVIDIA’s, is quite awesome! However, I just wish Intel were more forthcoming with information; the lack of it is worrisome for future integration and improvement. Perhaps they’re waiting for SIGGRAPH this year (if not, this entire thing was just their attempt to ride the early wave of OptiX hype).
The first issue is that there has been zero activity from Intel since mid-spring: nothing in the denoiser repo, nothing in their DNN math library repo, and their training data set has not been updated either.
Intel’s documentation still states that temporal stability (required for animation) is not yet supported, which doesn’t inspire confidence either, regardless of some folks showing results that look relatively stable frame to frame.
Secondly, we still don’t have many details about how the denoiser was trained. How many unique scenes were used? How many sample counts per scene? With NVIDIA we have at least been given hints in some of their 2017 presentations, though specifics are lacking on their side as well.
Additionally, while NVIDIA allows you to train on your own data sets, Intel does not. That could be a blocker for more general-purpose adoption.
Both companies have been pretty bad about continually improving their models, as I don’t think the OptiX libraries and training data sets have been updated either. It’s all just odd.
There are going to be multiple surprises at the SIGGRAPH Blender booth this year. Might OIDN be one of them? There’s also the presence of Tangent Animation’s CEO; aren’t they working on that?
I don’t have any surprises or announcements to make, but if you’re going to SIGGRAPH, you can find me at Intel’s Create event and ask me anything about my work integrating OIDN.