Theoretically, maybe. But I wouldn’t count on it, or make any hardware decisions based upon it.
From Nvidia: “Tensor Cores are programmable matrix-multiply-and-accumulate units”, so they likely have limited usefulness in raytracing. Matrix multiplication is used a lot in rendering, but large-scale multiply-and-accumulate happens comparatively rarely. Blender builds would also need to be modified to support them, and that takes developer time. It is unlikely you’ll see much benefit from tensor cores in computer graphics.
I have also read that tensor cores are optimized for small number formats (8-bit and 16-bit ints/floats, for example). I’m not sure if this is true, but if it is, they aren’t very useful for much else.
This seems to be the case; they have very fast INT16 vector multiplication with INT32 output, I believe.
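To make the “matrix-multiply-and-accumulate” idea concrete, here is a rough sketch in plain Python of what one such operation computes: D = A·B + C, where A and B are the (low-precision) inputs and C is accumulated into at higher precision. This is an illustration of the arithmetic only, not actual tensor-core code; real hardware works on fixed-size tiles (e.g. FP16 inputs with FP32 accumulation).

```python
# Sketch of a matrix-multiply-and-accumulate (MMA): D = A @ B + C.
# Real tensor cores do this on small fixed-size tiles in one step,
# with low-precision inputs and a wider accumulator.

def mma(A, B, C):
    """Return A @ B + C for square matrices given as lists of rows."""
    n = len(A)
    return [
        [sum(A[i][k] * B[k][j] for k in range(n)) + C[i][j]
         for j in range(n)]
        for i in range(n)
    ]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 1], [1, 1]]
print(mma(A, B, C))  # [[20, 23], [44, 51]]
```

The point made above is that path tracing spends most of its time on ray/geometry intersection and shading, not on operations shaped like this, which is why the fit is questionable.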
And then there’s the reveal of the limitations of the OptiX library. It seems like the RT cores and Tensor cores were both designed with fairly specific tasks in mind (the former for tracing rays, the latter for certain types of AI). If it turns out they make even CUDA and shader cores look flexible by comparison, there may not be much of a push to code for them, as I don’t think people are willing to have their render engines take a big step backwards in exchange for more speed (after all the work to get GPU-based engines to start resembling the larger CPU-based solutions).
That’s not to mention the possible complexities of now having to optimize the use of four different core types.
Yes, the RT and Tensor cores remind me a bit of the fixed-function pipeline we had before we got programmable shaders.
Denoiser changes incoming…
Prefiltered feature passes…
Tweaked outlier detection…
All to prepare for this, animation denoising support in master (but please note there’s no GUI yet).
Animation denoising is one step closer to being fully usable. Savvy users can now access the feature with Python.
As in the last commit, the artist-friendly GUI is not in yet. It is coming though.
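For those who want to try it before the GUI lands, a sketch of how the Python-only workflow looks. The operator and property names below are what landed in master at the time; treat them as assumptions and check your build’s API docs. The file paths are placeholders, and this only runs inside Blender.

```python
# Sketch: animation denoising from Python (no GUI yet).
# Assumes a master build with the cycles.denoise_animation operator.
import bpy

scene = bpy.context.scene

# 1. Store the denoising feature passes (albedo, normal, etc.) so the
#    denoiser can use them, including from neighbouring frames.
for view_layer in scene.view_layers:
    view_layer.cycles.denoising_store_passes = True

# 2. Render the animation to multilayer EXR files.
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.filepath = "//render/frame_"
bpy.ops.render.render(animation=True)

# 3. Denoise the rendered frames, using temporally adjacent frames
#    as additional input.
bpy.ops.cycles.denoise_animation()
```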
Slightly off-topic, but how do you save the Render Layer results as a multilayer EXR for every anti-aliasing sample (Full Sample) for further post-processing? I want to try and compare different denoising algorithms.
How do you enable this feature? I simply can’t find it in v2.79b, but I’ve seen checkboxes for it in screenshots of older Blender versions. I turned on RenderSettings.use_save_buffers and RenderSettings.use_render_cache, but the EXR file doesn’t contain the channels.
Pablo tried to show this feature yesterday in the live stream but didn’t get it to work. I tried to repeat his steps (with slight variations) but didn’t get it to work either.
It seems to denoise but fails during writing the result. Can anyone point me in the right direction? Would love to give this a try.
I haven’t looked; maybe they are in the latest experimental builds.
Surprised no one has posted about it… but multithreaded OpenCL kernel compilation is now in the main builds of Blender!
It doesn’t work on Windows, and 95% of the user base is on Windows.
Two big commits impacting OpenCL rendering today.
Refactored attribute retrieval.
Separate baking kernels.
Big compilation-time improvements and potentially sizable render-time reductions are expected. The former commit even brings a slight boost to CUDA and CPU rendering.
Second after second, the barber is giving the Cycles engine a nice haircut ^___^
OK, did we just lose the ability to have long multi-frame motion blur trails, or am I getting something wrong?
- Changed motion_blur_shutter to use a soft max value of 1 instead of 2.
Anything > 1 here is not physically correct
and makes no real logical sense.
Soft limits only affect value sliders, but you can still type in higher numbers manually.
OK, thanks. I got it wrong.
I know what a soft max is; the whole point of it is so you don’t accidentally drag the mouse and get a ridiculous value that causes Blender or your machine to lock up.
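For anyone unclear on the distinction, a minimal sketch of how soft vs. hard limits behave on a Blender-style property. The soft max of 1.0 matches the motion_blur_shutter commit; the hard max used here is an illustrative assumption, not the actual value in Blender.

```python
# Sketch of soft vs. hard limits on a property.
# Dragging a slider is clamped to the soft range; typing a value is
# clamped only to the hard range, so larger values remain reachable.

SOFT_MIN, SOFT_MAX = 0.0, 1.0    # slider range after the commit
HARD_MIN, HARD_MAX = 0.0, 100.0  # assumed hard range, for illustration

def set_by_slider(value):
    """Slider dragging stops at the soft limits."""
    return min(max(value, SOFT_MIN), SOFT_MAX)

def set_by_typing(value):
    """Typed input only respects the hard limits."""
    return min(max(value, HARD_MIN), HARD_MAX)

print(set_by_slider(2.5))  # 1.0  -- the slider can't go past the soft max
print(set_by_typing(2.5))  # 2.5  -- typing still allows longer trails
```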