This is not exactly the same thing, but I’ve done render tests using E-Cycles with OptiX, as I use a lot of shader bevels. As far as I understand, the render engine is the same as Cycles, just with render optimizations that simplify some calculation steps.
I haven’t had any crashes so far with a scene that was prepared in Blender 2.83 using CUDA: a product-shot studio scene at 5000x5000 px, most materials with shader bevels, on an RTX 2080. These aren’t very extensive tests yet, and standard Cycles might do some render calculations that E-Cycles skips, which could cause crashes. Also, I didn’t use shader AO at all. But I thought I’d share my first experience, as it worked well.
I noticed, though, that 2.83 CUDA renders faster than 2.92a CUDA. Both test renders were done with E-Cycles, but I could run Cycles tests as well to see if it shows the same slowdown in the newer version.
One of the posts above mentioned different noise patterns for CUDA and OptiX, but all my test results were 100% identical.
Can’t wait to see CPU+GPU supported by OptiX.
At the moment, using OptiX with an RTX 3090 / i9-7940X is much slower than using two 1080 Tis and the i9-7940X with CUDA.
Results for barbershop_interior_gpu.blend:
Another big problem right now seems to be the tile size. The Auto Tile Size add-on gives bad results most of the time. Currently the tile size needs to be tuned individually per scene, and it depends on GPU/CPU, OptiX/CUDA, and the scene itself.
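For anyone who would rather script this than rely on the add-on, here is a minimal sketch using Blender’s Python API (2.8x/2.9x property names, matching the versions discussed here). The 256x256 value is only an illustrative starting point for GPU rendering, not a recommendation:

```python
import bpy

scene = bpy.context.scene
prefs = bpy.context.preferences.addons['cycles'].preferences

# Pick the backend per run: 'OPTIX' needs an RTX card, 'CUDA' works more broadly.
prefs.compute_device_type = 'OPTIX'
scene.cycles.device = 'GPU'

# Tile size is a per-scene render setting in Blender 2.8x/2.9x.
# GPUs generally want large tiles, CPUs small ones; tune per scene and device.
scene.render.tile_x = 256
scene.render.tile_y = 256
```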
I thought the cause of the RTX 30xx performance issues in Cycles was that Nvidia needs to update their drivers for those cards. (Anyone, feel free to correct me on that.)
I don’t know about that. It might be an additional issue, but leaving a powerful CPU unused isn’t a good thing at all.
Comparing GPU-only times, the 3090 is very, very fast.
I don’t think a new driver will change the tile-size dependencies described above.
I’m working on a big benchmark comparison that I could post later when it’s done.
Hi, CPU/GPU rendering uses one CPU core per GPU. I have two GPUs, and two CPU cores are kept at 100% just feeding the GPUs.
There was a patch from Brecht to tackle this, and it worked fine, but it was rejected because of problems with older GPUs.
I can’t find the patch in the patch tracker at the moment; it may be time to revive it.
Hi, Patrick Mours (Nvidia) implemented OptiX CPU+GPU rendering in Cycles.
I’ll check it out with the next buildbot build, but with my RTX 2060 and old i5 I guess it won’t help much for me; users with a better CPU may benefit.
It wasn’t mentioned whether it works with the recent tile-stealing feature.
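For reference, enabling a CPU+GPU device combination from Python should look roughly like this once a build supports it under OptiX (a sketch using the Blender 2.9x preferences API; which devices actually show up depends on your hardware and build):

```python
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPTIX'
prefs.get_devices()  # refresh the detected device list

# Enable both the OptiX GPU(s) and the CPU for hybrid rendering.
for device in prefs.devices:
    device.use = device.type in {'OPTIX', 'CPU'}

bpy.context.scene.cycles.device = 'GPU'
```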
Yes, when using OptiX, Blender and E-Cycles will silently switch to standard path tracing and use the corresponding settings (which may differ greatly from the branched path tracing ones). Results can’t be compared here.
To get comparable results, you could manually select path tracing with CUDA. Ideally the official benchmark would do that, so results from different sources stay comparable.
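As a sketch of what that manual selection looks like via Python in Blender 2.8x/2.9x (where the integrator is exposed as `cycles.progressive`):

```python
import bpy

scene = bpy.context.scene
prefs = bpy.context.preferences.addons['cycles'].preferences

# Force plain path tracing so CUDA runs use the same integrator
# that OptiX falls back to, making the timings comparable.
scene.cycles.progressive = 'PATH'

prefs.compute_device_type = 'CUDA'
scene.cycles.device = 'GPU'
```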
CPU+GPU is nearly useless when you have a really good GPU: the GPU needs massive tiles, especially with OptiX, to run at its best. Using GPU+CPU with OptiX throws a lot of performance away and draws more power, often resulting in slower renders for us 3090 folk (3090 + 3950X CPU here).
Basically, any GPU that has OptiX support will be underutilized when using CPU+GPU.