1070 Ti or RTX 2070

Just swapped my GTX cards for RTX.
Sold a 1080 Ti + 1070 Ti + 2 x 1070.
Replaced them with a 2080 Ti + 2 x 2070 Super.

The 2080 Ti alone can do almost the work of all four previous cards with OptiX (RT cores “enabled”).

My reference was the link below, and I can say the results are accurate within an acceptable margin of error.
https://openbenchmarking.org/result/1911251-HU-BLENDER2818

The benefit from the RT cores varies with parameters I can’t yet completely understand (apparently something to do with the number of samples in the scene).

My personal suggestion would be the 2070 Super; if not, the 2070.

Hope this helps.


When you say “RT cores ‘enabled’”, are you referring to the experimental OptiX GPU Compute option in Cycles, or is there an additional RT-core setting you are referencing here?

I am referring to OptiX only.

I was under the impression that selecting OptiX lets ray intersection be carried out on the RT cores, whereas the CUDA cores still run, well, CUDA… Is this correct? That is why I used that expression.
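For reference, this is roughly how I switch the backend from Python rather than through the Preferences dialog. It is a minimal sketch based on the 2.8x Cycles API; double-check the attribute names against your build:

```python
import bpy

# Minimal sketch, assuming the Blender 2.8x Cycles Python API:
# select OptiX as the compute backend so ray intersection can use the RT cores.
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPTIX'   # 'CUDA' would select the classic backend instead
prefs.get_devices()                   # refresh the detected device list

# Enable every detected OptiX device (e.g. the 2080 Ti and the two 2070 Supers).
for device in prefs.devices:
    device.use = (device.type == 'OPTIX')

# Render on the GPU rather than the CPU.
bpy.context.scene.cycles.device = 'GPU'
```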

Diving further into the issue, it appears the benefit from the RT cores stands out as the number of primitives increases and the ray-intersection work intensifies. Low poly, no benefit; high poly, much benefit. I have run some benchmarks testing this, but it would be great if someone API-savvy could explain it in a noob-proof manner.
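Roughly, my comparison looked like the sketch below (simplified, not the exact script I ran): it renders the current scene once per backend and prints the wall-clock time, with the scene and sample count left unchanged between runs.

```python
import time
import bpy

def timed_render(backend):
    """Render the current scene on the given backend ('CUDA' or 'OPTIX') and time it."""
    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = backend
    prefs.get_devices()
    for device in prefs.devices:
        device.use = (device.type == backend)
    bpy.context.scene.cycles.device = 'GPU'

    start = time.time()
    bpy.ops.render.render(write_still=False)   # render without saving an image
    return time.time() - start

# Same scene, same samples; only the backend changes between runs.
for backend in ('CUDA', 'OPTIX'):
    print(backend, round(timed_render(backend), 2), 'seconds')
```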

Thanks.


I am a novice, so I could be wrong here.

Correct, this is my understanding as well.

I have noticed this as well. In simple renders, the difference between OptiX and CUDA seems small.

I’m still not certain how to go about choosing optimal tile sizes, though larger tiles (256+) seem to yield quicker results.
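For what it’s worth, I just set them per scene like this (2.8x render settings; 256x256 is only a starting point from my own limited GPU tests, not an official recommendation):

```python
import bpy

# Assumed 2.8x render settings: larger tiles tend to suit GPU rendering.
bpy.context.scene.render.tile_x = 256
bpy.context.scene.render.tile_y = 256
```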

I too would like insight on this topic.