RTX 3090 Rendering Performance Benchmarks

Source: Linus Tech Tips

Edit:

Here are some more from JayzTwoCents:

PS: I dunno if he used OptiX or CUDA.


Interesting. How do the 30x0 cards compare to the flagship AMD GPUs?

New AMD GPUs won’t be announced till the end of October. Reviews will probably be out in early November.


The current AMD GPUs don’t seem to be anywhere near Nvidia, but hopes are high for the “Big Navi” release, hopefully in a few weeks. As for CPUs, the fastest Threadripper stands at 22s for the BMW scene and 68s for Pavillon Barcelona (according to the Blender benchmark). Now that I look at the Linus benchmarks above, the Barcelona one has the same render time for both OptiX and CUDA :laughing:. The render time for Barcelona on a 3090 with OptiX, according to Blender Open Data, is 45s.

Disclaimer: I’m not a tech expert

The bar lengths are correct; the numbers are wrong (for OptiX).

It seems the more complex the scene, the bigger the gain.

Still haven’t seen a single benchmark of the “new” motion blur capabilities. If anybody with a 3080/3090 is seeing this message, could you please try rendering the “Agent 327” scene to see how it performs? (Compared to the previous-gen Turing cards, as well as to Intel Embree on the CPU side.)

Here is the link to the scene in question:

Thanks in advance.
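If anyone wants to try it, here’s a rough timing harness I’d sketch (assuming `blender` is on your PATH and the scene is saved as `agent327.blend`, a hypothetical path; `--cycles-device` selects the Cycles backend):

```python
import subprocess
import time

SCENE = "agent327.blend"  # hypothetical path; point this at the downloaded scene

for device in ("CUDA", "OPTIX"):
    start = time.time()
    # -b renders headlessly, -f 1 renders frame 1;
    # Cycles options must come after the "--" separator.
    subprocess.run(
        ["blender", "-b", SCENE, "-f", "1", "--", "--cycles-device", device],
        check=True,
    )
    print(f"{device}: {time.time() - start:.1f} s")
```

For the Embree comparison, the same command with `--cycles-device CPU` would time the CPU path.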

I’ll be sure to let you know if I find any Agent 327 benchmarks.

There is a motion blur render in the last test.

Thank you for the find!

Interesting… The 3090 is “reported” to be 47.5% faster than the Titan RTX in Blender Open Data (all 5 scenes combined); this scene with motion blur makes it even faster, 66.9% (19.4 percentage points more, compared to the scenes with no motion blur).
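For anyone checking the numbers, a minimal sketch of the percentage formula (the example times are invented, only the formula matters; note that the 19.4% is a difference in percentage points, 66.9 minus 47.5):

```python
def speedup_pct(t_ref: float, t_new: float) -> float:
    """How much faster t_new is than t_ref, in percent."""
    return (t_ref / t_new - 1.0) * 100.0

# Invented example: a 60 s Titan RTX render finishing in 40.7 s on a 3090
print(f"{speedup_pct(60.0, 40.7):.1f}% faster")  # ~47.4%
```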

One question answered (thank you very much), two remain:

  • how does it compare to the 10 series, which doesn’t have RT cores?
  • how big a speed boost does the same GPU (30 series) get between two Blender versions (2.90 vs any 2.8x)?

In the reviews I’ve seen, the 1080ti is like 4-5 times slower without the RTX magic. The regular 6GB 2060 is faster at rendering than the 1080ti.

I don’t know about the other stuff. I’m just waiting for the RTX 3060 :slight_smile:. That might fit my budget.


Interesting how little benefit CUDA shows on newer cards, especially the 2080ti.

I have a 980Ti and can render the default BMW scene on GPU alone in 1 minute 1 second; the 2080ti takes 41 seconds using CUDA.

If I optimise the tile size to 16x16, I can get my 980Ti render time down to a little over 43 seconds using CUDA.
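For reference, here’s roughly how those settings look as a script; a sketch against the 2.8x/2.90 Python API (tile sizes were later removed when Cycles moved to adaptive tiling):

```python
import bpy

scene = bpy.context.scene
scene.cycles.device = 'GPU'

# Small tiles often suit older GPUs in 2.8x/2.90 Cycles
scene.render.tile_x = 16
scene.render.tile_y = 16

# The official BMW benchmark uses 1225 AA samples (see further down)
scene.cycles.samples = 1225
```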

Even though OptiX support is experimental for my card, if I render the default BMW scene using it I get a render time of 41 seconds, so the 2080ti is only twice as fast in this regard too.

For a card that is 3.5 years newer, I would have expected a bigger impact on render times.


I’m sorry, but that really does not seem right. The 980Ti is less than half as fast as the 2080Ti in my tests. The benchmarks everywhere else show that too.

Are you sure your numbers are correct? Maybe I just misunderstood your post.

Regarding generation jumps:

A generation jump of 25-30% is normal. Considering there are exactly 2 generations between the 980Ti and the 2080Ti, the numbers match spot on.
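To spell out the compounding (illustrative only, taking the midpoint of that 25-30% range):

```python
per_gen = 0.275  # assumed per-generation speedup
two_gen = (1 + per_gen) ** 2 - 1
print(f"{two_gen:.1%}")  # ~62.6% faster after two generations
```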

With 16x16 tile size and otherwise default render settings (400 AA samples). I am, however, using a 2.91 daily build from the 15th of August.

The benchmark for the BMW scene should have 1225 AA samples. With the correct number of samples, the headlight is much less noisy than in your image. This is why your 980Ti times look miraculous, not because of the 16x16 buckets.
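Back-of-envelope, the sample counts alone account for roughly a 3x difference in work, before any tile-size tuning:

```python
official, used = 1225, 400
print(f"{official / used:.2f}x")  # 3.06x the samples in the official scene
```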

Benchmarks are not an exact science, only a guide, and I’m sure most people could spend a few minutes optimising each scene for their own hardware and shave a good few seconds off each test. There’s enough variety in the benchmark scenes to show which architecture is best at which scenes, which is all you can ask of a benchmark.

Ah, OK. For some reason my BMW scene is set to 400. Weird.

Have you tried the Blender Benchmark launcher?