I did a Cycles render speed comparison between Windows 10 and Ubuntu GNOME

Click for full size:

Ubuntu wins on CPU, Windows 10 wins on GPU. I suspect that may be because NVIDIA’s drivers for Windows are more efficient, but that’s just a guess.

The BMW scene at 1080p takes 4 min 23 s on a GTX 970 under Ubuntu 16.04, tile size 480×360.
Your GPU tile size is too small.

Hi. There are two BMW rev4 scenes with different sample configurations, which makes things very confusing for everyone (one is the scene we all know from this forum, and the other is the one available on the Blender web site).
What is the scene that you have used?

Anyway, it’s true that this tile size is small for a GTX 9xx card, considering that the bug that forced the use of small tile sizes on the 980 Ti under Windows 10 has already been fixed in newer Blender/driver versions. So this small tile size could be penalizing mainly Linux, but I’m not sure.

30+ minutes seems very long for the BMW scene with those specs?

I’d say that if the question is a comparison of performance between different host systems, then the absolute time it takes to do the render shouldn’t really matter; it’s the time differential between the two systems that should pique our interest.

Yes, I had the file from the forum; the file from blender.org has more samples, so it took 13 min 06 s to render this time.

Doesn’t really matter in this case - it’s about the difference between Windows 10 and Ubuntu GNOME 16.04 given the exact same scene settings.

I used the scene files from https://www.blender.org/download/demo-files/ and for the BMW scene I used the CPU scene file for the CPU test and the GPU scene file for the GPU test. All I changed was the resolution multiplier, from 50% to 100% (so it would render the full 1920x1080), and the tile sizes, set to the values mentioned on the image above for consistency across all three scenes.
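For anyone who wants to reproduce those conditions, the two changed settings can be applied from a script inside Blender instead of clicking through the UI. This is just a sketch using the 2.7x-era Python API (the script name is my own; run it e.g. with `blender -b scene.blend --python set_settings.py`):

```python
# Apply the same render settings to each benchmark scene
# so the Windows/Ubuntu comparison stays consistent.
import bpy

scene = bpy.context.scene
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 100  # demo files default to 50%
scene.render.tile_x = 480                 # GPU tile size used in the test
scene.render.tile_y = 360
```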

I’ll try again with different tile size.

Exactly. This was explicitly not about optimising the render settings for performance per scene, it was only about comparing the same conditions on Windows 10 vs. Ubuntu GNOME 16.04.

Can you do a comparison with the same driver version on win and lin?

372.70 doesn’t seem to be available on Linux yet from what I can see.
I could downgrade on Windows I suppose… but I also tried a second time with an older driver on Ubuntu GNOME 16.04 (367.44 instead of the 370.28 I used in the final benchmark) but didn’t see any meaningful difference (not more than a couple of seconds).

Now someone will have to explain to me what scene you used, or how you got the BMW GPU scene from https://www.blender.org/download/demo-files/ to render in 4 min 23 s.

Because I just tried rendering it at 1920 x 1080 (100% resolution) with a tile size of 480 x 360 and my results were:

Windows 10: 10m 14s
Ubuntu Gnome 16.04: 13m 19s

So while increasing the tile size does decrease the render duration, the fundamentals of my initial test remain the same:
Windows 10 renders faster on GPU.

The same goes for the classroom scene, which I also just ran at a tile size of 480 x 360:

Windows 10: 8m 19s
Ubuntu Gnome 16.04: 9m 3s
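For reference, the Windows/Ubuntu ratios behind these numbers are easy to check. A small sketch, using only the times quoted above:

```python
# Convert "Xm Ys" render times to seconds and compare the two OSes.
def to_seconds(minutes, seconds):
    return minutes * 60 + seconds

# BMW GPU scene, 480x360 tiles
bmw_win = to_seconds(10, 14)   # 10m 14s -> 614 s
bmw_ubu = to_seconds(13, 19)   # 13m 19s -> 799 s

# Classroom scene, 480x360 tiles
cls_win = to_seconds(8, 19)    # 8m 19s -> 499 s
cls_ubu = to_seconds(9, 3)     # 9m  3s -> 543 s

print(round(bmw_ubu / bmw_win, 3))  # 1.301 -> Ubuntu ~30% slower on BMW
print(round(cls_ubu / cls_win, 3))  # 1.088 -> Ubuntu ~9% slower on classroom
```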

Read my post #6.
The file from the forum has 400 samples and the one from blender.org has 1225.
A more accurate test would be to render with a card that is not driving the display at the same time.

Maybe it’s time to drop Linux.

Ah, I missed that post, cheers!

Why would this be a more accurate test? This isn’t about a theoretical laboratory test but about real-world production conditions.
And in both cases, on Windows and on Linux, I rendered with the card that was driving the display at the time.

Because the card might behave differently under the two OSes; driving the display at the same time might affect the rendering.
I don’t know, it’s just a guess.

Sure but that doesn’t matter in terms of determining which OS performs better as a production environment. If my card behaves worse on Ubuntu than it does on Windows, it simply means Windows is the better OS to be using when doing GPU rendering work on this machine.

I guess it doesn’t matter to me, since I drive the display with a different card.

It does matter, smaller tile sizes mean more image updates and image updates have significant overhead. That overhead may well be different between operating systems, so the difference isn’t necessarily in the rendering performance. Another reason may be the different driver versions, of course.

This isn’t about a theoretical laboratory test but about real-world production conditions.

In production, you should render final frames from the commandline or at least offscreen, otherwise you’re wasting performance.
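For example, a headless render avoids the interactive viewport entirely. This assumes the scene file is named `bmw27_gpu.blend`; the flags are standard Blender command-line options:

```shell
# -b : run in background (no UI)
# -o : output path/prefix ("//" is relative to the .blend file)
# -F : output file format
# -f : render this single frame
blender -b bmw27_gpu.blend -o //render_ -F PNG -f 1
```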

The BI benchmark results don’t show such a big difference in GPU performance between Linux and Windows, however it’s not clear whether those were all rendered from the commandline and whether the drivers are newer/older.

Fair enough but the driver version I use on Windows doesn’t exist on Linux and vice versa. It seems that the driver version numbers don’t line up between Windows and Linux for NVIDIA.

Regarding the tile size, read my post #9. While render times improved, the fundamentals remain the same: Windows 10 is still faster than Ubuntu on GPU.

Either way, this is a comparison of equal conditions on Ubuntu and Windows. I’m sure everyone has their own special conditions they’d like the test to meet, but I just wanted to share the results as is.

I have read your post, but you don’t seem to understand my point. If you do the test from the commandline or offscreen, the large difference you measured may not actually be there. That’s what the BI results suggest (if they were done on the commandline), assuming the difference isn’t down to driver versions. The fact that the divergence dropped from 1.104× to 1.088× with larger tile sizes also suggests that this may be the case.

I’ve found that these days the CPU and GPU speed differences between macOS, Windows, and Linux are usually very minimal.

At the beginning of this year I ran a test between Windows 10, macOS, and Ubuntu and found hardly any difference.

On GPU there was zero difference.

Besides GPU drivers, you have to be very careful that the Blender build is optimized the same way for each OS.
This can also lead to differences in CPU mode.

I understand the point but my test was focused on F12 renders during work sessions.

I also benchmarked the BMW27.blend scene for comparison.

Blend file

Use Hair BVH is off

56 threads, dual Xeon E5 ES v3 @ 2.6 GHz turbo

inno3D iChill GTX 1070 X3

                              Ubuntu 16.04     Windows 7
Render device                 GPU (370.28)     GPU (372.90)
Tile size
Render time (2.78)
Render time (nightly, Ubuntu)