Hello,
I have run a bunch of undervolt tests with a Vega64 (custom watercooled) that may be of interest here. I can’t write English well, so here is the short version -.-
Every setting below is stable (Firestrike stress test, Valley and Heaven).
First line: Wattman settings. All tests were run with HBCC enabled, a +50% power limit, and the second BIOS of the Vega64.
Second line: Measurements.
-Actual frequency reported in Superposition.
-s: Score in Superposition 1080p extreme.
-f: Graphic score in Firestrike.
-t: Graphic score in Timespy.
-Watt: Power consumption in Superposition scene 3.
-(Watt): Range of power consumption in Firestrike, Timespy and Superposition.
The 3 rendering tests are from the official benchmarks (https://code.blender.org/2016/02/new-cycles-benchmark/) without modification.
-classroom
-fishy-cat
-koro
-The last power figure is the consumption range across these 3 renderings.
All power consumption measurements are made at the wall and cover the complete system (Vega64, i7 [email protected], mb, dac, aquaero, 2 ssd, 2x8GB DDR3, d5 pump, 6x140mm, 1x120mm, 1x92mm).
Thanks.
So (system) power consumption varies between 197 and 307 watts, with a render time difference of 25 to 37 seconds.
I’m guessing it’s not possible to keep the HBM voltage at 840 mV but the core voltage at around 1050 mV?
Yes you can, but there is no reason to do it. Lowering the “HBM” voltage has no impact on total power consumption (it’s the memory controller voltage, not the voltage of the actual memory stacks). Lowering it is useful to reach a deeper undervolt, because the GPU voltage can’t go lower than the memory controller voltage.
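To make that floor behaviour concrete, here is a toy model of the clamping described above (illustrative only, not driver code; the function name and the exact voltages are made up for the example):

```python
# Toy model of the voltage floor: the core voltage requested in Wattman
# is clamped so it never drops below the memory-controller ("HBM") voltage.

def effective_core_mv(requested_core_mv, mem_ctrl_mv):
    """Return the core voltage (mV) the card will actually use."""
    return max(requested_core_mv, mem_ctrl_mv)

# With the HBM voltage left at 950 mV, asking for 900 mV on the core is futile:
print(effective_core_mv(900, 950))   # -> 950
# Dropping the memory-controller voltage to 840 mV unlocks the deeper undervolt:
print(effective_core_mv(900, 840))   # -> 900
```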
Was it with master/buildbot or 2.79? At 1060mV and the default clock of 1648MHz, I get 3min32 for classroom, 3min38 for fishy-cat and 4min05 for koro, using the scenes from the official pack as is, without any change, on a Vega64 with driver 17.9.2 and buildbot b3b3c6d.
That was with 2.79 release and 17.9.1 driver.
With the latest buildbot you mentioned (b3b3c6d) and the 17.9.2 drivers, tested just now at 1697MHz@1045mV in Wattman, I get:
classroom: 3min08
fishy-cat: 3min21
koro: 3min51
Full system power consumption during these renderings was 221-250W (slightly lower than before).
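For a rough sense of the gap between the two setups, here is a quick calculation using the times quoted in the two posts above (the scene-by-scene pairing is mine):

```python
# Compare the two sets of render times quoted above:
# the 1648MHz@1060mV run vs. the water-cooled 1697MHz@1045mV run.
def to_seconds(minutes, seconds):
    return minutes * 60 + seconds

times = {  # scene: (slower run, faster run), in seconds
    "classroom": (to_seconds(3, 32), to_seconds(3, 8)),
    "fishy-cat": (to_seconds(3, 38), to_seconds(3, 21)),
    "koro":      (to_seconds(4, 5),  to_seconds(3, 51)),
}

for scene, (slow, fast) in times.items():
    gain = 100 * (slow - fast) / slow
    print(f"{scene}: {gain:.1f}% faster")
```

The gap works out to roughly 6-11% per scene, which is more than the ~3% clock difference alone would explain.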
And sorry for the 2 or 3 days my posts take to reach you, they have a tough journey through the www to undertake!
I don’t know your setup, but the first thing that comes to mind to explain the difference in performance is the cooling solution.
With this low power consumption on water I hardly exceed 32°C during rendering, which lets me hold a stable frequency close to the one set in Wattman. If you use the stock cooler, throttling alone can make a significant difference.
Buildzoid actually made a good argument that the “memory” voltage isn’t actually memory or memory controller voltage, but a base voltage.
As you can see in the video, raising the “memory” voltage did raise power consumption. The Wattman quirks were also interesting, since I never saw them mentioned in any of the reviews.
It’s both: the memory controller voltage acts as a floor for the GPU voltage. This setting also has a direct impact on memory stability and artifacting at very low voltage.
I have done a lot of tests on Vega64 and shared the results on the hardware.fr forums, but in French. I am not fluent enough in English to write about a subject as tedious as this one here. Really sorry about that.
For bliblubli, with the same 1060mV you use, but at the max stable frequency:
Classroom, blender 2.79-a8f11f5:
Wattman settings: GPU 1702MHz@1060mV, HBM 1100MHz@950mV, +50% PL.
Power consumption at the wall : 245-260W
New drivers are out; Vega gets support for 2-GPU setups.
For a small selection of games at least, AMD claims an impressive 80 percent performance boost when a second card is added. I wonder if it would be the same for OpenCL.
For OpenCL you don’t need any dedicated driver support. It should work straight away, and scaling efficiency is nearly 100%. This is because OpenCL and CUDA are compute APIs, not graphics APIs, so you can use as many cards as you have PCIe slots. PCIe lane count and bus speed aren’t even that important, because for raytraced rendering the computation isn’t done in real time (for now).
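As a quick sanity check on those numbers, AMD’s “+80% with a second card” figure for games corresponds to 90% scaling efficiency, while tile-based path tracing is embarrassingly parallel and can approach the ideal:

```python
# Parallel scaling efficiency = measured speedup / number of GPUs.
# Independent render tiles need no inter-GPU communication, which is why
# compute workloads can scale near 100% without explicit multi-GPU support.
def scaling_efficiency(speedup, n_gpus):
    return speedup / n_gpus

print(scaling_efficiency(1.8, 2))  # games, per AMD's claim -> 0.9
print(scaling_efficiency(2.0, 2))  # ideal tile-parallel rendering -> 1.0
```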
Anyway guys, how does Vega perform compared to other cards?
I think bliblubli reported on a previous page that the HBCC functionality already works (without any special patch for Cycles). To be fair, there is a way (used by other engines) to allow huge scenes on Nvidia cards, but it would require the Cycles devs to code a cache system.
You can even render from your SSD if you want; the only limit is your OS. I think Linux, with upcoming kernel patches, will be much faster if you render scenes that sit partially in system RAM and swap/pagefile. Linux is also more memory-efficient, so it swaps later than Windows, improving performance further.
Two benchmarks at the limits, with the latest 2.79 build:
Max overclock with the Vega64 Liquid BIOS, 1772MHz@1250mV; power at the wall (full system) while rendering: 355-377W. Classroom: 3min00.
And max undervolt with the Vega64 Air second BIOS, 1677MHz@910mV; full system power at the wall: 187-195W (so around 100-120W max for the graphics card alone). Classroom: 3min21.
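Putting those two runs in energy-to-solution terms makes the trade-off clearer. A small calculation, using the midpoint of each quoted wall-power range as an approximation (instantaneous draw varies during the render):

```python
# Energy used per classroom render for the overclock vs. undervolt runs.
def energy_wh(seconds, watts):
    return seconds * watts / 3600  # joules -> watt-hours

oc_wh = energy_wh(3 * 60,      (355 + 377) / 2)  # 1772MHz@1250mV, 3min00
uv_wh = energy_wh(3 * 60 + 21, (187 + 195) / 2)  # 1677MHz@910mV, 3min21

print(f"overclock: {oc_wh:.1f} Wh")
print(f"undervolt: {uv_wh:.1f} Wh")
print(f"the OC run burns {oc_wh / uv_wh:.2f}x the energy to finish 21s sooner")
```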
Vega can be extremely power efficient
And for HBCC, with my normal everyday setting (1045mV), I just did a run with the Gooseberry production benchmark (didn’t adjust anything besides switching to GPU compute, not even the tile size):
I just bought a Vega56 but can’t get it to work on Linux: the OpenCL device is not selectable. On W10 it works fine.
Do I need to set some environment variables perhaps? I’m on Ubuntu 17.04.
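Probably not environment variables: on Linux, applications find OpenCL through the ICD loader, which reads vendor files from `/etc/OpenCL/vendors/`, and the open-source kernel driver alone doesn’t register one. A quick diagnostic sketch (the standard ICD path is assumed; on Ubuntu the OpenCL component ships with the amdgpu-pro package):

```python
# List the OpenCL drivers registered with the ICD loader. If the list is
# empty, Blender has no OpenCL platform to offer, regardless of the GPU.
import glob
import os

vendor_dir = "/etc/OpenCL/vendors"
icds = sorted(glob.glob(os.path.join(vendor_dir, "*.icd")))

if icds:
    print("Registered OpenCL drivers:", [os.path.basename(p) for p in icds])
else:
    print("No .icd files found -- install the OpenCL component of the "
          "amdgpu-pro driver so the device becomes selectable.")
```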