AMD RX Vega

Thanks for your detailed help :slight_smile:

Has anyone considered the Radeon Pro WX 5100? You can get it for around 420 EUR.

I saw comparison results for the WX 7100 and other cards: https://pbs.twimg.com/media/DHwdnd6XsAAEDOt.jpg:large The WX 5100 would be slower, but considering its 75 W TDP, wouldn't it be a good choice for rendering?

I'm thinking about upgrading to the WX 5100, or the Vega 56 when it comes out, so I'd love to hear some opinions.

There was a nice video yesterday covering the Radeon Pro Duo vs. the GTX 1080 (non-Ti).

In most cases in Cycles, the Radeon Pro Duo (dual RX 480-class chips with 16 GB each) performed on par with TWO GTX 1080s. In certain scenes the GTX was a tad faster, in others the opposite.

When it comes to ProRender, it seems I was way off in my estimation. The GTX 1080 performed impressively: a single GTX 1080 was on par with the Radeon Pro Duo. But something doesn't sit well with me; more investigation would need to take place.

I'll share the video later today, unless someone beats me to it. :wink:

Now they just need to get their hands on the unattainable RX VEGA 64/56 and compare them.

Wait, isn't the Radeon Pro Duo actually two WX 7100s, not 480s? Or is a WX 7100 just a 480? I'm confused.

Anyway, I watched that video too. Those results were really weird. I would have thought two 1080s would have trounced the Pro Duo in everything but memory. Isn't it based on the last generation's chips?

But even stranger was the AMD ProRender test, where the single 1080 (running OpenCL, no less) was a lot faster than the Pro Duo… ?!! Everything I've ever heard about OpenCL on Nvidia says that they have purposely crippled it in favor of CUDA.

Hey, I just had a thought: Does anyone know if there is a way to force an Nvidia card to render using OpenCL?

:smiley: A few pages back (post #118).
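If you just want to check what the drivers expose before touching Blender at all, here's a minimal PyOpenCL sketch (assuming the pyopencl package is installed via pip) that only lists platforms and devices. On an Nvidia card the OpenCL runtime usually shows up as a platform named something like "NVIDIA CUDA"; if nothing Nvidia-related is listed, no renderer will be able to use it through OpenCL.

[CODE]
# Minimal sketch: enumerate every OpenCL platform/device the installed
# drivers expose. Requires the pyopencl package (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    print("Platform:", platform.name, "|", platform.version)
    for device in platform.get_devices():
        print("  Device:", device.name,
              "| compute units:", device.max_compute_units,
              "| global memory (MB):", device.global_mem_size // (1024 * 1024))
[/CODE]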

Which I think is a stupid rumor. It makes no sense to cripple performance. When you want to sell hardware, you want the best benchmark numbers possible.
Besides, there were GTC sessions by Nvidia that explain a few things about achieving optimal performance of OpenCL code on Nvidia hardware: http://on-demand.gputechconf.com/gtc/2017/presentation/s7496-opencl-at-nvidia-best-practices.pdf and http://on-demand.gputechconf.com/gtc/2016/presentation/s6382-karthik-ravi-Perf-considerations-for-OpenCL.pdf
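Those slides mostly come down to how kernels are written and launched. As a toy illustration only (not Cycles code, just a made-up vector-add in PyOpenCL), the explicit local work size below is exactly the kind of knob whose best value differs between Nvidia and AMD hardware, which is one reason the same OpenCL code can behave very differently on each vendor:

[CODE]
# Toy PyOpenCL sketch: trivial vector add with an explicitly chosen
# work-group size. The 256 below is only an example; the optimal local
# size depends on the hardware, which is part of what the GTC slides
# linked above are about.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()        # pick a device (env var or prompt)
queue = cl.CommandQueue(ctx)

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

local_size = (256,)                   # try 64/128/256 and compare timings
program.vadd(queue, (n,), local_size, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print("max error:", np.max(np.abs(result - (a + b))))
[/CODE]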

And what about the WX 5100 for rendering? It has an attractive price and a low TDP.

I'm looking for an upgrade, thinking about the Vega 56 after the latest test results, but I'm also considering the WX 5100.

It is probably a rumour like you said, but there is a huge reason why they don't want OpenCL to succeed. Their 6000€ pro GPUs make a 95% profit margin, and they can only do that by keeping everyone else locked into CUDA.

Nvidia still has 85% of the market, and if 85% of the cards on the market suck at OpenCL and Vulkan, then 85% of the population will think they suck.

The Radeon Pro Duo is two Polaris chips, same as the RX 480: same core count and all, just slightly lower clocks and better binning.

In the end it is the same silicon, just different drivers, so the WX 7100 or whatnot is most likely just an RX 480 with different drivers.

[QUOTE]
The main disadvantage of the Vega 64 is power… it consumes a bit more than the GTX 1080 Ti.
[/QUOTE]

What's the GTX 1080 Ti's power draw while rendering?

As Bliblubli said, Vega is around 200 W.

With the GTX 1080, the whole machine draws approximately 225 W while rendering (76 W idle without the GPU working), so the card alone would be ~150 W.
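Spelled out (same numbers as above, nothing newly measured):

[CODE]
# Wall-power numbers quoted above; the card-only figure is just the
# difference between the machine rendering and the machine idling.
system_rendering_w = 225   # whole machine while the GTX 1080 renders
system_idle_w = 76         # machine idle, GPU not working
card_only_w = system_rendering_w - system_idle_w
print("GTX 1080 alone while rendering: ~%d W" % card_only_w)   # ~149 W
[/CODE]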

Finally, I have to choose a path:

My goal is to get enough speed to score 15,000 pts in the LuxMark 3.1 complex scene benchmark. For this I need:

3× GTX 1080 Ti ===>> £3,200 http://www.materiel.net/carte-graphique-geforce-gtx-1080-ti/

3× RX Vega 64 ===>> £2,000 http://www.materiel.net/achat/rx-vega-64/

Or (if it can reach 15,000 pts):

2× RX Vega 64 LC ===>> £1,600 http://www.ldlc.com/fiche/PB00234426.html
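A rough way to compare the three options on price alone (prices and card counts are the ones above; whether each setup actually reaches 15,000 pts is still an assumption, so treat this as a sketch rather than a benchmark):

[CODE]
# Price comparison only. The 15,000-pt target is the goal stated above;
# per-setup LuxMark scores are assumed, not measured.
target_pts = 15000

options = [
    ("3x GTX 1080 Ti",   3, 3200),   # price in GBP, as quoted above
    ("3x RX Vega 64",    3, 2000),
    ("2x RX Vega 64 LC", 2, 1600),
]

for name, cards, total_price in options:
    print("%-16s %5d GBP total, ~%4d GBP/card, ~%3.0f GBP per 1,000 pts"
          % (name, total_price, total_price // cards,
             total_price / (target_pts / 1000.0)))
[/CODE]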

With the latest driver, which has a working WattMan, I lowered the power draw from 200 W (power save profile) to 170 W with 2-3% higher performance and a quieter card. Note that the performance and balanced profiles perform worse for long renderings because of throttling. With a good voltage (1040 mV in my case), my card renders at 1.6 GHz all the time, which is the boost speed, and it is stable.

Interesting. If you find gains on all sides, why is that not the default? (Honest question, as there might be reasons I don't understand.)

I think their marketing department is in a 'who has the biggest one' competition with Nvidia. I'm pretty sure the engineers would have made completely different decisions otherwise. Thing is, to squeeze the last MHz out of a chip, you have to increase the voltage a lot, losing efficiency pretty fast. But efficiency is not as important in the gaming market as FPS. They know miners are skilled people who will lower the frequency and voltage to find the optimum, so in the end the defaults are made to produce nice charts on the gaming benchmark sites.

On top of that, another factor is the percentage of chips you want to be able to sell. Before, cards were sold like CPUs at different frequencies depending on chip quality. Today, the segmentation is done by completely deactivating parts of the chip, but with one frequency for all. So the very good chips and the very poor chips are all sold at the same frequency, and thus the driver has to set a voltage that will also work on the worst one, in a bad tower, in summer, with a cheap PSU and a cheap motherboard. So in the end you have an enormous safety margin, which is useless for most consumers.

Regarding Cycles, it underuses GPUs (both CUDA and OpenCL, as it's not that easy to keep thousands of units busy all the time). Idle units don't draw power. This is why you can lower the voltage even more while keeping the max frequency and a quiet card.

Man… so much to read, so little time… I hardly play games at all anymore, so Vega and Threadripper won my new-build investment dollars. AMD is better focused for workloads… and hopefully… I can make a little magic in another year or two. Anyhow,

Does anyone know the optimal tile size for Vega FE? I'm using the tile size addon but… how can it know?

Getting 4:32.8 on the Classroom scene. Haven't tried a 1950X blend yet, and don't really intend to, as it's only air-cooled for now. Wasn't happy with the lack of selection of closed-loop water coolers. Most do not come close to covering the CPU, and some don't even cover the dies. I don't care. I refuse to throw a full-size blanket on a California king-size bed. And I use Windows 10 too… I know I'm terrible, but I just want the platform that's the focus of the most current security efforts. But yeah, optimal tile size for Vega FE?

Thanks.

320*270 or around that size. Try it out.
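If you'd rather set it from a script than click through the UI each time, this is roughly how it looks in Blender 2.79's Python console (a minimal sketch; the tile values are just the suggestion above, and it assumes a GPU compute device is already enabled under User Preferences > System):

[CODE]
# Minimal sketch for Blender 2.79: render on the GPU with the suggested
# tile size. Property names are the 2.7x ones (tile_x/tile_y).
import bpy

scene = bpy.context.scene
scene.cycles.device = 'GPU'     # needs a CUDA/OpenCL device enabled in prefs
scene.render.tile_x = 320
scene.render.tile_y = 270
[/CODE]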

Thanks.

Been playing with more benchmark tests (I'm still quite new to this realm). I just did Fishy Cat without changing settings, only switching from Experimental to Supported, and noticed the entire scene renders as one tile. Is that intentional for the test? Got 5:36.4… when I looked up an article on times, the only things in that particular BN article under 10 minutes were render farms. I did it again with your recommended tile size and it trimmed off a minute. Pretty happy with my choice. Would love to get another card down the road, but first I need to produce stuff to justify it. =)