AMD RX Vega

Nice speed-up there. Monster machine.

Hi Everyone,

I'm new to the community, first-time post. The company I work for gave me a machine with a GTX 1060 6 GB, and I have a few video projects I'm working on. It became apparent that there weren't enough hours in the day (or night) to render all the frames and meet the deadlines, so after watching CG Geek's video on YouTube we decided to order in a Radeon Pro Duo 32GB. Render time on the GTX 1060 averages around 4 minutes per frame; to my shock/horror/bewilderment, the Pro Duo is taking 11 minutes per frame... As a trial I popped my personal GTX 750 Ti in and got a result of around 13 minutes...

One thing I noticed is that while rendering with the 1060 or 750 Ti the system is pretty useless for anything else, but the Pro Duo seems to render in the background and responsiveness isn't affected at all.

Are there some special settings I need when setting up OpenCL to get the 'almost as fast as two 1080s' times that CG Geek got?

Thanks,
Paul

Bigger tile size is important. What do you have now? Try 320x270.

What version of Blender do you use? Newer Blender builds are much faster on NVIDIA than on AMD.

Not sure, but IIRC AMD likes a full-frame tile size and NVIDIA ~256x256. This is tested :wink:

PS
For quick presets, activate the Auto Tile Size addon.
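
Or, if you'd rather set it by hand, something like this in the Python console does the same thing (a rough sketch only: the property paths are from the 2.7x-era bpy API, and 512x512 is just an example value to start from):

    import bpy

    scene = bpy.context.scene
    scene.cycles.device = 'GPU'   # render Cycles on the GPU instead of the CPU

    # Rule of thumb from above: AMD/OpenCL tends to like big tiles,
    # NVIDIA/CUDA something around 256x256.
    scene.render.tile_x = 512
    scene.render.tile_y = 512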

The other question is: are you recompiling the kernel for every frame? As in, are you rendering via the command line using the -a flag?

With CUDA the kernels are already compiled, whilst OpenCL compiles them on demand per Blender instance (IIRC). The comparison should be made from the second frame rendered onwards.
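
To make the difference concrete, here is a rough sketch of the two ways of driving a render from outside Blender (the file name, frame range and function names are made up; -b, -a and -f are the standard background/animation/single-frame flags):

    import subprocess

    BLEND = "project.blend"   # hypothetical .blend file

    def render_whole_animation():
        # One Blender instance renders every frame (-a), so with OpenCL the
        # kernel is compiled once and then reused for the rest of the run.
        subprocess.run(["blender", "-b", BLEND, "-a"], check=True)

    def render_frame_by_frame(first=1, last=100):
        # A fresh Blender instance per frame (-f) recompiles the OpenCL
        # kernel every time, adding a large fixed cost to each frame.
        for frame in range(first, last + 1):
            subprocess.run(["blender", "-b", BLEND, "-f", str(frame)], check=True)

    if __name__ == "__main__":
        render_whole_animation()

Either way, the very first frame still pays the one-off compile cost, so compare timings from the second frame onwards.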

Is AMD calling it quits on Vega (the discrete market at least)?

If this is true, then Nvidia is very close to having a monopoly in this market (even though AMD continues to do okay in the lower end). Right now, it looks like Vega will mostly be a brand for scientific tasks and integrated graphics.

In a sense, the only choice right now for serious work is Nvidia (at least until AMD gets up to speed with their new multi-die Navi GPUs, which should let them actually sell a high-end card with a decent profit margin). By saying Nvidia is the only choice for higher-end work, I mean there's literally no alternative you can actually buy right now.

Pretty sure the shortage is linked to the crypto-mining performance of AMD cards (if you undervolt them, you can get a pretty decent ROI).

Like doublebishop said, both AMD and Nvidia are currently making a great amount of profit selling to cryptocurrency miners, so I really doubt that they are discontinuing any of their high-tier products.

Also, cryptocurrency mining demand is very situational and spiky. AMD has been caught with their pants down before, stuck with oversupply after the crypto market crashed. They don't want to see that again.

GPU prices are about twice what they should be. It's like we need PCIe cards with a CPU installed instead of a GPU.

Someone needs to make this, but I think access to RAM is the slow part over PCIe.

Hi, sorry if this was already discussed... but is anyone here using Raven Ridge APUs with Blender? Just curious, because I want to build a budget workstation with a Ryzen 5 2400G and no graphics card. It's very hard to find benchmark tests for modern APUs. Thanks for any feedback!

Just wanted to follow up and see if you ended up getting the Vega and how it worked out. I am considering the same. I also do archviz, and accessing system memory would be a great help. But is a card with HBCC needed for this? With OpenCL, does it access system memory on any card?

Sorry for the late response. I actually did (before the crypto mining took over); I bought an XFX Vega 56 and I'm really glad I did. HBCC works really well with Blender/Cycles: I'm able to render bigger/more complex scenes on the GPU that I could otherwise only render on the CPU. I think HBCC is needed for using system memory as VRAM for now; I searched for an alternative but didn't find any. Rendering with the Vega has been very stable. It's definitely an upgrade coming from a 580 8GB.

Using system memory works on any OS, with or without HBCC, starting with Polaris, so Vega also works like that. HBCC should only make performance a bit better under Windows 10, that's all.

What should I buy:
2x1070 8GB
1x1080 8GB or maybe 11GB version
1xRX Vega 64 8GB

Currently I'm using a GTX 970.

I'm considering the Vega 64 but I've never had an AMD GPU before.

Are there any special settings for a Vega GPU (like 'debug mode 256'), or does it work out of the box like CUDA?
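
For reference, the only OpenCL-specific setup I know of is picking the compute device in User Preferences; from a script that looks roughly like this (a sketch against the 2.79-style bpy API, and I haven't been able to test it on a Vega):

    import bpy

    # Select OpenCL as the compute backend ('CUDA' for NVIDIA cards).
    prefs = bpy.context.user_preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'OPENCL'
    prefs.get_devices()           # refresh the list of detected devices

    # Tick every detected device, then tell the scene to render on the GPU.
    for device in prefs.devices:
        device.use = True
    bpy.context.scene.cycles.device = 'GPU'

In 2.80+ builds the preferences live under bpy.context.preferences instead of user_preferences.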

IMHO 1x1080 doesn't make sense. It's only marginally faster than the 1070 but has the same amount of RAM. I would go for the 2x1070 if you are more after speed, or for the 1080 Ti if you are concerned about RAM. Can't say about the Vega, I don't have experience with it.

Dual 1070, forget AMD. I can't even render on the daily build.

I'd recommend AMD/ATI Radeon graphics cards, they work so much better (especially on GNU/Linux).

I'm waiting on an MSI laptop with Ryzen + Vega at my local Memory Express store myself.

I don't know about you, but my system is pure GNU OS + software. And UEFI is a "mini-OS" in itself, to the point that two of my laptops have been firmware-flashed and damaged. But with a plain BIOS, hackers can't touch my other non-UEFI, non-Intel, non-Windows computers (I use Puppy Linux version 7.0.8.5). :yes:

As most people are not constant targets of cyber attacks, they do not know this. But if you value your computer system's security, it is best to avoid Intel + Nvidia. Intel's Management Engine (ME), if used along with UEFI + Windows 10, makes your system totally vulnerable to remote control (even if you power down your computer)...

Some useful advice:

I do game programming in both BGE and Godot. I know some of you use the GPU to render with Cycles.

But here's a little secret... for my animation movies, I use Blender Internal to render. The speed + quality is simply amazing, and your scene is not limited by GPU RAM.

Also, on my Windows 10 laptop with an Nvidia 960M, I noticed Blender 2.8 EEVEE takes literally minutes just to load the demo files, and the viewport is no better than rendered view in either BI or Cycles.

Perhaps it's because it's a 960M, but my laptop is a gaming laptop and I didn't mess with the pre-installed drivers.

That sounds like half-nonsense; can you please give a credible reference? There is some truth to the vulnerabilities in ME, but there are also vulnerabilities in the AMD equivalent of ME (the "Secure Processor"). Linux isn't going to protect you from that stuff.

If you really want to be "safe" and run a pure FOSS workstation, you need to give up on x86 and get one of these.