These theoretical numbers (FLOPs) didn’t reflect performance in real-world benchmarks. On specs alone, Vega should match the 1080Ti. In reality, it’s closer to a 1070. Again, the drivers are not “done”.
As for “OpenGL performance”, it’s much more complicated: there is a set of archaic “professional” OpenGL features that is intentionally throttled on non-professional GPUs. That’s what influences those SPEC benchmarks (CATIA, SolidWorks…). AMD has decided to give the Vega FE a bit more leverage here, but this is not an architectural advantage.
By and large, Blender is not affected by this (double-sided shading being the exception on NVIDIA), nor should it be. Blender should be using the same modern OpenGL features that OpenGL games (such as DOOM 2016) use, to avoid being throttled on consumer hardware. Of course, nothing prevents vendors from detecting and throttling Blender usage explicitly, but that’s another story.
I’m no expert on compiling, but couldn’t the compiler convert FP32 to FP16 while compiling?
No. How would the compiler know it is safe to make that transformation? Variables need to be declared or annotated explicitly as half-precision.
FP16 (per channel) is enough for many image-processing tasks, but not for general computation.
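A quick way to see why a compiler can’t silently demote FP32 to FP16: the rounding changes values. A small Python sketch (using `struct`’s `'e'` format code, which is IEEE 754 binary16, to simulate storing a value in FP16):

```python
import struct

def to_fp16(x):
    """Round a Python float to IEEE 754 half precision and back,
    simulating what storing x in an FP16 variable would do."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# FP16 has a 10-bit mantissa: integers are exact only up to 2048.
print(to_fp16(2048.0))  # 2048.0 -- representable
print(to_fp16(2049.0))  # 2048.0 -- silently rounded to a different value
# Only ~3 significant decimal digits survive:
print(to_fp16(0.1))     # 0.0999755859375
```

That last error (~0.02%) is invisible in an 8-bit color channel, which is why FP16 is fine for imaging, but it is far too coarse for general computation, so the programmer has to opt in explicitly.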
You know, tbh, even after actual benchmarks come in, I still have to stick with NVIDIA, because AMD just has crappy support on Linux. Even back when I used Windows, ATI cards had hacky drivers, or flaky hardware; not sure which, but my systems always failed after a while because of an ATI/AMD video card. I do want them to garner more OpenCL and Vulkan support, though; I’m always happy to see widely supported open standards, so good luck to AMD. I’m very happy about their new CPUs, and maybe the GPUs will end up terrific, but I’ll still be using NVIDIA for at least two years: the amount of time I assume it would take for good Linux support to appear, if they take that path at all.
Have you tried the new amdgpu driver? The old proprietary fglrx one was pretty bad, but I’ve heard good things about amdgpu. I’m currently looking for a new mid-range GPU, and while they say NVIDIA cards work better on Linux with their own proprietary driver, I’d prefer a true open-source one coming from the manufacturer itself. Too bad my 5670 is too old to be supported by amdgpu… Intel’s HD Graphics may be poor performers, but their open-source drivers have been the most rock-solid in my experience.
Who’s “they”? :) I’m not against one day using AMD graphics; I really couldn’t care less, so long as I have stability. I just need to hear from other Linux users that it works as solidly as NVIDIA does, and I don’t make changes lightly; this would require hearing good things for about a year. The truth comes with time.
On a side note, Intel chips are just too far behind the power curve for me to buy one in the next 5 years. If they ever get serious, I think it would take at least that long for them to make it work.
Kind of frustrating that there are no Blender benchmarks of this GPU done by an actual Blender user. The only number we got was from the GamersNexus benchmark, where we don’t know what kind of .blend file was used. Their Blender crashed, so they switched to a build that a dev says has significant slowdowns.
People will use the RX Vega anyway, because it will be much cheaper. So we’d better wait for RX Vega benchmarks, hopefully with 2.79 and not outdated builds.
The issue GamersNexus identified is that they had a lot of trouble getting Blender running on the RX Vega at all. So getting one now just for Blender is unfortunately a waste of your money.
I will admit I’m still tempted, as it would still be a nice boost from my R9 Fury (non-X). That power consumption, though… still in the 280 W range; it seems that is AMD’s sweet spot. It would be cool if they released it at sub-250 W, though.
We will know soon enough, as info will be released during SIGGRAPH.
I guess that, like many other websites, they tested with the 2.78c release. The latest buildbot build should work out of the box since the OpenCL buffer patch.
Regarding consumption: as long as Cycles uses GPUs at 40–60% of their real capacity, it will stay very low. I have a build with higher occupancy (about 20–30% faster than master), and even with it, lowering my RX 480 voltage from 1.2 V to 0.97 V is very stable; the GPU then draws less than 80 W, nothing compared to the 180 W peaks seen in video games.
Of course, if some fat is removed from the kernel in 2.8, usage may go up and you will have to set your voltage higher, but I don’t think it will ever reach what games draw.
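As a rough first-order sanity check on why undervolting helps so much (my assumption, not measured data): dynamic switching power scales roughly with the square of the core voltage at a fixed clock, so the 1.2 V → 0.97 V undervolt alone trims that component by about a third:

```python
def undervolt_power(p_watts, v_old, v_new):
    """First-order estimate only: dynamic power ~ C * V^2 * f,
    so at a fixed clock it scales with (v_new / v_old)^2.
    Ignores static/leakage power and any clock changes."""
    return p_watts * (v_new / v_old) ** 2

# Hypothetical 180 W gaming-style load, undervolted from 1.2 V to 0.97 V:
print(round(undervolt_power(180, 1.2, 0.97), 1))  # 117.6
```

Combine that with Cycles only occupying the GPU 40–60% of the time and the sub-80 W figure looks plausible.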
Based on this, at least for compute, the Vega FE in its current limited form is almost 2× as fast as the RX 480 in LuxRender. I know numbers don’t translate directly between LuxRender and Cycles, but still.
Only about 3 more weeks before SIGGRAPH and more details…
Two RX 480s are faster than the 1080 Ti, which is faster than Vega. So the best thing to do, I think, is to wait for mining to become unprofitable, then buy a mining motherboard with 11 PCIe slots and 11 RX 480s. Prices will drop heavily because of the flood of used cards. Before mining, the RX 480 could be found for 180 €, so maybe 140–150 € during the market flood, which means less than 1500 € for 10 of them and damn fast rendering; it would be as fast as 6 × 1080 Ti with my build.
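Taking the post’s own assumptions at face value (two RX 480s outrun one 1080 Ti, and a hoped-for post-flood price of ~150 €; none of this is measured data), the back-of-envelope math works out like this:

```python
# All numbers are the post's speculative assumptions, not measurements.
rx480_price_eur = 150     # hoped-for used price after a mining-market flood
rx480_count = 10
rx480_per_1080ti = 2      # claim: 2 x RX 480 >= 1 x 1080 Ti

total_cost_eur = rx480_count * rx480_price_eur
equivalent_1080tis = rx480_count / rx480_per_1080ti

print(total_cost_eur)      # 1500
print(equivalent_1080tis)  # 5.0
```

So the 1500 € figure checks out, and 10 cards land at roughly 5× 1080 Ti throughput on stock Cycles (the “6×” claim relies on the poster’s faster custom build). It ignores the motherboard, PSU, and electricity for an 11-GPU rig, which are not trivial.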
And if you are also into other sims/VFX (as with Houdini, Maya), then from what I’ve observed lately, Vega is paving the way (OpenCL has started to show its flexibility in more complex ways).