Plus, like others mentioned, the power use is quite a bit higher.
I posted above a chart that shows the performance per watt. It is an Nvidia chart, so take it with a grain of salt, but even if only half true it is significant. If true, it means you get 2x the performance at 150 W, and more above that.
Iām not a programmer, but I believe the shading is done on the CUDA cores and other aspects take advantage of the RT cores if Cycles supports it. Nvidia support has been great so far.
I am also kind of hyped about Omniverse + 4000 series support. I wonder if you can generate an .exe from it; being able to create a racing game is kind of appealing.
If you want to create a 3D app you would use Unreal, Unity or another game engine. Omniverse will be an Nvidia āaquariumā. By that I mean it will be its own world where Nvidia sets the rules and reaps the profits. A bit like a VR version of AOL. A āwalled gardenā.
The hardware will be pushed to the edge of breaking, though; that is, unless Nvidia gives you a way to put the card into a lower gear so you know for sure it will keep working years from now (a ādonāt break my hardwareā mode that does not involve booting into the UEFI).
In fact, Intel and AMD are really pushing their products to the edge too (meaning you pay more for chips with an increased failure rate). I wonder how long until the Geico gecko starts yapping about insurance for computer hardware?
You already get a ārecommendedā $50-a-month Allstate insurance plan when you buy a GPU that you have to opt out of. I canāt believe there are people out there who would actually buy that.
As Igorās Lab has proven, you could potentially undervolt it to 350 W and take only a minor performance hit: roughly 40% less power for 10% less performance.
The fastest GPUs have never had good perf/watt, because every extra % of performance costs you a few % of power.
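To see what those undervolting numbers imply, here is a quick sketch of the perf-per-watt arithmetic, using the figures quoted above (10% less performance for 40% less power) as assumptions rather than measured values:

```python
# Rough perf-per-watt arithmetic for the undervolting claim above.
# The -10% perf / -40% power figures are taken from the post, not measured here.
baseline_perf, baseline_power = 1.00, 1.00   # normalized stock values
uv_perf, uv_power = 0.90, 0.60               # undervolted: -10% perf, -40% power

efficiency_gain = (uv_perf / uv_power) / (baseline_perf / baseline_power)
print(f"perf/W improvement: {efficiency_gain:.2f}x")  # -> 1.50x
```

In other words, if those figures hold, the undervolted card does 90% of the work on 60% of the power, which is a 1.5x perf/watt improvement over stock.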
Anyway, Iāve composed a table with the TGP of RTX GPUs for easy comparison:
Itās hard to compare just TGP because they have now introduced a Max TGP, but besides that, in this generation, unlike previous ones, the base TGP within the same segment (so 1080ā2080ā3080ā4080) remained the same.
I prefer to spend less; I have a 3070 that Iām perfectly happy with. I was hoping the 4000 series would have more VRAM, so in 10 years when I upgrade I could get one of those, but I guess not. Hopefully at that point thereās a good professional GPU with a ton of VRAM that doesnāt require its own generator. Even the 3090 takes more power than Iād like.
You will not spend fewer watts. Since the 3000 series appears to be less efficient, it will take more time working to achieve the same results, using more energy overall.
Now, if you are happy with what you have and the kind of work you do doesnāt warrant another investment, it is your right to make the correct call for your situation.
I agree that the VRAM situation with Nvidia continues to be disappointing, and I bet that laptops will continue to be an even bigger problem. That is one of the reasons AMD, Intel, Apple, and Qualcomm(?) need to step up their game.
I am keeping an eye out for HEVC 4:2:2, to see if there is now hardware decoding in the 4xxx series; DaVinci Resolve seems to have gotten a boost with the 4xxx too.
Despite using specialized hardware such as the RT cores, the ray tracing pipeline still relies on CUDA cores and the CPU for a handful of tasks, and here NVIDIA claims that SER contributes to a 3X ray tracing performance uplift (the performance contribution of the CUDA cores). With traditional raster graphics, SER contributes a meaty 25% performance uplift. (ā¦) The Tensor cores deployed on Ada are functionally identical to the ones on the Hopper H100 Tensor Core HPC processor, featuring the new FP8 Transformer Engine, which delivers up to 5X the AI inference performance over the previous-generation Ampere Tensor Core.
The third-generation RT Core being introduced with Ada offers twice the ray-triangle intersection performance over the āAmpereā RT core, and introduces two new hardware componentsāOpacity Micromap (OMM) Engine, and Displaced Micro-Mesh (DMM) Engine. OMM accelerates alpha textures often used for elements such as foliage, particles, and fences; while the DMM accelerates BVH build times by a stunning 10X.
From what this test is showing, the 3090 idles at ~17-20 W and the A6000 at ~11-13 W; itās also showing GPU utilization/RAM usage/power consumption for things like viewport navigation and final rendering.
So you should expect the same for the 4000 series, with their 2x power efficiency.
My laptop RTX 3060 in performance mode idles at 13 W. Changing tabs here on Blender Artists, which is not heavy, doesnāt change that value; it stays at 13.xxx W, with only the .xxx changing, and I have 3 browsers open with about 100 pages overall. Switching to a digital store site tab, it goes to 24 W. Playing an HEVC 1440x1080 23.97 fps video in VLC doesnāt add much; a large part of the time it is still at 13-14 W. A 4K 60 fps video on YouTube goes up to 26 W at the start and drops to 14-17 W after.
It is actually less overall than in hybrid mode, since the APU+RTX goes to 18 W or so. That was a surprise I discovered recently.
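If you want to check those idle numbers on your own card, a minimal sketch like this can read the power draw via `nvidia-smi` (the `--query-gpu=power.draw` and `--format=csv,noheader` flags are standard nvidia-smi options; this assumes an NVIDIA driver is installed):

```python
# Sketch: query current GPU power draw through nvidia-smi.
# Assumes nvidia-smi is on PATH; parse_power handles lines like "13.45 W".
import subprocess

def parse_power(line: str) -> float:
    """Parse one line of power.draw CSV output, e.g. '13.45 W' -> 13.45."""
    return float(line.strip().rstrip("W").strip())

def gpu_power_watts() -> list[float]:
    """Return the current power draw (in watts) of each detected GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [parse_power(line) for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    print(gpu_power_watts())
```

Polling that in a loop while navigating the viewport or playing a video is enough to reproduce the kind of idle-vs-load comparison described above.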