True, and sorry for the off-topic: I think the new Surface Laptop Studio looks great, even if slightly bulky.
Last week I composed an animated scene on my MacBook Air M1.
Rendering one frame on the GPU took approximately 1.5 hours at 11 W wall power; that's 16.5 Wh for the render job (I didn't wait for completion).
My PC with 2x RTX 2070 renders a frame in 2.5 minutes using 350 W wall power; that's 14.6 Wh for the same rendering job.
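For anyone who wants to sanity-check that comparison, here is the arithmetic written out as a tiny Swift sketch (energy = power × time, using only the wattages and times quoted above, which are rough wall readings rather than a controlled benchmark):

```swift
// Back-of-the-envelope energy per frame, from the figures quoted above.
// Energy in Wh = power in W x time in hours.
let m1AirWh = 11.0 * 1.5                 // MacBook Air M1: 11 W for ~1.5 h        -> 16.5 Wh
let desktopWh = 350.0 * (2.5 / 60.0)     // 2x RTX 2070 desktop: 350 W for 2.5 min -> ~14.6 Wh
print(m1AirWh, desktopWh)                // 16.5 vs ~14.58 Wh per frame
```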
Nvidia RTX is so much faster that it cancels out the higher power consumption.
An RTX 3080 laptop runs more efficiently, at lower wattage, than the desktop card, so it should consume even less power for the same render job.
Apple's GPUs are currently no competition for Nvidia's RTX cards; an M2 with RT cores might be.
That's for final rendering; composing scenes is so much more fun on the Mac.
I'd recommend an XMG Neo with the Ryzen 5900HX and the RTX 3080 (150 W), because those are the laptop versions tuned for the highest efficiency.
You wouldn't be here if you didn't care. So I guess you get points?
I always find it funny how people who are obviously invested in a topic all of a sudden act like they don't care when they get called out.
Why not work with both?
I have started to love working on the Mac mini, and when I need more GPU I Moonlight or MS Remote Desktop into the PC.
Some architecture software is Windows-only, so I am stuck with that; the rest is all on macOS too.
Apple laptops are not more efficient than PC Nvidia laptops for rendering in Blender.
I can render BMW27 in 12 sec on an RTX 3060 consuming ~120 W; the CPU takes less than 15 W, and all the other stuff (SSD, memory, screen, ports) certainly would not take more than 40 W. So let's say 170-180 W. If the M1 Mac takes 41 seconds to render at 90 W, crude accounting means 90 W × 41 s for the Mac while the PC laptop is spending 180 W × 12 s. And that does not even take the time benefit of a shorter render into account, just energy efficiency.
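Writing that crude accounting out explicitly (Swift, just multiplying the figures above; joules = watts × seconds, and the 90 W / 41 s Mac figures are the assumption from the previous sentence, not my own measurement):

```swift
// Crude energy accounting for one BMW27 render, from the numbers above.
let m1Joules = 90.0 * 41.0     // M1 Mac, assuming ~90 W for 41 s    -> 3690 J (~1.0 Wh)
let pcJoules = 180.0 * 12.0    // RTX 3060 laptop, ~180 W for 12 s   -> 2160 J (~0.6 Wh)
print(m1Joules / pcJoules)     // ~1.7x more energy on the Mac for the same frame
```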
And I can improve that number even more on battery: at 40 W the RTX 3060 takes 23 seconds.
Apple nevertheless has a big chance if Nvidia doesn't wise up and increase the video memory in its cards. I would even say that Nvidia has no future in 3D DCC if it doesn't at least double the video memory within 2 years.
I think that CPU is already heavily into diminishing returns.
I think I might have to agree they are less efficient than some believe, or at least with how Blender and Cycles run at the moment.
BMW on the GPU only takes about 92 sec, and during that time it uses between 20-22 W on the binned M1 Pro (so 8-core CPU and 14-core GPU); it uses 3-5 W while I am writing this, if anyone is curious.
Rendering with both CPU and GPU is not worth it, as the time gain is too small for the power it uses.
It then takes 86 sec but uses about 39-41 W most of the time, which is about double the power for only 6 sec saved.
So if a PC laptop with an Nvidia card uses, let's say, 167 W and does it in 12 sec, both machines would use about the same amount of energy for the same job.
If a PC with an Nvidia card managed that at around 120 W and 12 sec, it would actually be more efficient.
Also, the M1 Max sounds less efficient than the binned 14-core.
47 sec (on the later builds) will use, what, 60 W with GPU only?
That is 1.95x as fast, with 2.28 times more cores, but at 3 times the power use?
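Putting rough numbers on that, with the same watts × seconds accounting (the 167 W and 120 W PC figures are hypothetical, and the 60 W for the M1 Max is only my guess from the question above):

```swift
// Energy per BMW render from the figures in the last few posts (joules = watts x seconds).
let m1ProGpuOnly = 21.0 * 92.0    // binned M1 Pro, GPU only           -> ~1930 J
let m1ProGpuCpu  = 40.0 * 86.0    // binned M1 Pro, GPU + CPU          -> ~3440 J
let pcAt167W     = 167.0 * 12.0   // hypothetical PC laptop at 167 W   -> ~2000 J (roughly a tie)
let pcAt120W     = 120.0 * 12.0   // hypothetical PC laptop at 120 W   -> 1440 J (clearly ahead)
let m1MaxGuess   = 60.0 * 47.0    // M1 Max, GPU only, guessed 60 W    -> 2820 J
print(m1ProGpuOnly, m1ProGpuCpu, pcAt167W, pcAt120W, m1MaxGuess)
```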
On balanced power I can render the BMW scene in 23 sec, drawing less than 80 W (GPU at 40 W instead of 120 W), so a total of 90-100 W. The current big Nvidia issue is their low graphics memory.
I know Apple's relations with Epic are not good, but Apple has a significant advantage over Nvidia in Unreal there because of the memory, since it is a hard cut-off. My RTX 3060 has only 6 GB, and while that is good for small scenes, it is not workable for medium-complexity or big scenes. I would say the workable level starts at 12 GB of video memory at least.
Yes, memory is definitely a big advantage for Apple at the moment. If you want to more or less match Apple you need to buy a 3090 with 24 GB, and these days that costs a fortune, about 3.3k here depending on brand.
I paid 500 less for my config but can potentially use the same amount of VRAM if I leave 8 GB for the rest.
Anyway, sorry for getting sidetracked again.
Here's hoping for some more Blender optimisations on the Apple machines.
OK, I learned: current and coming Mx Apple devices are useless and overpriced. So I will stop buying them and go back to PC.
Throw out my AMD GPU and go Nvidia-only.
Also, Michal Jones can go back to concentrating on fixing my regressions in Mail and Safari.
Thread can be closed.
As a Linux user, I just have to say that I think you're ALL a bunch of nerds!
…which, yeah, I'll admit is rather hypocritical of me, cuz I use Linux.
I actually think all of y'all are goofy cause y'all actually render on your computers and not the cloud.
Suckers.
Anyone who uses the cloud for anything is one of those hipster wannabe types who always holds up the line at Starbucks ordering a coffee with 50 different ingredients.
…I would expect no less from the CEO of Apple.
Haha
Gotta do something with my billions.
You could buy all of us here our own Maserati. It'd only cost you, what, 5 minutes off your paycheck?
Let me think about it.
I respect you, sir.
Well, at least I moved out of my grandmother's basement!
If you compare this based on GPU rendering, yes: the shorter render time on an RTX also means it spends less time drawing significantly more power.
There is, however, more you can do with a laptop than GPU rendering, and yes, in that respect the current Apple laptops do have a clear advantage.
Or do you think Intel decided to change the core types in their new CPUs just on a fluke?
Well, can't you also use main RAM if the VRAM is full? I know it is slower, but it should hold the data with the new CUDA / NVIDIA drivers, as far as I know.
Currently my CPU is at around 3-8% at 2.9-3.1 GHz (and note this is an octa-core with 16 threads, so on CPUs with fewer cores it will be higher). I have 3 browsers with a total of about 50 tabs open, Blender is open but doing nothing, and some other small stuff is open. 11 GB of RAM in use in total.
The CPU is consuming 7 W.
The Nvidia consumes 13 W while at 1-4% use, which seems inefficient. If I go to Blender and rotate the viewport in shaded view as fast as possible with the cube, the Nvidia jumps to 23-36% and its consumption to 17-23 W. The CPU goes to 9 W.
So I would say that at this low-intensity use the Nvidia is a bigger issue than the CPU (I did not activate Optimus, so it can't turn off the Nvidia). I have an AMD 5800H, which is said to be more efficient than Intel's chips.
The maximum power I have seen was 133 W from the Nvidia and 82 W from the AMD, both in Unreal.
I did try something, as I got no reply from Jason.
His code said "max_threads_per_threadgroup = 512;".
According to Apple's documentation 1024 is the max you can do; however, if I change it to 1024 myself and build Blender, the rendering does not work.
Actually, 768 does not work either, so I have no clue why they put 1024 in their docs as the max.
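One possible explanation, though I have not verified it against the Blender Metal backend, so treat it as an assumption: the 1024 in Apple's docs is the device-wide maximum, while the limit that actually applies to a specific compute kernel is the pipeline's maxTotalThreadsPerThreadgroup, which can be lower depending on how many registers and how much threadgroup memory that kernel uses. A minimal Swift sketch for checking both limits ("kernel_main" is just a placeholder kernel name):

```swift
import Metal

// Device-wide upper bound; this is what the documented 1024 refers to.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("no Metal device") }
print("device maxThreadsPerThreadgroup:", device.maxThreadsPerThreadgroup)

// The limit that matters in practice is per compute pipeline and can be lower,
// depending on the kernel's register and threadgroup-memory usage.
// "kernel_main" is a placeholder; substitute the kernel you are actually building.
if let library = device.makeDefaultLibrary(),
   let function = library.makeFunction(name: "kernel_main"),
   let pipeline = try? device.makeComputePipelineState(function: function) {
    print("pipeline maxTotalThreadsPerThreadgroup:", pipeline.maxTotalThreadsPerThreadgroup)
}
```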
Anyway, I only tried it because I was trying to build the SSGI version, but it did not work.