I'm waiting to be corrected by someone who knows GPU architecture better than me, but as far as I can tell the synthetic benchmarks can trade blows with a mobile 3070 or 3080, while actual gaming performance isn't going to be as rosy.
Also recall that very few games needing that performance actually work on MacOS
"Also recall that very few games needing that performance actually work on MacOS"
But many Windows games do run under CrossOver (a commercial wrapper around WINE, well worth the measly licensing fee to me for the seamless ease of use) or the Windows 10 ARM beta in Parallels. I got so many games running on my M1 MacBook Air that I ended up returning it to wait for the next round that could take more RAM. I'm very, very happy I waited for these, and I fully expect one of them will replace my Windows gaming machine too.
Well, Apple's G13-series GPUs are excellent rasterizers. I'd expect them to do very well in games, especially with that massive bandwidth and those humongous caches. The problem is that not many games run on macOS. But if you are only interested in games with solid Mac support, they will perform very well (especially with a native client like Baldur's Gate 3).
Which other passively cooled laptop can do it? And what 3 year old card are you comparing it to? Hopefully something with 20W or lower power consumption.
45 fps at medium settings in Full HD is not far off a GTX 1650 Max-Q.
Apple compares itself to a 3080m, but the performance of an M1 isn't even close to a three-year-old card. I don't care if it only draws 10 W if I can't even play recent-ish games at 60 fps.
You may have confused last year's M1 (the one in the video, available passively cooled in the MacBook Air) with the new M1 Pro and M1 Max (the ones being compared to the more powerful counterparts).
It's really hard to compare Apple and Nvidia, but a bit easier to compare Apple to AMD. My best guess is performance will be similar to a 6700xt. Of course, none of this really matters for gaming if studios don't support the Mac.
The gaming performance will be CPU-bottlenecked. Without proper Wine/DXVK support, they have to settle for interpreted HLE or dynamic recompilation, neither of which is very feasible on modern CPUs, much less ARM chips.
From all reports, Rosetta 2 performs remarkably well. Apparently they added special hardware support for some x86 features (reportedly including the x86 memory-ordering model) to improve the performance of dynamically recompiled code.
The M1 Pro & Max (and the plain M1 too, for what it's worth) have unified memory shared across the CPU and GPU. So depending on the model the GPU can use up to 32 GB or 64 GB (minus whatever the CPU is using). Put differently: far more than a 3070 or 3080.
It's not quite apples to apples. The 3070 only has 8 GB of memory available, whereas the M1 Max has up to 64 GB. It's also unified memory on the M1, so no copy is required between the CPU and GPU. Some workloads will be better on the M1 Max and some will be worse.
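To make the "no copy" point concrete, here's a minimal Swift sketch, assuming a Mac with Metal. MTLCreateSystemDefaultDevice and .storageModeShared are standard Metal API; the buffer size and fill loop are arbitrary, purely for illustration. A shared buffer is one allocation that the CPU writes and the GPU reads directly, where a discrete card like a 3070 would need an explicit upload into its own VRAM.

    import Metal

    // Rough illustration of unified memory: a .storageModeShared buffer is
    // visible to both the CPU and the GPU, so there is no staging copy or
    // PCIe upload step. (Sizes and values here are arbitrary.)
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("No Metal device found")
    }

    let count = 1_000_000
    let length = count * MemoryLayout<Float>.stride

    // Single allocation, shared by CPU and GPU.
    guard let buffer = device.makeBuffer(length: length, options: .storageModeShared) else {
        fatalError("Buffer allocation failed")
    }

    // The CPU writes straight into the buffer...
    let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
    for i in 0..<count { values[i] = Float(i) }

    // ...and a GPU command encoder could bind this same buffer immediately.
    // On a discrete card the equivalent step is an explicit copy into VRAM.
    print("Shared buffer of \(length) bytes ready for the GPU, no copy needed.")

On a 64 GB M1 Max the GPU can in principle be handed a far bigger working set than a 3070's 8 GB of VRAM allows, which is the trade-off being described above.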
It's probably bad; the M1 couldn't even hit 60 fps in WoW. I'd take Apple's comparisons with a grain of salt, because the M1 isn't able to run any modern game at a decent frame rate.
MoltenVK is Vulkan's official translation layer to Metal, and it doesn't add too much overhead. Combine it with DXVK or vkd3d to translate from DirectX; pre-DirectX 12 titles are generally faster under DXVK than with Windows' native Direct3D support.
Apple compared the M1 Max to a 3080m. 4x the GPU cores and up to 8x the memory of the plain M1 make a difference, and it wouldn't be at all surprising to see that their numbers are accurate.
No one has explained what you got wrong, so in case anyone reading this is still confused: Apple compared an M1 Max to a 3080m, and the M1 Max's GPU is roughly 4x as fast as the plain M1's.
In the keynote Apple said the M1 Max should be comparable to the performance of an RTX 3080 Laptop (the footnote on the graph specified the comparison was against an MSI GE76 Raider 11UH-053), which is still quite a bit below the desktop 3080.
Seems kind of unfair with the NVIDIA using up to 320W of power and having nearly twice the memory bandwidth.
But if it runs even half as well as a 3080, that would represent amazing performance per Watt.
I believe they compared it to a ~100W mobile RTX 3080, not a desktop one. And the mobile part can go up to ~160W on gaming laptops like Legion 7 that have better cooling than the MSI one they compared to.
They have a huge advantage in performance/watt but not in raw performance. And I wonder how much of that advantage is architecture vs. manufacturing process node.
I am very confused by these claims about the M1's GPU performance. I built a WebXR app at work that runs at 120 Hz on the Quest 2, 90 Hz on my Pixel 5, and 90 Hz on my Windows 10 desktop with an RTX 2080 driving a Samsung Odyssey+ and a 4K display at the same time. And those are just the native refresh rates; you can't run any faster with the way VR rendering is done in the browser. But on my M1 Mac Mini, I get 20 Hz on a single 4K screen.
My app doesn't do a lot. It displays high-resolution photospheres, performs some teleconferencing, and renders spatialized audio. And like I said, it screams on Snapdragon 865-class hardware.
Productivity. It's a social VR experience for teaching foreign languages. It's part of our existing class structure, so there isn't really much to do if you aren't scheduled to meet with a teacher.
The MSI laptop in question lets the GPU use up to 165 W. See e.g. AnandTech's review of that MSI laptop, which measured 290 W at the wall while gaming: https://www.anandtech.com/show/16928/the-msi-ge76-raider-rev...
(IIRC, it originally shipped with a 155W limit for the GPU, but that got bumped up by a firmware update.)
The performance right now is interesting, but the performance trajectory as they evolve their GPUs over the coming generations will be even more interesting to follow.
Who knows, maybe they'll evolve solutions that will challenge desktop GPUs, as they have done with the CPUs.
A "100W mobile RTX 3080" is basically not using the GPU at all. At that power draw, you can't do anything meaningful. So I guess the takeaway is "if you starve a dedicated GPU, then the M1 Max gets within 90%!"