The iPhone 13 Pro, with the A15 Bionic, scored around 2250 in Geekbench 6 single-core. The M2 Max scores closer to 2800. These two chips are comparable because they use the same microarchitecture, and the gap indicates a ~20-25% single-core uplift achieved mostly by pushing more power through the chip.
If they can drive the same increase with an M3 chip based on the A17 Pro microarchitecture, a single-core score of 3500 is within reach. For a laptop.
Comparison: the i9-13900KS, the current stock single-core performance champion, can push 3100 with excellent cooling, and it reaches that blistering performance (~10% faster than an M2 Ultra) largely by drawing upward of 220 watts from the wall [1]. The Ryzen 9 7900X isn't far behind on either metric. An entire M2 Max SoC peaks at around 90 watts, and there's evidence that the CPU in isolation draws closer to 35 watts [2].
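Back-of-the-envelope, here's the arithmetic behind that projection, a rough sketch using only the Geekbench 6 numbers quoted in this thread; the ~2800 A17 Pro-class figure is an assumption for illustration:

```python
# Rough, illustrative arithmetic using the Geekbench 6 single-core scores
# quoted in this thread; none of these are official figures.
a15_iphone = 2250          # iPhone 13 Pro (A15 Bionic)
m2_max     = 2800          # M2 Max: same microarchitecture, laptop power budget

uplift = m2_max / a15_iphone
print(f"phone -> laptop uplift on the A15 generation: {uplift:.2f}x")   # ~1.24x

# Applying the same uplift to an A17 Pro-class score of ~2800 (an assumed
# figure) lands in the ~3500 neighborhood for a hypothetical M3.
a17_pro_assumed = 2800
print(f"projected M3 single-core: {a17_pro_assumed * uplift:.0f}")      # ~3484
```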
Current laptop AMD chips have about the same efficiency as the M2 Pro when throttled to a similar wattage (see the section about 35 W). They just default to drawing more power and delivering more performance, while being vastly cheaper (what does an M2 Pro Mac mini with 32 GB of RAM and 2 TB of SSD cost?).
So yeah, for a laptop chip Apple is ahead, but it is pretty easy to overstate by how much.
Apple’s (or actually TSMC’s) advanced packaging makes communication between dies faster. I think some of the performance gains come from memory accesses needing fewer clock ticks on an Apple chip.
It's already been shown that it's not just the node advantage. Just compare Apple's 7nm A-series chips to Zen 3, or the M2 to Zen 4. The power advantage for Apple is 3x-10x.
Actually, it's way worse for Intel and AMD if you compare single-core power alone.
If you run Geekbench 5 or 6 on an M1, the total package power ranges from 0.3 W to 5 W, mostly staying below 3 W. Intel and AMD can use 10x or more power to achieve similar or lower Geekbench scores.
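If you want to check numbers like this yourself on an Apple Silicon Mac, something like the sketch below works. powermetrics is the stock macOS tool (needs sudo), but the exact output format varies by macOS version, so the "CPU Power" parsing here is an assumption:

```python
# A rough sketch for sampling package power on an Apple Silicon Mac while a
# benchmark runs. The "CPU Power" line parsing is an assumption; the exact
# powermetrics output format varies by macOS version.
import re
import subprocess

def sample_cpu_power_mw(samples: int = 10, interval_ms: int = 1000) -> list:
    out = subprocess.run(
        ["sudo", "powermetrics", "--samplers", "cpu_power",
         "-i", str(interval_ms), "-n", str(samples)],
        capture_output=True, text=True, check=True,
    ).stdout
    # Expected (assumed) lines look like "CPU Power: 2750 mW"
    return [float(m) for m in re.findall(r"CPU Power:\s*([\d.]+)\s*mW", out)]

if __name__ == "__main__":
    readings = sample_cpu_power_mw()
    if readings:
        print(f"avg CPU package power: {sum(readings) / len(readings) / 1000:.2f} W")
```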
I find articles like this really vexing because this isn't a general-purpose CPU, it's just a chip meant to run the software that Apple allows on the App Store. Most of the software I use day-to-day can't run on it by design.
It's kind of depressing that the most advanced mobile CPU technology (that can only be fabbed by a single company in a far-away country) is monopolized by a company that heavily restricts the software you can run on it, and will never sell it to me with documentation and let me plug it onto my own motherboard--which is ironically how Apple first started, by buying and making full use of the MOS 6502. But the days of the garage hacker getting their hands on cutting-edge parts are long gone.
My definition of "general purpose" is that you can "run whatever software you like on it". If you can't run whatever you want on it, that restricts the CPU to a "specific purpose", no?
That's not the same thing. An instruction set is an inherent aspect of a CPU; you can't have a CPU without one. Also, you can emulate or translate instruction sets between general-purpose CPUs, or in most cases just recompile the higher-level source code. The underlying logic will be executed regardless.
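To make the emulation point concrete, here's a toy sketch: a made-up three-register guest instruction set interpreted on whatever host CPU happens to run Python. It's purely illustrative, not how a real translator like Rosetta works:

```python
# Toy interpreter for a made-up guest instruction set, run on the host CPU.
# Purely to illustrate "emulate or translate instruction sets between
# general-purpose CPUs"; nothing like a real binary translator.
def emulate(program):
    regs = {"r0": 0, "r1": 0, "r2": 0}
    for op, *args in program:
        if op == "movq":               # movq imm, dst        ->  dst = imm
            imm, dst = args
            regs[dst] = imm
        elif op == "add":              # add src1, src2, dst  ->  dst = src1 + src2
            src1, src2, dst = args
            regs[dst] = regs[src1] + regs[src2]
    return regs

print(emulate([("movq", 1, "r0"), ("movq", 41, "r2"), ("add", "r0", "r2", "r1")]))
# {'r0': 1, 'r1': 42, 'r2': 41}
```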
> […] An instruction set is an inherent aspect of a CPU; you can't have a CPU without one.
Oh, yes, you can.
The following are examples of viable, general-purpose (not generally available), albeit experimental, CPUs without instruction sets:
– Transport triggered architecture
– Dataflow architecture
– Optical and quantum computing
That is, none of them has anything similar to «movq $1, %r0» or «add.w %r2, %r0, %r1».
FPGAs are another, more conventional example of a general-purpose (even if specialised) CPU without an instruction set.
True. But it does make it functionally a non-general-purpose CPU. I suppose you can load your own programs if you have a dev account or jailbreak, right?
Unless it’s changed again, you don’t need a dev account to deploy apps to iOS - but it is somewhat limited. You can have at most 3 apps, the signing lasts for just a week, and IIRC the entitlements you can use are limited as well?
I disagree. $100/yr to put what might be a very simple little toy application on your own phone is not worth it. Sure, that toy might inspire more, and it could even lead to a promising career, but it’s a stretch to argue it was worth it when it could just as easily never be utilized.
The promise of general purpose computing and open source as well is that it empowers all kinds of users.
Oh how badly I want to tweak little things about my iPhone, but can’t because the software is locked down.
Compared to the cost and power of any of the hardware and software we are talking about, it is insignificant.
If you want to do toy programming on a budget, there are a million Arduino-like things out there.
I recently saw calculations putting the development cost of just iOS alone at the equivalent of multiple Manhattan Projects. Regardless of how accurate that estimate is, the general point remains.
Do I think Apple should make developing for non-distribution on the iPhone free? Yes. Do I think there’s an argument that it’s almost a moral obligation? Possibly.
But what you get for like 30 cents a day with a developer account is mind-boggling. I mean, that was the initial point, wasn’t it? These are insanely capable and meticulously engineered pieces of technology.
Edit:
I can even argue it is an ethical obligation that Apple should allow all users to install non-App Store apps if they explicitly so choose. And I have argued that in the past.
But I separate this from the evaluation of the value proposition offered by an Apple Developer account.
What you are missing is that basically everyone needs a phone; not everyone needs an Arduino. There is a fundamental difference between even 30¢ and free. Make developing free and suddenly you’ll have more developers.
Now it’s a whole other discussion whether that’s a good thing or not. But I personally like to believe that everyone should be able to develop as a hobbyist. The commercialization of software is often at odds with its users. Not so much when people are doing things for themselves and for fun.
I’m glad someone else feels this way. I absolutely hate those YouTube videos talking about the “4K 30min total render” bs. Show me an M-chip MacBook that doesn’t balk when running Docker, or chug when opening a fat codebase in WebStorm.
Sure, the chips are fast when they run the stuff Apple designed them for, but they don’t seem any faster to me in my work. I will give them efficiency, though. I love not having to carry the charging brick everywhere I go.
Personal anecdote: neither my M1 Air (personal) nor my M2 Pro (work) has any trouble with Docker and JetBrains. The fans don't even turn on on the M2.
Both have Apple Silicon builds now, but they and a lot of other software built for Intel ran at similar performance or better under Rosetta (compared to an Intel MacBook / ThinkPad).
I understand why people hate Apple, but I've never heard anyone say this (what the OP said) who has had a similar setup. It works a hell of a lot better than my X1 Carbon did.
Wtf, you do realize software is so wide-reaching (even just on “limited iOS”) that you can’t meaningfully optimize for all of it at the goddamn CPU level. Like what, this is not some embedded chip in a Happy Meal toy that plays a single dumb game - there is crypto, graphics, everything necessary for an OS.
Docker sucks on a Mac because containers need a Linux kernel, so it all runs inside a VM - it's a Linux-ism, not any other reason. And I think most people would disagree with your notion; it really does run circles around basically every other laptop.
I mean, the M1 chip in my MacBook Pro runs absolute circles around the Intel one that I had in the previous generation, and they were released within a year of each other, so... I honestly do not know what you're talking about.
That's not a high bar. I'm still stuck on the 2019 MacBook Pro for work, and it is significantly slower and hotter than my ThinkPad with Linux that I use for hobby programming.
Whenever I open IntelliJ on it I can feel the temperature of the room increase. The Intel Macs were terrible for many years.
Cannot wait for my Intel MBP to be refreshed into an M2 at work. Running Docker turns it into a jet engine. Like having a white noise machine with an unpleasant tone that also generates a ton of heat.
I went from a Core i9 to an M1 Max and it's been a world of difference. Performance is great, it runs my dual 4K screens with scaling (3008x1692, virtual resolution 6016x3384) without a sweat, and the battery lasts forever.
More so than the noise, the Touch Bar toward the back can become too hot to touch, and the plastic strip around the screen, where MacBook Pro is printed, is bubbling and peeling away from the heat.
I love how apple considers my skin part of the cooling solution. If it wasn't for legal limits, they would absolutely just burn people, sell a laptop cooling mat for $100, and tell people they are using laptops wrong.
I have a 14” M2 which has been up for 52 days with Docker running in the background, plus Rider and CLion. And it runs better than my Dell XPS 15, which heats the room.
> or chug when opening a fat codebase in WebStorm.
It'd be worth checking that you're not using an Intel version of WebStorm. I've personally found IntelliJ (largely the same thing as WebStorm) on the M1 to be _extremely_ fast, certainly leaps and bounds ahead of the old Intel Macs. That's the native version, though; via Rosetta it is absolutely painful (the JVM is close to a worst-case for Rosetta).
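One quick sanity check is to see whether the installed binary even ships an arm64 slice. A sketch below; the WebStorm binary path is an assumption and may differ for your install:

```python
# Quick check: does the installed IDE binary ship an arm64 slice, or is it an
# Intel-only build that would run under Rosetta? The WebStorm path below is an
# assumed example path -- adjust it for your install.
import subprocess

APP_BINARY = "/Applications/WebStorm.app/Contents/MacOS/webstorm"  # assumed path

def binary_architectures(path: str) -> set:
    # `lipo -archs` prints the architectures in a Mach-O binary, e.g. "x86_64 arm64"
    out = subprocess.run(["lipo", "-archs", path],
                         capture_output=True, text=True, check=True)
    return set(out.stdout.split())

archs = binary_architectures(APP_BINARY)
print("native Apple Silicon build" if "arm64" in archs
      else "Intel-only build (would run via Rosetta)")
```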
It’s depressing right now maybe, but at least it probably means we’ll have this level of performance inexpensively in a few years. The tech is still getting better quickly.
If you collect vintage compact Macs, you can literally see the hardware transition from the original Macintosh, designed like any other random 68k project within the reach of a homebrewer with a PAL programmer, to highly compact gate arrays and “if you don’t have the ability to design and obtain custom silicon, good luck”. VLSI giveth and VLSI taketh away.
This is the same chip family as the MacBook line uses. Once yields improve on N3 next year, you’ll be able to get even a MacBook Air that has this same chip, but with more cores.
This is super impressive on paper, but it feels like there aren't really many applications that can take advantage of this power on a phone due to thermal and (battery) power constraints.
Is the idea that mobile workloads are spiky so it can boost to full speed to render a webpage for example then go back to sleep? Are there / can there be any applications that sustain high clock speeds?
As far as I’ve gathered, race to idle has definitely been a focal point for Apple when it comes to power management along with coalescence of CPU and antenna wakeups — the system batches up work, powers up components, burns through the work as quickly as possible, and then returns components to idle/sleep.
This is also why on iOS, apps that become known for abusing or mismanaging background processing are penalized and have their background processes run less often. Even just a couple of processes waffling around instead of completing their tasks and quitting can keep the system from idling and have a big negative impact.
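As a toy illustration of the batching idea, nothing iOS-specific, just the scheduling pattern described above:

```python
# Toy illustration of coalesced wakeups: queue work while idle, wake once,
# burn through the whole batch back to back, then return to idle.
import time
from collections import deque

pending = deque()

def enqueue(task):
    pending.append(task)              # cheap: no wakeup, just remember the work

def coalesced_flush():
    # one wakeup services the entire batch, so the system can idle in between
    woke_at = time.monotonic()
    while pending:
        pending.popleft()()           # run each queued task immediately
    return time.monotonic() - woke_at

for i in range(5):
    enqueue(lambda i=i: print(f"task {i}"))
print(f"serviced the batch in {coalesced_flush():.6f}s with a single wakeup")
```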
Well they're bringing AAA games like Resident Evil 4 Remake and Assassin's Creed Mirage to the iPhone, complete with raytracing support, so it looks like it'll be a golden age of Apple gaming soon enough.
Games made with a controller in mind are miserable experiences on a touchscreen. Phones are awkward and uncomfortable to hold like a controller, the lack of physical buttons provides no tactile feedback on where to press, the lack of shoulder buttons means overloading your thumbs with even more inputs, and your fingers obscure half the screen.
And if the implication is that they'll port those iOS games to Mac, then I wouldn't count on it if it takes even a single iota of extra effort on their end. There are more Linux gamers than Mac gamers, and yet game devs still can't be arsed to test their games on Steam Deck.
I don't think they expect people to seriously play Resident Evil 4 with touch controls. I imagine most people who actually complete the game will use either controllers that attach to the sides of the phone like a Switch, controllers with a mount on top to hold the phone, a normal Bluetooth controller like a DualShock with their phone in some other dock, or stream everything to an Apple TV.
I have been surprised they have not been more aggressive with the chips in the Apple TV and signed more deals to port games to the platform. In some ways Apple is well positioned to be a player in the game space, but they seem to lack the commitment and flexibility required to make it work.
Apple has historically been terrible about committing to gaming, which has led to game developers ignoring the platform. I do hope that Apple actually stays the course this time around, and with the M* series of chips they've raised the floor for what game developers can count on having available.
I really thought they would have made a bigger play for gaming on the Apple TV (the hardware, not the app, not the service - I still can't get over that...). They pushed a little at the start, but as with most Apple gaming initiatives (aside from mobile) it seems to have fizzled out.
Which is why you can stream your iPhone to a TV and connect a controller. There are also many controllers that clip onto the iPhone itself for a Switch-like experience.
> And if the implication is that they'll port those iOS games to Mac, then I wouldn't count on it if it takes even a single iota of extra effort on their end
The games will come to all Apple Silicon devices, as the architectures are very similar if not the same (M1 iPad vs M1 Mac). Apple also released the Game Porting Toolkit, and I can already play games at decent frame rates with Whisky [0], which utilizes GPTK. Keep in mind this is basically beta software that's not even meant for end users; it's a developer tool to help port games over.
And obviously the games on iOS won't be played with touch controls; they're meant to be played with a Bluetooth controller.
iPhone could be a legit game console if developers wanted it to. Pair a Bluetooth game controller with screen mirroring and there you go. There are so many games on Steam I wish they would port to iPhone. It's like having this incredibly powerful device in your pocket and you can't actually use it for anything.
“ There is a catch though: Apple's A17 Pro operates at 3.75 GHz, according to the benchmark, whereas its mighty competitors work at about 5.80 GHz and 6.0 GHz, respectively.”
This is why numbers like this don’t tell the whole story when you say the performance is within 10%.
CPUs in a mobile form factor are thermally constrained. Consider that the 13900KS will have a heat sink and a dedicated fan. If Apple made a laptop version, they’d be using essentially the same chip at a higher clock frequency. If Apple put it into a Mac Pro, they’d run it at a higher frequency still.
Of course, there’s more than just thermals. The CPU has to actually be designed to run at a high clock.
I could be wrong, but my understanding is that all their CPUs share a good amount of commonality, and the major differences are more around packaging (I/O connectors, etc.).
You should just look at final performance metrics; GHz/MHz mattered back in the ’90s. Now you can’t just say a higher frequency means better, because the architectures are all different.
The catch is that we can expect better performance when they increase the clock to be closer to the i9. However, it’s never clear how high it can go safely.
You can't just turn up the clock speed and expect a linear increase in performance. The chips are packed so densely with transistors that they have to do some magic to operate at those clock speeds without frying. Desktop parts don't really care about energy efficiency and are thus designed to operate like that, since max performance is more important than efficiency. Microarchitectural design decisions allow engineers to pick tradeoffs for where they want to place their chips in the market, but most major advantages between chips come from fabrication. And these days there aren't many gains left to be made there.
CPUs are designed with a certain clock speed in mind. The target frequency affects the design of literally every component. Turn up the frequency too high and signals won't arrive in time. Increasing the frequency to match performance with AMD and Intel might require an almost complete redesign.
Maybe it can run faster given better cooling and less aggressive power-saving. But I'd be surprised if it can be made to clock much higher, else Apple would have done it already. Mobile CPUs already have peak performance well above what they can sustain. An even faster turbo would be used more rarely but would still be useful.
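To make the timing point above concrete (every figure below is invented purely for illustration):

```python
# Toy numbers showing why clock speed is a design target, not a dial you can
# just turn: the longest path between two flip-flops must settle within one
# clock period. All values here are made up for illustration.
critical_path_ns = 0.28                    # hypothetical worst-case path delay
f_max_ghz = 1 / critical_path_ns           # clock ceiling implied by that path
print(f"max stable clock for this path: {f_max_ghz:.2f} GHz")            # ~3.57 GHz

target_ghz = 5.8                           # desktop-class clock target
print(f"period available at {target_ghz} GHz: {1 / target_ghz:.3f} ns")  # ~0.172 ns
# Hitting 5.8 GHz means every path has to fit in ~0.17 ns, i.e. a different design.
```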
I mean, they won’t just take the same chip, but they can definitely reuse many of its parts and it is not unreasonable to think that they can raise the frequency quite a bit.
These numbers are a harbinger of what to expect when Apple brings 3nm tech to the M-series, especially the Ultra with 24+ cores and a much more lenient thermal envelope (not to mention active cooling).
That's not fair at all. The Snapdragon 8 Gen 2 was ahead of Apple's best offering in gaming just a few days ago [1]. That's not "mediocre"; that's much closer to "head to head". Thermal throttling is also a significant problem for iPhones in anything real-world [2]. A new Snapdragon will come out shortly, and the battle will continue.
There are different benchmarks one can pick depending on the article they want to write. Geekbench 6 favors Apple CPUs more than Geekbench 5, which was itself already more favorable to Apple CPUs than, say, Antutu.
Antutu CPU: A16 248335 vs Snapdragon 8 Gen 2 268542
Antutu GPU: A16 394336 vs Snapdragon 8 Gen 2 580101
If you look at Geekbench 5 scores for the A16 and the SD8G2, the A16 is ahead by less than 10%.
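For reference, here are the relative gaps those Antutu numbers imply (just the arithmetic on the figures quoted above):

```python
# Relative gaps implied by the Antutu scores quoted above (A16 vs Snapdragon 8 Gen 2).
scores = {
    "Antutu CPU": (248335, 268542),
    "Antutu GPU": (394336, 580101),
}
for name, (a16, sd8g2) in scores.items():
    print(f"{name}: Snapdragon 8 Gen 2 ahead by {(sd8g2 / a16 - 1) * 100:.0f}%")
# Antutu CPU: Snapdragon 8 Gen 2 ahead by 8%
# Antutu GPU: Snapdragon 8 Gen 2 ahead by 47%
```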
Nuvia was a company founded by Gerard Williams in 2019 to build an ARM data center chip with a very high performance-per-watt profile. He previously ran the chip unit at Apple during some of the crucial years for Apple Silicon. Qualcomm bought Nuvia in 2021, and Williams is VP of Engineering there now. They are going to present the first Snapdragons with the Nuvia cores in October, probably in the wild at CES. Smartphone, watch, PC, IoT, etc. Data center after that. I expect it will bring Qualcomm much closer to Apple on the CPU part of the SoC. So yes, Windows PCs that can get closer to MacBooks in battery life and CPU performance. Linux at your own risk, probably, because they have had a deal with Microsoft on this for a while.
It cannot be overstated how far ahead Apple is.
[1] https://www.tomshardware.com/reviews/intel-core-i9-13900ks-c...
[2] https://www.notebookcheck.net/Apple-M2-Max-Processor-Benchma...