Hacker News
Apple's A17 Pro Within 10% of i9-13900K, 7950X in Single-Core Performance (tomshardware.com)
82 points by CharlesW on Sept 14, 2023 | hide | past | favorite | 105 comments


The iPhone 13 Pro, with the A15 Bionic, scores around 2250 in Geekbench 6 single-core. The M2 Max scores more like 2800. The two chips are comparable because they use the same microarchitecture, and the gap indicates a ~20-25% single-core uplift achieved mostly just by pushing more power through the chip.

If they can drive the same increase with an M3 chip based on the A17 Pro microarchitecture, a single-core score of 3500 is within reach. For a laptop.

For comparison: the i9-13900KS, the current stock single-core performance champion, can push 3100 with excellent cooling, and it reaches this blistering performance (~10% faster than an M2 Ultra) by drawing upward of 220 watts from the wall [1]. The Ryzen 7900X isn't far behind, on both metrics. An entire M2 Max SoC peaks at around 90 watts, and there's evidence that the CPU in isolation draws closer to 35 watts [2].
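
A rough points-per-watt sketch from those figures (the wattages quoted are package/SoC numbers, not isolated single-core draw, so treat this as a ballpark only; the Python below is mine, not from either source):

    i9_13900ks = 3100 / 220   # ~14 GB6 points per watt, whole-package draw
    m2_max_cpu = 2800 / 35    # ~80 GB6 points per watt, CPU-only estimate
    print(round(m2_max_cpu / i9_13900ks, 1))  # ~5.7x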

It cannot be understated how far ahead Apple is.

[1] https://www.tomshardware.com/reviews/intel-core-i9-13900ks-c...

[2] https://www.notebookcheck.net/Apple-M2-Max-Processor-Benchma...


That sounds overstated.

https://www.notebookcheck.net/AMD-Ryzen-9-7940HS-analysis-Ze...

Current laptop AMD chips have about the same efficiency as the M2 Pro when throttled to similar wattage (see the section about 35 W). Instead, they default to drawing more power and delivering more performance, while being vastly cheaper (what does an M2 Pro Mac mini with 32 GB of RAM and 2 TB of SSD cost?).

So yeah, for a laptop chip apple is ahead, but it is pretty easy to overstate by how much.


??? How is this supported by your article? Looking at the only perf/watt benchmark (Cinebench ST), they show:

M2 Pro @ 141 points/watt

Ryzen 9 7940HS @ 47 points/watt


> when throttled to similar wattage (see the section about 35W)


I can't wait for AMD chips on 3nm so we can see if it really is "just the advantage of the process" like a lot of folk say.


Apple’s (or actually TSMC’s) advanced packaging makes communication between dies faster. I think that some of the performance gains come from the fewer clock ticks required to access memory on an Apple chip.


It's already proven that it's not just the node advantage. Just compare Apple's 7nm A series chips to Zen3 or M2 to Zen4. The power advantage for Apple is 3x - 10x.


your overall point is valid, but it’s somewhat misleading to compare single core performance alongside total SoC power.


Actually, it's way worse for Intel and AMD if you only use single core power.

If you run Geekbench 5 or 6 on an M1, the total package power ranges from 0.3 W to 5 W, mostly staying below 3 W. Intel and AMD can use 10x or more power to achieve similar or lower Geekbench scores.


I think you meant to say that it cannot be overstated how far ahead Apple is ;)


The 7900X3D has the same performance but uses 50 W less than the 7900X.


I find articles like this really vexing because this isn't a general-purpose CPU, it's just a chip meant to run the software that Apple allows on the App Store. Most of the software I use day-to-day can't run on it by design.

It's kind of depressing that the most advanced mobile CPU technology (that can only be fabbed by a single company in a far-away country) is monopolized by a company that heavily restricts the software you can run on it, and will never sell it to me with documentation and let me plug it onto my own motherboard--which is ironically how Apple first started, by buying and making full use of the MOS 6502. But the days of the garage hacker getting their hands on cutting-edge parts are long gone.


Just because you can't run whatever software you like on it does not make it a non-general-purpose CPU.


My definition of "general purpose" is that you can "run whatever software you like on it". If you can't run whatever you want on it, that restricts the CPU to a "specific purpose", no?


Can't I write whatever code I want and run it on the A chips through macOS?


Say I write a bit of software and target ARM64, and it cannot run on x86.

Does that mean that x86 has a specific purpose now and is no longer a general purpose CPU?


That's not the same thing. An instruction set is an inherent aspect of a CPU; you can't have a CPU without one. Also, you can emulate or translate instruction sets between general-purpose CPUs, or in most cases just recompile the higher-level source code. The underlying logic will be executed regardless.


Tada, there is iSH, so you can run whatever you want on an iPhone through an emulated x86 layer! That is, you have a general-purpose CPU.

Sure, Apple sucks at sideloading; hopefully the EU ruling will make it a possibility, but then say that instead.


> Tada, you can but only after jailbreaking and replacing your actual hardware with a virtual one.

That sounds like these "technically, a taco is a sandwich" discussions. Maybe, but was that actually the question?


iSH does not need a jailbreak. You just download it from the App Store.


> […] An instruction set is an inherent aspect of a CPU; you can't have a CPU without one.

Oh, yes, you can.

The following examples of

– Transport triggered architecture

– Dataflow architecture

– Optical and quantum computing

represent viable, general-purpose (not generally available), albeit experimental, CPUs without instruction sets. That is, none of them have anything similar to «movq $1, %r0» or «add.w %r2, %r0, %r1».

FPGAs are another, more conventional example of a general-purpose (even if specialised) CPU without an instruction set.


Oh so you’re saying the iPhone cpu is a general purpose cpu then. Just need to translate the instructions. Ok.


An ARM64 emulator will run that software on x86.


True. But it does make it functionally a non-general purpose CPU. I suppose you can load your own programs if you have a dev account or jailbreak right?


Unless it’s changed again you don’t need a dev account to deploy apps to iOS - but it is somewhat limited. You can have at most 3, the signing lasts for just a week and IIRC the entitlements you can use are limited as well?


Yes. And a dev account is like $100/year.

It’s bad that they can gatekeep like this, I agree. I should just be able to put whatever on the hardware I buy.

But the expense in practical terms, immediate pragmatic terms, is negligible compared to what it unlocks.


I disagree. $100/yr to put what might be a very simple little toy application on your own phone is not worth it. Sure, that toy might inspire more, and it could even lead to a promising career, but it’s a stretch to argue it was worth it when it could just as easily never be utilized.

The promise of general purpose computing and open source as well is that it empowers all kinds of users.

Oh how badly I want to tweak little things about my iPhone, but can’t because the software is locked down.


Compared to the cost and power of any of the hardware and software we are talking about it is insignificant.

If you want to do toy programming on a budget there are a million arduino-like things out there.

I recently saw calculations on the cost of, say, just iOS alone, putting it as equivalent to multiple Manhattan Projects. Regardless of how accurate that estimate is, the general point remains.

Do I think Apple should make developing for non-distribution on the iPhone free? Yes. Do I think there is an argument that is almost a moral obligation? Possibly.

But what you get for like 30 cents a day with a developer account is mind boggling. I mean, that was the initial point wasn’t it? These are insanely capable and meticulously engineered pieces of technology.

Edit: I can even argue it is an ethical obligation that Apple should allow all users to install non App Store apps if they explicitly so choose. And I have argued that in the past.

But I separate this from the evaluation of the value proposition offered by an Apple Developer account.


What you are missing is that basically everyone needs a phone; not everyone needs an Arduino. There is a fundamental difference between even 30¢ and free. Make developing free and suddenly you’ll have more developers.

Now it’s a whole other discussion whether that’s a good thing or not. But I personally like to believe that everyone should be able to develop as a hobbyist. The commercialization of software is often at odds with its users. Not so much when people are doing things for themselves and for fun.

Just my 2¢


You don’t need a dev account, but unfortunately you do need a Mac.

The limitation is that you will have to revalidate weekly (which can be automated).


I’m glad someone else feels this way. I absolutely hate those YouTube videos talking about the “4K 30min total render” bs. Show me a M-chip MacBook that doesn’t balk when running Docker, or chug when opening a fat codebase in WebStorm.

Sure the chips are fast when they run the stuff Apple designed for it, but they don’t seem any faster to me in my work. I will give them efficiency though. I love not having to carry the charge brick everywhere I go.


Personal anecdote: Neither my M1 Air (personal) nor my M2 Pro (work) has any trouble with Docker and JetBrains. The fans don't even turn on on the M2.

Both have Apple silicon builds now, but even when they and a lot of other software were still Intel builds, they ran at similar or better performance (compared to my Intel MacBook / ThinkPad).

I understand why people hate Apple, but I've never heard anyone who has had similar setups say this (what the OP said). It works a hell of a lot better than my X1 Carbon did.


Wtf, you do realize software is so wide-reaching (even just on “limited iOS”) that you can’t meaningfully optimize for that at the goddamn CPU level. Like what, this is not some embedded chip in a Happy Meal toy that plays a single dumb game alone - there is crypto, graphics, everything necessary for an OS.

Docker sucks because it is a Linux-ism, not for any other reason. And I think most people would disagree with your notion; it really does run circles around basically every other laptop.


I mean, the M1 chip in my Macbook Pro runs absolute circles around the Intel one that I had in the previous generation, and they were released within a year of each other, so... I honestly do not know what you're talking about.


That's not a high bar. I'm still stuck on the 2019 MacBook Pro for work, and it is significantly slower and hotter than my ThinkPad with Linux that I use for hobby programming.

Whenever I open IntelliJ on it I can feel the temperature of the room increase. The Intel Macs were terrible for many years.


Cannot wait for my Intel MBP to be refreshed into an M2 at work. Running Docker turns it into a jet engine. Like having a white noise machine with an unpleasant tone that also generates a ton of heat.


I went from a Core i9 to an M1 Max and it's been a world of difference. Performance is great, it runs my dual 4K screens with scaling (3008x1692, virtual resolution 6016x3384) without a sweat, and the battery lasts forever.


More so than the noise, the Touch Bar at the back can become too hot to touch, and the plastic strip around the screen, where "MacBook Pro" is printed, is bubbling and peeling away with the heat.


I love how apple considers my skin part of the cooling solution. If it wasn't for legal limits, they would absolutely just burn people, sell a laptop cooling mat for $100, and tell people they are using laptops wrong.


I have a 14” M2 which has been up 52 days with Docker running in the background, plus Rider and CLion. And it runs better than my Dell XPS 15, which heats the room.


> or chug when opening a fat codebase in WebStorm.

It'd be worth checking that you're not using an Intel version of WebStorm. I've personally found IntelliJ (largely the same thing as WebStorm) on the M1 to be _extremely_ fast, certainly leaps and bounds ahead of the old Intel Macs. That's the native version, though; via Rosetta it is absolutely painful (the JVM is close to a worst-case for Rosetta).


The security policy of the OS and the performance characteristics of the SoC are two different things.


It’s depressing right now maybe, but at least it probably means we’ll have this level of performance inexpensively in some years. The tech is still getting better quickly.


> But the days of the garage hacker getting their hands on cutting-edge parts are long gone.

In the context of CPUs I'm with you (still tinkering with things like 6502s).

On the other hand, I just deployed Llama 2 70B (with the help of [0]) on my home PC, and it was fascinating when it wrote its first response.

[0] https://news.ycombinator.com/item?id=37492986


If you collect vintage compact Macs, you can literally see the hardware transition from the original Macintosh, designed like any other random 68k project within the reach of a homebrewer with a PAL programmer, to highly compact gate arrays and “if you don’t have the ability to design and obtain custom silicon, good luck”. VLSI giveth and VLSI taketh away.


It’s a preview of what will probably be the basis of the M3 series of Macs that will allow you to run any software you want


Not ideal, but you can basically run whatever you want on it; you just have to re-sign it every 7 days (for which you might need a Mac).

Also, even without that, there is iSH, which can run a lot of x86 Linux software.


This is the same chip family as the MacBook line uses. Once yields improve on N3 next year, you’ll be able to get even a MacBook Air that will have this same chip but with more cores.


For practical purposes, it will be largely the same chip as the M3, which will be used in general purpose computers.


Perhaps we should then view NSO Group as liberators, freeing software from the tyranny of evil corporations. /s


> this isn't a general-purpose CPU

Apple uses the same CPU/GPU core designs for handheld, tablet, laptop, and desktop.


This is super impressive on paper, but it feels like there aren't really many applications that can take advantage of this power on a phone due to thermal and (battery) power constraints.

Is the idea that mobile workloads are spiky so it can boost to full speed to render a webpage for example then go back to sleep? Are there / can there be any applications that sustain high clock speeds?


As far as I’ve gathered, race to idle has definitely been a focal point for Apple when it comes to power management along with coalescence of CPU and antenna wakeups — the system batches up work, powers up components, burns through the work as quickly as possible, and then returns components to idle/sleep.

This is also why on iOS, apps that become known for abusing or mismanaging background processing are penalized and have their background processes run less often. Even just a couple of processes waffling around instead of completing their tasks and quitting can keep the system from idling and have a big negative impact.


Well they're bringing AAA games like Resident Evil 4 Remake and Assassin's Creed Mirage to the iPhone, complete with raytracing support, so it looks like it'll be a golden age of Apple gaming soon enough.


Games made with a controller in mind are miserable experiences on a touchscreen. Phones are awkward and uncomfortable to hold like a controller, the lack of physical buttons provides no tactile feedback on where to press, the lack of shoulder buttons means overloading your thumbs with even more inputs, and your fingers obscure half the screen.

And if the implication is that they'll port those iOS games to Mac, then I wouldn't count on it if it takes even a single iota of extra effort on their end. There are more Linux gamers than Mac gamers, and yet game devs still can't be arsed to test their games on Steam Deck.


I don't think they expect people to seriously play Resident Evil 4 with touch controls. I imagine most people who actually complete the game will use either controllers that attach to the side of the phone like a Switch, controllers that have a mount on top to hold the phone, a normal Bluetooth controller like a DualShock with the phone in some other dock, or will stream everything to an Apple TV.


I have been surprised they have not been more aggressive with the chips in the Apple TV and signed more deals to port games to the platform. In some ways Apple is well positioned to be a player in the game space, but they seem to lack the commitment and flexibility required to make it work.


> they seem to lack the commitment

Tale as old as time.

Apple has historically been terrible about committing to gaming which has led to game developers ignoring the platform. I do hope that Apple actually stays the course this time around and with the M* series of chips they've raised the floor for what game developers can count on having available.

I really thought they would have made a bigger play for gaming on the Apple TV (the hardware, not the app, not the service, I still can't get over that...); they pushed a little at the start, but as with most Apple gaming initiatives (aside from mobile) it seems to have fizzled out.


Mobile gaming is gaming and Apple owns mobile. They own a slice of the gaming space already.


Which is why you can stream your iPhone to a TV and connect a controller. There are also many controllers that clip onto the iPhone itself for a Switch-like experience.


> And if the implication is that they'll port those iOS games to Mac, then I wouldn't count on it if it takes even a single iota of extra effort on their end

The games will come to all Apple Silicon devices as the architectures are very similar if not the same (M1 iPad vs M1 Mac). Apple also released the Game Porting Toolkit and I can already play games at decent framerates with Whisky [0] which utilizes GPTK. Keep in mind this is basically beta software that's not even for the end user to use but as a developer tool to help port their games over.

And obviously the games on iOS won't be played with touch controls; they're meant to be played with a Bluetooth controller.

[0] https://github.com/Whisky-App/Whisky


You know you can connect an Xbox or PlayStation controller to an iPhone?


The new iPhones can now connect to a monitor via USB-C, so with that and a Bluetooth controller you’ve got a good stew going.


You can connect a controller to an iPhone.


So, it looks like Apple will be taking the console path to gaming, instead of the PC way.


They'll be available on all Apple Silicon devices including Macs, as the processors are so similar.


The iPhone could be a legit game console if developers wanted it to be. Pair a Bluetooth game controller with screen mirroring and there you go. There are so many games on Steam I wish they would port to the iPhone. It's like having this incredibly powerful device in your pocket that you can't actually use for anything.


Sounds like Apple will be needing branded oven mitts to hold the thing in the store's accessories section.


Remember that whatever is in the A-series chips ends up in future M-series chips too, only with looser thermal and battery restrictions.

I think this would be the base of the future M4.


Latency. I may spend 1,000 times as long looking at a website as it takes to render but that doesn’t mean taking a long time to render is ok.

This is why modern consumer processors have boost modes followed by thermal throttling.


“There is a catch though: Apple's A17 Pro operates at 3.75 GHz, according to the benchmark, whereas its mighty competitors work at about 5.80 GHz and 6.0 GHz, respectively.”

This is why numbers like this don’t matter when you say the performance is within 10%


Could you expand on that?

Why does GHz matter? Performance should be measured as a black box, with the metric being the time it takes to accomplish some task.
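
A trivial sketch of what that black-box measurement looks like (the workload is just a stand-in):

    import time

    start = time.perf_counter()
    sum(i * i for i in range(10_000_000))          # stand-in for "some task"
    print(f"{time.perf_counter() - start:.3f} s")  # wall-clock time is the metric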


CPUs in a mobile form factor are thermally constrained. Consider that the 13900 will have a heat sink and a dedicated fan. If Apple made a laptop version, they’d be using essentially the same chip at a higher clock frequency. If Apple put it into a Mac Pro, they’d run it at a higher frequency still.

Of course there’s more than just thermals. The CPU has to actually be designed to run at a high clock.

I could be wrong, but my understanding is that all their CPUs share a good amount of commonality and the major differences are more around packaging (I/O connectors, etc.).


You just look at final performance metrics; GHz/MHz mattered back in the 90s. Now you can’t just say a higher frequency means better, because the architectures are all different.


The catch is that we can expect better performance when they increase the clock to be closer to the i9. However, it’s never clear how high it can go safely.


You can't just turn up the clock speed and expect a linear increase in performance. The chips are packed so densely with transistors that they have to do some magic to operate at those clock speeds without frying. Desktop parts don't really care about energy efficiency and are thus designed to operate like that, since max performance is more important than efficiency. Microarchitectural design decisions let engineers pick trade-offs for where they want to place their chips in the market, but most major advantages between chips come from fabrication. And these days there aren't many gains left to be made there.


CPUs are designed with a certain clock speed in mind. The target frequency affects the design of literally every component. Turn up the frequency too high and signals won't arrive in time. Increasing the frequency to match performance with AMD and Intel might require an almost complete redesign.

Maybe it can run faster given better cooling and less aggressive power-saving. But I'd be surprised if it can be made to clock much higher, else Apple would have done it already. Mobile CPUs already have peak performance well above what they can sustain. An even faster turbo would be used more rarely but would still be useful.


I mean, they won’t just take the same chip, but they can definitely reuse many of its parts and it is not unreasonable to think that they can raise the frequency quite a bit.


Clock speed correlates pretty closely to power consumption within a given family.

It correlates less directly between families; smaller process nodes tend to allow for higher frequencies at the same power consumption.
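
A first-order way to see why the correlation is superlinear within a family: textbook dynamic power is roughly P ≈ α·C·V²·f, and V has to rise with f near the top of the range. With made-up but plausible voltages:

    # relative dynamic power, alpha and C held constant
    base    = 1.00**2 * 3.5   # 3.5 GHz at an assumed 1.00 V
    boosted = 1.25**2 * 5.5   # 5.5 GHz at an assumed 1.25 V
    print(round(boosted / base, 2))  # ~2.46x the power for ~1.57x the clock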


I think OP meant clock speeds don’t matter, relative performance does.


These numbers are a harbinger of what to expect when Apple brings 3nm tech to the M-series, especially the Ultra with 24+ cores and much more lenient thermal envelope (not to mention active cooling).


Remember the Apple mantra "megahertz myth"?


What’s really embarrassing is how it compares to the Snapdragon.

Is the gap expanding at this point?


Qualcomm is just happy lazily sitting back and collecting licensing fees while putting out mediocre products at this point.


That's not fair at all. The Snapdragon 8 Gen 2 was ahead of Apple's best offering in gaming just a few days ago [1]. That's not "mediocre". That's much closer to "head to head". Thermal throttling is also a significant problem with iPhones for anything real-world [2]. A new Snapdragon will come out shortly, and the battle will continue.

[1] https://beebom.com/snapdragon-8-gen-2-vs-apple-a16-bionic-be....

[2] https://www.youtube.com/watch?v=jSzwefx_TXU


> Snapdragon 8 Gen 2 scored 1490 in the single-core

> Apple’s A16 Bionic chipset scored 1879.

Both on the same 4nm TSMC process.


There are different benchmarks one can pick depending on the article they want to write. Geekbench 6 favors Apple CPUs more than Geekbench 5, which already favored Apple CPUs more than, say, Antutu.

Antutu CPU - A16 248335, SnapDragon 8 Gen2 268542

Antutu GPU - A16 394336, SnapDragon 8 Gen 2 580101

If you look at Geekbench 5 scores for the A16 and SD8G2, the A16 is ahead by less than 10%.
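
Putting the scores quoted in this thread side by side as ratios (A16 divided by Snapdragon 8 Gen 2):

    print(round(248335 / 268542, 2))  # Antutu CPU: ~0.92, A16 slightly behind
    print(round(394336 / 580101, 2))  # Antutu GPU: ~0.68, A16 well behind
    print(round(1879 / 1490, 2))      # single-core score quoted above: ~1.26, A16 ahead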


I would suggest you read this: https://www.antutu.com/en/doc/119646.htm


Isn't this true for literally every cross-platform benchmark? Of course the toolchain, kernel, and graphics API will be different.


No, CPU and memory tests have very little overhead.


That's why Nuvia/Oryon is coming.


Looks like they're having issues delivering from their 2022 vision https://wccftech.com/qualcomm-oryon-problems-potential-snapd...


I don’t think people understand what a big leap this is going to be for their CPU cores.


Why's that? Will they run Windows? Linux?


Nuvia was a company founded by Gerard Williams in 2019 to build an ARM data center chip with a very high performance/watt profile. He was previously at Apple, running the chip unit in some of the crucial years for Apple Silicon. Qualcomm bought Nuvia in 2021, and Williams is VP of Engineering there now. They are going to present the first Snapdragons with the Nuvia cores in October, probably in the wild at CES. Smartphone, watch, PC, IoT, etc. Data center after that. I expect it will bring Qualcomm much closer to Apple on the CPU part of the SoC. So yes, Windows PCs that can get closer to MacBooks in battery life and CPU performance. Linux at your own risk, probably, because they have had a deal with Microsoft on this for a while.


And also, I should mention, ARM is suing over this, and wants Qualcomm to destroy all that IP or negotiate a new licensing agreement.


Yes, the 8cx Gen 4 will be used in laptops running Windows and (unofficially) Linux.


“new architecture on new process node almost as fast as competitor’s flagship from a year prior, from two process nodes ago”

power efficiency aside (which admittedly is non-trivial), this is not that big a deal or particularly interesting, imo.


This is a mobile processor being compared to flagship desktop processors.

Apple’s lead is almost unfathomable.


Two process nodes’ difference. Apple has bought out all of TSMC’s “3nm” capacity. They obviously consider it an important competitive advantage.


The Raptor Lake refresh doesn’t seem to add much over the 13900.


Your next MacBook Pro will be a sleeve for an iPad Pro, which, in turn, will be a sleeve for an iPhone Pro. Turducken computing. ;)


Asus did that ~10 years ago with the PadFone: you had a phone that could plug into a tablet shell that in turn could be plugged into a keyboard.


Macpadone?

I'm not sure how realistic this is but I think I'd love it.


I actually like this :)



