Apple hasn't seemed interested historically. And the Nuvia folks left Apple to found their company explicitly because they thought an M1-style CPU core would do well in servers, but Apple wasn't interested in pursuing that.
It's not that Apple will sell server chips; it's that developers can work locally on ARM, which makes it easier to deploy to servers. Linus Torvalds had a quote about this...
"""
And the only way that changes is if you end up saying "look, you can deploy more cheaply on an ARM box, and here's the development box you can do your work on".
"""
It’s wild to consider that my next computer (an ARM M1 Mac) will be compiling code for mobile (ARM) and the cloud (ARM). I wonder if we’ll ever see AMD release a competitive ARM chip and join the bandwagon.
Personally, I don’t see many server admins choosing to pay the Apple Tax to get the M1 into their data center. I don’t see how the performance-per-watt advantage could pay off that kind of tax.
I did not mean to imply that the actual M1 will be used in data centers. Apple is quite popular among developers, and it's also a trendsetter, which will probably lead other computer manufacturers to adopt ARM for personal computers. Having more people use ARM on their personal computers will lead to more ARM adoption in the data center.
I believe interpreting statistics from those surveys in this way isn't fair. There are a huge number of developers around the world, but the value (and money) they generate is not distributed uniformly; in other words, a small percentage of developers work for the companies that pay the largest share of the server bills, and the penetration rate of macOS among developers at those top companies is probably higher than average.
(I'm not implying that developers who work on non-macOS devices create less value; your device has little, if anything, to do with your impact. I'm just pointing out a trend and a possible misinterpretation of the data.)
The OP wasn't suggesting Apple M1 chips in the data centre, but rather that Apple M1 chips in developer workstations will disrupt the inertia of x64 dev -> x64 prod. It will be easier for developers to choose ARM in production when their local box is ARM.
AArch64 is load-store + fixed-instruction-length, which is basically what "RISC" has come to mean in the modern day. X86 in 2001 was already… not that :)
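To make the load-store difference concrete, here's a rough C sketch with the kind of assembly a compiler might emit at -O2 (the output shown is approximate; register allocation and scheduling will vary by compiler):

    /* x86 can fold the load into the ALU op, while AArch64 has to
       load into a register first. Output below is illustrative. */
    int add_from_memory(int x, const int *p)
    {
        return x + *p;
    }

    /* x86-64 (variable-length encoding, memory operand on the add):
     *     add  edi, DWORD PTR [rsi]
     *     mov  eax, edi
     *     ret
     *
     * AArch64 (load-store, every instruction 4 bytes):
     *     ldr  w1, [x1]
     *     add  w0, w0, w1
     *     ret
     */

Same work either way, but x86 packs it into a variable-length instruction with a memory operand, while AArch64 spends an extra fixed-size instruction on the explicit load.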
Not really, because variable-length instructions have consequences of their own - mostly good ones, because they fit in memory better.
Also, the complex memory operands can be executed directly because you can add more ALUs inside the load/store unit. ARM also has more types of memory operands than a traditional RISC (which was basically just whatever MIPS did).
The upside of variable-length instructions is that they are shorter on average, so you can fit more of them into your limited cache and make better use of your RAM bandwidth.
The downside is that your decoder gets way more complex (see the toy sketch below). Because each decoder is simpler, Apple can instead have more of them (8-wide decode) and a big reorder buffer to keep the execution units fed.
Supposedly Apple made up for the larger code size of fixed-length instructions by throwing lots of cache at the problem and putting the RAM in the same package as the chip.
I'm not a CPU guy and this is what I've gathered from various discussions so I'm happy to be corrected.
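For anyone curious why the decoder is the pain point, here's a toy C sketch of the boundary-finding problem (insn_length and its fixed 3-byte return value are made up purely to keep the sketch compilable; a real x86 length decoder is a big table-driven parser over prefixes, opcodes and ModRM bytes):

    #include <stddef.h>

    /* Placeholder: real x86 instructions are 1-15 bytes; we fake a
       constant length so the sketch compiles and runs. */
    static size_t insn_length(const unsigned char *bytes) { (void)bytes; return 3; }

    /* Variable-length (x86-style): you can't know where instruction N+1
       starts until you've decoded the length of instruction N, so wide
       parallel decode needs extra boundary-finding hardware. */
    static void walk_x86(const unsigned char *code, size_t n) {
        for (size_t off = 0; off < n; ) {
            off += insn_length(code + off);   /* serial dependency */
        }
    }

    /* Fixed-length (AArch64): every instruction is 4 bytes, so eight
       decoders can each grab their own word independently. */
    static void walk_aarch64(const unsigned char *code, size_t n) {
        for (size_t off = 0; off + 4 <= n; off += 4) {
            (void)(code + off);               /* decode the 4-byte word here */
        }
    }

    int main(void) {
        unsigned char buf[32] = {0};
        walk_x86(buf, sizeof buf);
        walk_aarch64(buf, sizeof buf);
        return 0;
    }

That serial dependency is what makes a wide variable-length decoder expensive; fixed 4-byte instructions sidestep it at the cost of somewhat larger code.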
In most cases, yes, but it doesn't get rid of the complexity for compiler backends, which can't directly target the internal instruction set Intel's cores actually execute and have to target the x86 compatibility layer instead.
Hopefully ARM in the cloud will result in lower prices.