Why would a datacenter want a (weird, tiny, embedded) M1 when they can get an 80-core, 8-channel, 128-lane Ampere Altra – which is actually designed to be standards compliant and even has an open source firmware option?
(Even a future "large" Apple SoC is unlikely to be datacenter large…)
Try as I might, I can't imagine how Apple wouldn't sell more servers, if they bought Ampere, than the entire generic ARM server OEM industry can collectively. By generic, I mean to exclude the custom silicon of AWS, Azure, et al. I'm tempted to think that the only reason there's been nothing from Apple in the server category is that they're quietly waiting for a good acquisition candidate to emerge. Ampere looks very interesting at the moment. Combine Ampere's chips with Apple's proven track record of optimisation and integration with the OS, fast forward a few years to a time when we're more accommodating of vendor OS control in exchange for getting back some independence from the cloud hegemony players, sprinkle over it all some fairy dust from a reincarnated WebObjects offering plus checkout process management provided by Apple, and I can just about see the next two decades of extraordinary growth (presently challenged by the sheer scale of Apple's business) coming to fruition, in ways we'll mostly be very happy with for giving us back the jobs now dying out in the white heat of today's oligopolist clouds gathering and blocking the sun.
And the many times they've tried, it has either flopped outright or fizzled with little notice. The Xserve and Xserve RAID were very cool looking and actually quite functional, but they never integrated well into overall datacenter operations, and ultimately that's what did them in. Unless Apple wanted to take it on to eliminate reliance on AWS, Azure and others, I don't see them as having much incentive to care about the server space.
Indeed, if you ever see rumblings of them bringing more datacenter operations entirely in house, then at some future point I could see them leveraging that to sell externally.
Unless they got really ambitious and planned on supplanting AWS entirely and thus their hardware gives them a significant edge.
But Apple has been so bad at software in general for so long, especially on the server side of things, I'm not holding my breath for that. They certainly have the resources to do it if they had the right person to drive it though, so never say never.
Ampere claims this future firmware will be "OSF certified", which doesn't mean much because "OSF does not require vendors to deliver firmware in open source form."
Servers/datacenters aren't just about being able to run Linux, though. How are they going to do firmware updates? Apple can break stuff, since there's no contract there like there is with x86 server vendors, who test all their updates to make sure Windows and Linux run. Besides, there's the whole issue of configuration/systems management, remote access, and bulk chip availability.
I would love to see the shakeup with more efficient chips pressuring Intel to innovate in the datacenter space. Being able to decrease our hosting bill would reflect well on me, and it just seems like the way the space will eventually progress.
However, I run a hackintosh at home. I personally love it; I had access to top of the line hardware years before Apple offered it, with an OS that I understand and work well in, etc. It is absolutely not the way to go for any kind of must-work production. It is brittle. Apple is in the enviable position of being the only consumer of its APIs, and they barely publish documentation on the public stuff, much less internal-only provisions. Unless Apple explicitly supports (read: $$$$) a datacenter application, I can only see burning buildings (from all the hair on fire) where stable datacenters should be.
Yeah, Intel and AMD both need competition from power-efficient chips that perform equally well and offer the same level of support for Windows/Linux. And as you said, it's not going to be Apple: they have a consumer focus and culture, and it's far-fetched to imagine they will do all the stuff necessary for enterprise adoption of their ARM chips. In the cloud space, Amazon has Graviton, but there's not much hype about its performance, and you run into compat issues a lot for normal cloud workloads, at least in the enterprise.
Nvidia has a real shot here with the ARM purchase to fill the gaps, but I doubt that's something they have as a focus.
My guess is that part of their perceived income flow implies that macOS is running on it: music purchases, surveillance marketing, app purchases, dunno.
If they had the extra volume potential, it would be cool to see an extra-cost M1 computer that boots and runs Linux with no fuss.
They offer the Mac as a Complete Product™, An Experience™ (with the OS) – basically since the original 80s Macintosh. Apple is a vertically integrated gadget business. They just aren't in the bare metal hardware business and have no reason to be there (probably doing that would only confuse the product customers).
Because of all this, when they decided to transition the Mac to custom chips, they just took the whole iDevice stack and did the absolute bare minimum changes to make it into a "technically a general purpose computer" stack. It runs basically iPhone firmware expanded to allow OS choice. They made zero effort to use any industry standards, because there was no business reason to put any effort into that, unfortunately. So we have this very unusual SoC (fucking custom interrupt controller! Even Broadcom stopped doing that crap!) running very unusual firmware (iBoot, everything is Mach-O and stuff, and the OS choice screen is actually a little macOS app). But hey it's not locked down because the Mac line is supposed to be general purpose computers, so go ahead and port Linux but you're on your own.
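(To make the "everything is Mach-O" bit concrete: a Mach-O file announces itself with a magic number in its first four bytes, so a quick sketch like the one below can spot one. The MH_MAGIC_64 constant is the real one from Apple's <mach-o/loader.h>; the file path is just a hypothetical example, and universal/fat binaries use a different magic that this deliberately ignores.)

    import struct

    MH_MAGIC_64 = 0xFEEDFACF  # 64-bit Mach-O magic, from <mach-o/loader.h>

    def is_macho64(path: str) -> bool:
        """True if the file starts with a 64-bit Mach-O header.

        Fat/universal binaries start with a different magic and are
        ignored here; this is only a sketch.
        """
        with open(path, "rb") as f:
            raw = f.read(4)
        if len(raw) < 4:
            return False
        (magic,) = struct.unpack("<I", raw)  # little-endian on arm64
        return magic == MH_MAGIC_64

    # Hypothetical path; on Apple Silicon the kernel itself ships as a Mach-O.
    print(is_macho64("/System/Library/Kernels/kernel.release.t8103"))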
It's not the bare minimum, though. They designed an entire boot policy framework that allows you to multi-boot different OSes in different security contexts, so you can have your fully secure and DRM-enabled macOS next to your own Linux install (and even then their design maintains basic evil maid attack security, even for Linux or other custom kernels). That's more than pretty much any other general purpose platform.
Yeah, I know, and I agree that that is kinda cool. But in terms of everything else, especially standard vs. custom stuff, it is a very small change from iOS devices. If there had been any business reason to adopt standards (say, if Boot Camp had been deemed important), they would've at least used UEFI.
UEFI by itself is not useful; we're going to support UEFI+DeviceTree in our boot chain, but it's not going to run Windows.
What you want is ACPI support if you want the platform to be compatible with higher-level ARM boot standards. Unfortunately, ACPI support assumes stuff like GIC and other hardware details. You probably want EL3 for PSCI too. But that would be a massive change to their silicon design.
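(To make the "ACPI assumes GIC" point concrete: the MADT's interrupt controller structures are enumerated in the ACPI spec itself, with dedicated type codes for the GIC CPU interface, distributor, redistributor, MSI frame and ITS; there is no code point for a vendor-custom controller like Apple's AIC. Here's a rough sketch of walking those entries on a Linux box that exposes the table, which usually needs root. The type codes are straight from the ACPI 6.x spec; everything else is illustrative.)

    import struct

    # MADT interrupt controller structure types defined by the ACPI 6.x spec
    GIC_TYPES = {
        0x0B: "GICC (GIC CPU interface)",
        0x0C: "GICD (GIC distributor)",
        0x0D: "GIC MSI frame",
        0x0E: "GICR (GIC redistributor)",
        0x0F: "GIC ITS",
    }

    def madt_entries(path="/sys/firmware/acpi/tables/APIC"):
        """Yield the type code of each interrupt controller structure."""
        with open(path, "rb") as f:
            data = f.read()
        off = 44  # 36-byte ACPI header + 4-byte local APIC addr + 4-byte flags
        while off + 2 <= len(data):
            etype, elen = struct.unpack_from("<BB", data, off)
            if elen == 0:
                break  # malformed entry; stop rather than loop forever
            yield etype
            off += elen

    for etype in madt_entries():
        print(GIC_TYPES.get(etype, f"other type {etype:#x}"))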
So, effectively, there is no reason for them to support UEFI or any other firmware-level stuff if their chips are not compatible with Windows, which they aren't. Re-engineering their silicon into something that could run Windows (without one-off patches from Microsoft, like they did for the Raspberry Pi, which is in the same boat) would obviously have been a huge cost, and not something they decided made business sense at this point.
Obviously Microsoft could choose to add support for these chips to Windows like we're doing with Linux, but that's on them; it requires core kernel changes that are not something you can do in drivers (last I checked Windows doesn't even support IRQ controller drivers as modules).
Well, going straight to seeing Linux output on the EFI framebuffer instead of having to develop things like m1n1 first would've been useful. Not very useful but still.
Yeah, not using the GIC is what I hate the most. If the SoC were less… custom, we could've even written our own ACPI tables (possibly useful ones, depending on e.g. which particular DesignWare crap they used; the better DW PCIe controllers do support ECAM, etc.) and avoided having to redo all the work in every kernel anyone wanted to run…
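(For anyone wondering what ECAM buys you: it's the standard flat memory-mapped layout for PCIe config space, so a single base address, advertised in an ACPI MCFG table, is all a generic kernel driver needs to reach any device's config registers. The bit layout in this sketch is fixed by the PCIe spec; the base address in the example is made up.)

    def ecam_addr(base: int, bus: int, dev: int, fn: int, offset: int = 0) -> int:
        """Standard PCIe ECAM mapping: one 4 KiB config page per function."""
        assert bus < 256 and dev < 32 and fn < 8 and offset < 4096
        # bus[27:20] | device[19:15] | function[14:12] | register[11:0]
        return base + (bus << 20) + (dev << 15) + (fn << 12) + offset

    # e.g. config space of bus 1, device 0, function 0 (base is made up)
    print(hex(ecam_addr(0x6_0000_0000, bus=1, dev=0, fn=0)))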
But how long before these 160-core Ampere-variety ARM servers come down to prices that are affordable for at least the average HNer?
The most recent NetApp filer I was paid to evaluate for a fairly large business customer is being sold at a price that, I realised, isn't actually so ridiculous for my home lab. The amount of time expended on the work surrounding cloud computing, even for very occasionally run jobs, has put the likes of these Ampere servers well inside the bounds of reasonable, even thoroughly sensible, private acquisition. My first thought was: "If only I were even ten years younger, I'd advertise a house share, load a room with a couple of racks, and trade all the interactions involved in setting up what I'm doing in the cloud for higher-quality interaction with some housemates whose projects could turn my practical investment into something much more interesting." I think a ratio of 128 cores per person feels about right?