Dissecting the Apple M1 GPU, Part IV (rosenzweig.io)
379 points by caution on May 2, 2021 | 121 comments



Great work and, even more surprising, a great writeup (writing this as someone who always struggles to document my own work). This really gives me hope for a usable, native Linux on the M1 machines.

I really wish Apple would see how much benefit this would bring to their platform. Their new hardware is really exciting: the first ARM on the desktop which doesn't just compete, but in many aspects beats current x86 chips. A lot of the Linux and tech enthusiast crowd would love to jump onto Apple Silicon. And while they might not bring huge profits by themselves, these are the people who come up with great new technologies. Better to give them a home on Apple devices. It doesn't need a complete and formal documentation of every aspect; just supporting those projects with a little bit of information would go a long way. A single engineer answering questions from those developers might be sufficient. So come on, Apple, do it! :)


> the first ARM on the desktop, which doesn't just compete, but in many aspects beat current x86 chips

I think the first ARM on the desktop title goes to the Acorn Archimedes (https://en.wikipedia.org/wiki/Acorn_Archimedes). It kicked ass back then as well:

> A mid-1987 Personal Computer World preview of the Archimedes based on the "A500 Development System" expressed enthusiasm about the computer's performance, that it "felt like the fastest computer I have ever used, by a considerable margin"


Well, in this millennium :). And while I remember the Archimedes well, that was another day and age, and unfortunately it failed to gain much traction back then. A lot of bad tech could have been avoided if it had become mainstream. I fondly remember the times when there were plenty of competing processor architectures, almost every one more interesting than x86.


This is a company that actively fights right to repair and implements software DRM to lock out non-Apple authorised replacements.

Don't hold your breath. Apple's stance by actions is the opposite.


>> This is a company that actively fights right to repair and implements software DRM to lock out non-Apple authorised replacements.

But they do all these things for obvious reasons. Reasons you and I may not agree with or be happy about, but still obvious reasons. In the case of repair/replacement, it's just because they want you to use expensive replacement parts, lure you into their Apple stores, and avoid any liability/accountability for repairs with 'unofficial' parts.

I don't see how providing specifications about how their GPU's work so someone can make a Linux driver out of it hurts their commercial interests or liability though. Yes people may screw up their system if they install Linux on a Mac and it doesn't boot anymore, but as long as you can still take it into an Apple store and they can restore it to MacOS, why would Apple actively fight the extremely small minority of people who want to do that? And even if more people (developers/enthusiasts) would buy M1 hardware and immediately slap Linux on it, why would they care about that? They still made the sale, and these people will still walk around with a machine with a big fat Apple logo on it?

They previously spent a lot of effort accommodating people who wanted to run Windows on Macs using Boot Camp, so why would they be worried about people running Linux on M1 Macs?

Edit: I can imagine Apple want to protect their IP and hence don't want to disclose anything about it, period. Much like Nvidia and most other GPU manufacturers do. But if AMD and Intel can be OSS-friendly, Apple could be too; apparently IP protection does not have to be a deal-breaker.


Very simple. Booting a different OS gets in the way of Secure Boot and other security features. Why can't you install Linux on an iPhone?


Secure boot on macOS differs from iOS precisely because it lets you install Linux on Macs but not iPhones.


You can install Linux on iPhones via bootrom exploits:

https://checkra.in/
https://projectsandcastle.org/


I am not holding my breath, just stressing how nice it would be and how beneficial it could be for Apple themselves.


As a Linux person, I have to say my iPad almost got me into becoming an Apple person too. Let's be real, Apple would not lose a single penny by inviting the Linux crowd in directly; on the contrary, some of those Linux people will become Apple people. Looking at the future outlook, there are two reasons not to open up:

1. Why tho?

2. macOS's increasing lock-down makes even Linux attractive to a wider customer base, and therefore threatens their huge, carefree software extortion business.

Personally I think they do number 2 on their customer base. I think not opening up to Linux is an indirect admission of their unfair competition game. I think Linux support would very much limit how much they can push their DRM, subscription, software-extortion and expropriation mischief. It would allow for consumer choice.


I don't think they care, unfortunately. Apple sells billions of devices to non-techies. The user base that might run Linux or do development on Apple machines is a small fraction, like 0.001% or less. The reasons behind preventing you from installing Linux are likely more a consequence of preventing device theft or software piracy.


So well written, so engaging, such love in the labour.

"We’re looking for conspicuous gaps in our understanding of the hardware, like looking for black holes by observing the absence of light."

Gorgeous


Seconded. Despite being a genuinely challenging and complicated process with countless moving parts, it was written to be as approachable as possible. They didn't feel a need to "prove" how hard the work is by writing from deep within the weeds.


I have also enjoyed reading these progress reports.


Alyssa is a hero for her work on Panfrost, which gave us open-source 3D graphics on ARM Mali GPUs. I am eternally grateful for this; the Samsung Kevin I use is the only blobless laptop currently in production, and now it has 3D accelerated graphics thanks to her work!

But part of me is sad to see her working on such closed hardware now.

Does anybody think Linux on the M1 will ever be able to touch the internal SSD? Apple has been locking that down with proprietary controllers and signed firmware since their Intel days (e.g. the T2 chip). Are people really going to drag around their shiny new MacBooks with an external USB-C dongle hanging off them because that's required in order to run Linux?

I worry that the endgame here is Linux becoming Just Another MacOS App. Apple is quite happy for Linux to be a MacOS app running in their VM. Elite developers will buy Macbooks and run Linux in a VM because "we'll have Linux on the bare metal soon, it's just temporary" and that will just keep getting pushed back and pushed back and pushed back...


Apple don't block access to their NVMe controllers at all. They do appear to have a, well, interesting approach to spec compliance, but Linux is now entirely capable of handling the SSD on all x86 Apple hardware. In the M1 case the NVMe controller isn't exposed via PCI so the existing driver won't work, but there's already in-kernel abstraction between the actual NVMe code and the PCI interface, so adding an alternative shouldn't be a problem.

The M1 systems depend on some number of blobs, but the amount of non-free code required to boot one looks like it'll end up being less than a typical x86 system requires.
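As a purely illustrative toy model (Python, invented names; nothing here is actual Linux kernel code), the core/transport split being described looks roughly like this: the NVMe command logic talks only to an abstract register interface, so a non-PCI platform transport with its own clock/power bring-up can slot in underneath the same core:

```python
# Toy model of an NVMe core/transport split. Everything here is
# hypothetical and heavily simplified for illustration.

class NvmeTransport:
    """Abstract 'how do I reach the controller registers' layer."""
    def read_reg(self, offset: int) -> int:
        raise NotImplementedError
    def write_reg(self, offset: int, value: int) -> None:
        raise NotImplementedError

class PciTransport(NvmeTransport):
    """Stands in for the usual PCI-enumerated controller (dict as MMIO)."""
    def __init__(self):
        self.regs = {}
    def read_reg(self, offset):
        return self.regs.get(offset, 0)
    def write_reg(self, offset, value):
        self.regs[offset] = value

class ApplePlatformTransport(NvmeTransport):
    """Hypothetical platform device: registers only respond once the
    clock/power domains (the blocking driver deps) are brought up."""
    def __init__(self):
        self.regs = {}
        self.powered = False
    def power_on(self):
        self.powered = True
    def read_reg(self, offset):
        return self.regs.get(offset, 0) if self.powered else 0
    def write_reg(self, offset, value):
        if self.powered:
            self.regs[offset] = value

class NvmeCore:
    """Transport-agnostic core: identical code path for both transports."""
    CC_ENABLE = 0x14  # invented 'controller configuration' offset

    def __init__(self, transport: NvmeTransport):
        self.t = transport

    def enable(self) -> bool:
        self.t.write_reg(self.CC_ENABLE, 1)
        return self.t.read_reg(self.CC_ENABLE) == 1

# The same core works over PCI, and over the platform transport once
# its power dependencies are satisfied.
assert NvmeCore(PciTransport()).enable()
apple = ApplePlatformTransport()
core = NvmeCore(apple)
assert not core.enable()   # blocked until clock/power come up
apple.power_on()
assert core.enable()
```

The point of the sketch is only the shape of the abstraction: adding a new controller means writing a new transport, not a second NVMe stack.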



Which is never going to fly upstream :)

I have a branch adding proper NVMe platform device support based on some work by Arnd. It's currently blocked on some driver deps (clock/power), and I'm working on a hypervisor for reverse engineering for now, but once those go in (other people are working on them) it should be simple to bring up properly.


Please, please, please drop the internal SSD myth.

That isn't true, and has never been true, ever. It's a complete bullshit story made up by a YouTuber who saw that the SSD didn't show up under Linux back when the T2 Macs were released (because Linux didn't yet have a compatible driver) and decided that must mean Apple was "blocking Linux".

All internal Mac SSDs work fine under Linux these days and have for years. I have a local branch with preparatory work to bring up the M1 SSDs already (requires some driver refactoring to do it properly).


Hector Martin believes it will be possible. https://twitter.com/marcan42/status/1383964656058781703


I don't "believe" it will be possible; I am absolutely, 100% positively certain it is possible.


(Starry-eyed idealism.)

Her work on this closed hardware might define it for the future; practicalities matter. If Linux de facto runs and, even better, is in wide use, that makes the hardware more open, and it might help bend this new platform towards openness. Getting in this early might be a benefit there too.


> Well-written apps generally require primitive restart, and it’s almost free for apps that don’t need it.

I'm really surprised that primitive restart is forced on in Metal, but they also have a hardware bit for it! Primitive restart makes your whole pipeline slower, because you can't just chunk your index list when building your vertex packets to send to the work distributors; the possibility of a primitive restart index means you have to linearly scan for it. Though I'm not a hardware guy -- maybe the degenerate tris thing makes it just as annoying in practice.

I'd guess that the limitation to force primitive restart on in Metal came from another IHV limitation (maybe old PVR chips?).

That said, you really shouldn't be using tristrips in 2021 anyway; they have poor locality and are hard to optimize. Index buffers solve the problems that tristrips were meant to solve in the first place.

IME, a well-written app should just ignore tristrips / restartable topologies, and just use straight tri-lists.
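A small Python sketch (illustrative only; a real driver does this over index buffers in GPU memory) of the difference being described: with primitive restart, batch boundaries depend on where the sentinel appears, so every index must be inspected, whereas a plain tri-list splits by pure arithmetic:

```python
# Illustrative only: splitting an index buffer for batched submission.

RESTART = 0xFFFF  # a common restart sentinel for 16-bit indices

def split_strips(indices, restart=RESTART):
    """With primitive restart, every index must be inspected: batch
    boundaries depend on where the restart sentinel appears."""
    strips, current = [], []
    for i in indices:
        if i == restart:
            if current:
                strips.append(current)
            current = []
        else:
            current.append(i)
    if current:
        strips.append(current)
    return strips

def chunk_trilist(indices, tris_per_batch):
    """Without restart, a tri-list splits by pure arithmetic: no scan,
    no data dependence, trivially parallel."""
    step = 3 * tris_per_batch
    return [indices[k:k + step] for k in range(0, len(indices), step)]

print(split_strips([0, 1, 2, 3, RESTART, 4, 5, 6]))
# [[0, 1, 2, 3], [4, 5, 6]]
print(chunk_trilist([0, 1, 2, 2, 1, 3, 2, 3, 4], tris_per_batch=2))
# [[0, 1, 2, 2, 1, 3], [2, 3, 4]]
```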


In theory you could do parallel scan tricks to implement primitive restart at full speed, but obviously it's going to cost you and it complicates the index fetch immensely since it effectively ends up being split into two parts.

Most likely, Apple simply doesn't build GPUs big enough to run into that problem, but it makes you wonder what they're doing on their hardware with AMD GPUs.
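The parallel-scan trick mentioned above can be sketched in a few lines (Python standing in for a GPU prefix scan; the helper name is invented): an exclusive prefix sum over "is this the restart sentinel?" flags assigns every index its segment number without a serial scan:

```python
# Illustrative sketch: itertools.accumulate stands in for a parallel
# prefix scan, which runs in O(log n) depth on a GPU.
from itertools import accumulate

def segment_ids(indices, restart=0xFFFF):
    """Exclusive prefix sum over restart flags: each index gets the
    number of restarts before it, i.e. its primitive-segment ID."""
    flags = [1 if i == restart else 0 for i in indices]
    inclusive = list(accumulate(flags))
    return [s - f for s, f in zip(inclusive, flags)]

print(segment_ids([0, 1, 2, 0xFFFF, 3, 4, 5]))
# [0, 0, 0, 0, 1, 1, 1]  (the sentinel element itself is then discarded)
```

The scan itself is cheap; the cost the comment alludes to is that index fetch now happens in two phases (flag/scan, then gather), which is the complication for the hardware.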


Hopefully one day companies like Apple will be forced to disclose documentation for the devices they sell, so that consumers can make full use of them and won't have to waste time on reverse engineering.


That would put a lot of obligation on the vendor to honor some kind of contract they aren't bound by today. Right now they can change anything they want at zero cost.


Does documentation necessarily create such an obligation? That would jeopardize most open-source projects, most of which explicitly promise no warranty.


The big issue with this is that Apple does not own all of the IP in any of their hardware; much of it is a mixture of licenses from many vendors, some of it owned by Apple, much of it just licensed.

Apple's licenses for these will include very strict NDA clauses that make it impossible for Apple to share the documentation they are provided by that vendor.


Apple has moved much of the stack completely in-house, so that point is relatively moot.


So those NDAs become void. Excellent.


Isn't that a warranty-of-merchantability thing? If something is described as doing X, and you take X away, you've taken value from my property.


Which makes an excellent case for mandatory docs.

I don't want my speakers to stop working because I stopped paying a subscription.


As long as they are "your" speakers. Subscription sounds like you're renting speakers. I know that's probably a facetious/pedantic point but it strikes to the heart of the fact that we should be calling for more accurate description of the relationship between a consumer and the hardware they acquire i.e. calling a spade a spade. You're not buying a laptop, or an M1 CPU in a metal case. You're buying a magic Apple service delivery box that happens to permit certain other uses. Customer expectations should match description should match function.

I say this as a longstanding iPhone user, and up until recently, Mac user.


Not a legal obligation but a support obligation. Documenting implies stability of an API. It says, use this thing we intend for you to use. Usually, you mark an API as deprecated, and often provide an alternative, before removing it.

Sometimes backwards compatibility is preserved even to the point of preserving bugs (especially if the companies/apps depending on it are large):

https://www.triplefault.io/2017/07/breaking-backwards-compat...

https://arstechnica.com/gadgets/2008/11/ars-investigates-doe...


They could change what they want afterwards too, presumably; they'd just need to publish the new documentation alongside it.


This is what Intel/AMD have done for multiple generations of their GPU hardware now, for a decade or more. It hasn't stopped them from innovating. Nvidia has PTX, with a proprietary assembler underneath that can retarget to any hardware changes. Either way, Apple could offer more documentation than it currently does.


> that would put a lot of obligation on the vendor to honor some kind of contract they don't need to do today. They can change anything they want with zero cost today.

So why can't we just allow them to keep making those changes, but still have to document them? I'm sure they document them internally anyway.


We could have a tax on documents which are non-public.

For example, the tax could be 1 cent per day for each page that is kept non-public.

It would apply to source code, images, or anything else that might be reasonably printed on a page.

Then companies could pay the government to keep their employees' work secret, or publish it.

It should help innovation as far more work done by humans gets made available for others to build from.
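For a sense of scale, a back-of-envelope calculation of the proposed rate (the codebase size here is invented purely for illustration):

```python
# Hypothetical example: a company keeping 50,000 printed pages' worth of
# source and documentation non-public, at the proposed 1 cent/page/day.
pages = 50_000
rate_cents_per_page_per_day = 1
days_per_year = 365

annual_cents = pages * rate_cents_per_page_per_day * days_per_year
print(f"${annual_cents / 100:,.0f} per year")  # $182,500 per year
```

Noticeable for a large company, but hardly ruinous, which is presumably the intent: a nudge toward publishing rather than a ban on secrecy.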


It will only make products more expensive and less accessible and people still won't be able to fully use _their_ devices.


Except it won't: companies that are fully open source won't have any additional costs, and their products can be priced the same as normal.

It's only companies with fully closed tech that will have to pay the tax, and really that tax compensates the rest of the country for the work and knowledge the company wants to keep secret.


That might happen to some products, yes. No different from any other tax. I don't see a fundamental problem.


That's an interesting idea! Do you know if anyone has explored this further?


Zero cost to them perhaps. However, those costs often prove quite high for others.

Right now an increasing number of things "work until they don't", and that imposes risks and costs on everyone.

Perhaps those get high enough to warrant legislative solutions.


No, this applies only to APIs, which are documented as such. The users of such APIs might have a certain expectation of long-term support. This is talking about documentation of the device being sold: the software equivalent of providing schematics for electrical devices. That was quite common for a long time; you could buy stereo amplifiers with complete schematics. Such documents don't create an obligation that future products have the same schematics; their purpose is solely the repair of the device sold.


Then you'll probably get a link to a zip on the very bottom of the "legal >> open source" section of their website with badly auto-generated API docs without any comments or context, knowing big closed-source software orgs.


That’s more or less what they currently do: https://opensource.apple.com/


Under what legal principle would this be based on, and why haven’t we already seen this happen over say the last 50 years of consumer electronics?


Eh, lots of potential future legal principles.

Right to repair. Hasn't happened because it's not a right yet.

Alternatively maybe anti trust, since using your monopoly on hardware to get a monopoly on software which is the essence of anti trust anyways. Hasn't happened because anti trust enforcement is really weak. Probably also doesn't force documentation.

Alternatively copyright and patent misuse, which is using your government granted monopoly on creating the exact hardware (copyright), or some of the features in the hardware (patents) to get a monopoly in an adjacent area (running software on that hardware) voiding the copyrights and patents. Hasn't happened because while the law exists (in common law) the courts apply it very conservatively. Also probably doesn't get documentation, just the right to run things on it.

Alternatively, an interpretation of the quid pro quo of intellectual property (i.e. you disclose something in exchange for a temporary monopoly) being read more strongly, such that you have to disclose the details of the internals to get intellectual property rights. Hasn't happened yet because today the quid pro quo isn't interpreted that strongly.


> Right to repair.

The interactions between a silicon chip and software drivers has nothing to do with right to repair. Never has been.

> Alternatively maybe anti trust, since using your monopoly on hardware to get a monopoly on software

Apple doesn't have a monopoly on any hardware, except insofar as any company has a natural monopoly on their own products, as made explicitly legal by copyright and patent laws.

Furthermore, one has to remember that there's a world outside of commodity PC hardware. The notion of hardware and software being separate is the rare exception, not the rule. Other than commodity PC computer hardware, nearly every product sold is both software and hardware bundled as a unit, with no marketplace expectation of end users replacing the software with an alternative. Whether you're talking about a car, washing machine, television, digital camera, microwave oven, garage door opener, CD player, indoor-outdoor thermometer, label printer, or an air conditioner, the software that comes with it is seen as part of the product.

In fact, this is even true for many computer components: companies like Nvidia aren't being any more helpful about their GPUs than Apple is with theirs.

Apple's computers straddle an interesting boundary between consumer devices and commodity PCs. But make no mistake, Apple isn't obligated to do anything here. You're not entitled to something because you want it.

> using your government granted monopoly on creating the exact hardware (copyright), or some of the features in the hardware (patents) to get a monopoly in an adjacent area (running software on that hardware)

Obviously legal. See above.

> Alternatively, an interpretation of the quid pro quo of intellectual property

Obviously legal. See above.


> The interactions between a silicon chip and software drivers has nothing to do with right to repair. Never has been.

Well, that's perhaps because that interaction's importance for the functioning of ubiquitous things in life is a relatively recent development. I don't understand why there's supposed to be some fundamental reason why we can't change our laws to encompass also the right to repair software, or the right to repair the interactions between hardware and software. Sure, these things aren't rights today – but who's to say we can't make it so?


I didn’t say that new rights couldn’t be established in the future, only that Right to Repair does not now and has not in the past ever claimed to encompass a right to the source code of proprietary commercial software.


> The interactions between a silicon chip and software drivers has nothing to do with right to repair. Never has been.

Software bugs... you could reasonably end up with a right to fix them given the current direction of right to repair lobbying. And that right might include the right to documentation.

> Apple doesn't have a monopoly on any hardware, except insofar as any company has a natural monopoly on their own products, as made explicitly legal by copyright and patent laws.

Which is to say they have a monopoly...

Anti trust never makes having a monopoly illegal, it makes exploiting that monopoly to gain further monopolies illegal. It doesn't care about where the monopoly came from.

It does care about the kind of monopoly, current anti trust rules probably don't consider the monopoly Apple has from copyright and patents on hardware to be of the right category (to cover an entire market)... but that could easily change.

> Obviously legal. See above.

On the contrary...

https://en.wikipedia.org/wiki/Copyright_misuse

https://en.wikipedia.org/wiki/Patent_misuse

> Obviously legal. See above.

Yes, I agree under current definitions, but it would be a reasonable way for the laws to evolve. It is very similar in nature to the existing limits on patents, for example: https://www.nytimes.com/2003/03/06/business/university-s-dru...


If Apple were found to be in violation of patent misuse, the result would not be that Apple is forced to open up the internals of how their products work, but rather that the patents Apple holds would be made null and void and others could copy them (based on the already-public patent filing and nothing more).


Indeed, but if the doctrine started becoming more widely applied I imagine that they would start actively trying to avoid having it apply to them, e.g. by not using their hardware monopolies (patents) in ways that create software monopolies.


> Which is to say they have a monopoly...

Sure, but by that definition every commercial product constructed from multiple parts has a monopoly over all their components. Starbucks has a “monopoly” over what coffee is sold within Starbucks cups. Sorry, but I just can’t take that kind of argument seriously.

> current anti trust rules probably don't consider the monopoly Apple has from copyright and patents on hardware to be of the right category (to cover an entire market)... but that could easily change.

Copyright and patents are the very definition of a legal monopoly. I don't think you're being serious.


> Sure, but by that definition every commercial product constructed from multiple parts has a monopoly over all their components. Starbucks has a “monopoly” over what coffee is sold within Starbucks cups. Sorry, but I just can’t take that kind of argument seriously.

You're right, my reply there was poorly worded, fortunately you seem to agree with the point of the reply anyways since you say

> Copyright and patents are the very definition of a legal monopoly.

Which really is just the point.

However, I am being entirely serious that not all monopolies implicate current anti trust law. Whether or not they do depends on whether they're a monopoly over the entirety of a market, not over a certain invention (patent) or the copying of a certain work of art (copyright).

You can see that at play in the current Epic v. Apple case, for instance. It's unambiguous that Apple has a monopoly over distributing apps to iPhones; it is ambiguous whether the relevant market here is iPhone users, and therefore whether or not anti trust law applies.


> Probably also doesn't force documentation.

Samba was able to get documentation for SMB/CIFS this way.


I mean, we also haven't seen it in the last 150 years of consumer mechanical parts either!


Until the early '90s you could probably get documentation for every device sold: schematics, calibration procedures, BOMs, technical drawings and so on. That ended when companies from countries with less strict IP laws started copying the products that sold. Governments didn't do much to protect businesses, and such secrecy is now the result.


Reading documentation? Where's the fun in that?

Or to put it less insolently: I've learned my craft by reverse engineering; as an autodidactic tool it definitely beats reading the manual.


They'd just abuse patents to stop you from doing it anyway.


Word of warning: stay away from the Metal shader bitcode (AIR), or they are gonna sue ya; they sued a couple of academics for publishing on it.


That whole stack is conveniently avoided in this research, because they're going from NIR straight to the GPU machine code, without touching AIR.


Ah, very clever; we will have to see what the future brings. I would assume Apple would not be too pleased, though, with either approach.


Why would Apple go through the effort of adding support for booting alternative OSes on M1 Macs and then stand in the way of implementing drivers for the hardware? That doesn't make a lot of sense, IMHO. Especially considering Apple could have gone down the exact same path as with iOS devices: a heavily locked-down platform that shares a lot in common with the Mac.


A platform for amateur XNU kernel research was required, since the only other one besides the Mac is the Security Research iPhone (which is specifically not for amateurs). All of Apple’s documentation (of which there is little) and the boot loader design itself (what with the blessing process for a secondary drive being to install a tiny XNU partition with the bootloader app), point to the process being specifically designed for loading different versions of macOS. Any ability to install Linux (or Windows) is at this point coincidental.


Perhaps that's true. From what I read it shipped in a non-working state (manuals were there, but some utilities were not), and I recall Apple engineers came out on Twitter saying it was being worked on; it was then included post-launch in macOS betas, even incorporating feedback. I believe Apple did tease at least Windows, maybe Linux, in marketing, but I could be mistaken.

Either way, going after OS developers would be an extremely bad look for Apple, especially when they could just as easily disable the feature, or not have bothered finishing the implementation (or done it completely differently) after news of the Linux porting efforts started to spread (especially the Corellium public demo running Ubuntu).


Which team is "they", and are "they" communicating with every other team?


[flagged]


What are you talking about?

Nobody here said anything about Apple "directly upstreaming code to Linux to support M1".

Parent said "adding support for booting alternative OS's on M1 Macs", which Apple did do.


What are _you_ talking about? Apple has never, through software support or otherwise, directly supported booting anything but macOS on bare-metal M1.


Not at all true. There's lots of code in macOS to support installing a different OS.


Maybe your feeble brain couldn't read what I said. I said M1, not macOS as a whole. This discussion is about M1; think again. I can guarantee you will find nothing pointing in that direction for M1. Go ahead, look.


Maybe this is an option that's in place just in case Apple decides to license the chip/boards to one or more OEMs.

OEM licensing makes sense for an SoC; plus, the more the chip sells, the lower the price per unit, so Apple would make money on the license and lower their costs.


The support is in the OS. It is there to allow you to install other OSes.


Do you have references to the incident(s) or published work(s)?


On what grounds do they sue? You can't talk about hardware you bought?


Anyone with enough money for expensive lawyers can sue you into oblivion. Sony being on the wrong end w/Bleem (and eventually losing) didn't stop them from running that company into the ground by outspending them.

You won't win a legal fight against Apple unless you have more money than they're willing to spend.


> You won't win a legal fight against Apple unless you have more money than they're willing to spend.

How does this process work, specifically? Does Apple secretly pay judges so that they favor them or what? Otherwise, wouldn't any judge dismiss Apple's claims as ridiculous?


The problem is the U.S. court system allows those with deep pockets to tie up the proceedings with endless delays over all kinds of arbitrary procedural matters. Those on the receiving end need to keep spending all this money to pay for their lawyers to handle all of the paperwork.

I don’t think it would be a problem in this case though. If Apple tried to do that to Alyssa I think she’d have the community jump to defend her. Perhaps even the EFF would take on her case. Apple would end up damaging their own brand with all the negative press that would generate. All of that on top of the stupidity of preventing people from doing what Apple themselves went out of their way to allow people to do.


> the U.S. court system allows those with deep pockets to tie up the proceedings with endless delays over all kinds of arbitrary procedural matters.

This is horrific, and obviously contrary to the concepts of justice and democracy. Why do people put up with this? Can't these behaviors be voted away?



> Can't these behaviors be voted away?

Politicians in the USA have the same problem, though. You either vote for politicians bought by corporations or your vote is "wasted".


Case law on IP matters like this is rarely clear enough to make any lawsuit ridiculous on the face of it. Look at the Google/Oracle suit that made it to the Supreme Court: these are complex issues.


Did you agree to not reverse engineer the tech stack? If so, they can sue you for breaking that agreement.


Was that in capital bold letters on the product package?

In the EU, EULAs that are either too long or too hard for normal people to read are not enforceable.

Also, if you bought it, it's yours and you can do whatever you want with it, like resell it (and Apple is being sued for not allowing users to resell App Store apps).

Also, the EU is going to ship a right to repair law that will force companies to make sure their products can be repaired after support ends. And that includes software. So any Mac that Apple sells will need to be fully usable / repairable / etc., just in case Apple goes out of business.


>force companies to make sure their products can be repaired after support ends. And that includes software

How is this possible? They force them to provide source and build info? Any software is "repairable" with enough effort, just ask the NSA.


They don't need to provide software.

Enough effort here means users should be able to bring their own software on the device.


EULAs are not legally binding in many jurisdictions.


That's what the court decides - so they sue to get the court to give an opinion.


Typically reverse engineering requires intent to reproduce a tech, not just learn a tech.

“ the reproduction of another manufacturer's product following detailed examination of its construction or composition.”


Are you quoting from a precedent there?


RE for interoperability is protected, that suit is dead in the water.



That never went to court.


That's an impressive amount of progress already. Congrats on the work. I'm waiting for Apple's next silicon to jump to ARM, and hopefully by then I'll be able to run Linux on it.


Natively, it's gonna take a long time, I'd wager. Virtualization works pretty well today, however.


Pretty much everything except the GPU is working in Corellium's quick'n'dirty "demo" Linux port. And you see how amazingly quickly the GPU driver is coming along… So not that long a time after all (I'm surprised too).


I'm positively surprised as well. But working well for day-to-day use… I don't know. It's not my field, though; this is just a more or less educated guess.


So, given the current trend in progress, when will we have truly usable linux on M1?


Would server vendors start putting M1s in data centers once Linux truly runs on them?

Wouldn't opening up the driver specification also let the M1 make inroads in the server space?


Why would a datacenter want a (weird, tiny, embedded) M1 when they can get an 80-core, 8-channel, 128-lane Ampere Altra – which is actually designed to be standards compliant and even has an open source firmware option?

(Even a future "large" Apple SoC is unlikely to be datacenter large…)


Try as I have, I can't imagine how Apple wouldn't sell more servers, if they bought Ampere, than the entire generic ARM server OEM industry can collectively (by generic, I mean to exclude custom silicon for AWS, Azure, et al.). I'm tempted to think the only reason there's been nothing from Apple in the server category is that they're quietly waiting for a good acquisition candidate to emerge, and Ampere looks very interesting at the moment.

Combine the Ampere chips with Apple's proven track record of optimisation and integration with the OS, fast forward a few years to a time when we're more accommodating of vendor OS control in exchange for getting back some independence from the cloud hegemony players, sprinkle over it all some fairy dust from a reincarnated WebObjects offering and checkout process management provided by Apple, and I can just about see the next two decades of extraordinary growth (presently challenged by the sheer scale of Apple's business) coming to fruition, in ways we'll mostly be very happy with for giving us back the jobs that are dying out in the white heat of today's oligopolist clouds, now gathering and blocking out the sun.


Apple isn't really into the server business.


And the many times they have tried, it either flopped outright or fizzled with little notice. The Xserve and Xserve RAID were very cool-looking and actually quite functional, but they never integrated well into overall datacenter operations, and ultimately that's what did them in. Unless Apple wanted to take it on to eliminate reliance on AWS, Azure and others, I don't see them having much incentive to care about the server space.

Indeed, if you ever see rumblings of them bringing more datacenter operations to be entirely in house then at some future point I could see them leveraging that to sell externally.

Unless they got really ambitious and planned on supplanting AWS entirely and thus their hardware gives them a significant edge.

But Apple has been so bad at software in general for so long, especially on the server side of things, I'm not holding my breath for that. They certainly have the resources to do it if they had the right person to drive it though, so never say never.


Please provide a link for this supposed Ampere open source firmware option.

So far all they've provided is an aspirational press release about how they are "working" to produce firmware with "freely redistributable binaries".

https://www.phoronix.com/scan.php?page=news_item&px=Ampere-O...

Ampere claims this future firmware will be "OSF certified", which doesn't mean much because "OSF does not require vendors to deliver firmware in open source form."

https://www.opencompute.org/projects/open-system-firmware



Servers/datacenters aren't just about being able to run Linux, though. How are they going to do firmware updates? Apple can break stuff, as there's no contract like there is with x86 server vendors, who test all their updates to make sure Windows and Linux run. Besides, there's the whole issue of configuration/systems management, remote access and bulk chip availability.


I would love to see the shakeup with more efficient chips pressuring Intel to innovate in the datacenter space. Being able to decrease our hosting bill would reflect well on me, and it just seems like the way the space will eventually progress.

However, I run a hackintosh at home. I personally love it; I had access to top of the line hardware years before Apple offered it, with an OS that I understand and work well in, etc. It is absolutely not the way to go for any kind of must-work production. It is brittle. Apple is in the enviable position of being the only consumer of its APIs, and they barely publish documentation on the public stuff, much less internal-only provisions. Unless Apple explicitly supports (read: $$$$) a datacenter application, I can only see burning buildings (from all the hair on fire) where stable datacenters should be.


Yeah, Intel and AMD both need competition from power-efficient chips that perform equally well and offer the same level of support for Windows/Linux. And as you said, it's not going to be Apple: they have a consumer focus and culture, and it's far-fetched to imagine they will do all the stuff necessary for enterprise adoption of their ARM chips. In the cloud space Amazon has Graviton, but there's not much hype about its performance, and you run into compatibility issues a lot for normal cloud workloads, at least in the enterprise.

Nvidia has a real shot here with ARM purchase to fill the gaps but I doubt that's something they have as a focus.


My guess is that part of their perceived income flow implies that macOS is running on it. Music purchases, surveillance marketing, app purchases, dunno.

If they had the extra volume potential, it would be cool to see an extra cost M1 computer that boots and runs Linux with no fuss.


They offer the Mac as a Complete Product™, An Experience™ (with the OS) – basically since the original 80s Macintosh. Apple is a vertically integrated gadget business. They just aren't in the bare metal hardware business and have no reason to be there (probably doing that would only confuse the product customers).

Because of all this, when they decided to transition the Mac to custom chips, they just took the whole iDevice stack and did the absolute bare minimum changes to make it into a "technically a general purpose computer" stack. It runs basically iPhone firmware expanded to allow OS choice. They made zero effort to use any industry standards, because there was no business reason to put any effort into that, unfortunately. So we have this very unusual SoC (fucking custom interrupt controller! Even Broadcom stopped doing that crap!) running very unusual firmware (iBoot, everything is Mach-O and stuff, and the OS choice screen is actually a little macOS app). But hey it's not locked down because the Mac line is supposed to be general purpose computers, so go ahead and port Linux but you're on your own.


It's not the bare minimum, though. They designed an entire boot policy framework that allows you to multi-boot different OSes in different security contexts, so you can have your fully secure and DRM-enabled macOS next to your own Linux install (and even then their design maintains basic evil maid attack security, even for Linux or other custom kernels). That's more than pretty much any other general purpose platform.


Yeah, I know, and I agree that that is kinda cool. But in terms of everything else, especially in terms of standard vs custom stuff it is a very small change from iOS devices. If there was any business reason to adopt standards (say Boot Camp would've been deemed important) – they would've at least used UEFI.


UEFI by itself is not useful; we're going to support UEFI+DeviceTree in our boot chain, but it's not going to run Windows.

What you want is ACPI support if you want the platform to be compatible with higher-level ARM boot standards. Unfortunately, ACPI support assumes stuff like GIC and other hardware details. You probably want EL3 for PSCI too. But that would be a massive change to their silicon design.

So, effectively, there is no reason for them to support UEFI or any other firmware-level stuff if their chips are not compatible with Windows, which they aren't, and re-engineering their silicon to be something that could run Windows (without one-off patches from Microsoft, like they did for the Raspberry Pi which is in the same boat) would've obviously been a huge cost to them and not something they decided made business sense at this point.

Obviously Microsoft could choose to add support for these chips to Windows like we're doing with Linux, but that's on them; it requires core kernel changes that are not something you can do in drivers (last I checked Windows doesn't even support IRQ controller drivers as modules).


Well, going straight to seeing Linux output on the EFI framebuffer instead of having to develop things like m1n1 first would've been useful. Not very useful but still.

Yeah, not using the GIC is what I hate the most. If the SoC was less… custom, we could've even written our own ACPI tables (possibly useful ones depending on e.g. which particular Designware crap they used – the better DW PCIe controllers do support ECAM, etc.) and avoided having to redo all the work in every kernel anyone wanted to run…
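For the curious: on SoCs without a standard GIC, the interrupt controller has to be described per-chip and backed by a custom driver in every kernel. A rough sketch of what such a device tree node looks like, loosely modeled on the mainline Linux `apple,aic` binding (treat the addresses and details as illustrative rather than authoritative):

```
/ {
	soc {
		#address-cells = <2>;
		#size-cells = <2>;

		/* Apple Interrupt Controller: fills the role a GIC would
		 * play on a standards-compliant ARM machine, but needs
		 * its own driver in every kernel that boots here. */
		aic: interrupt-controller@23b100000 {
			compatible = "apple,t8103-aic", "apple,aic";
			reg = <0x2 0x3b100000 0x0 0x8000>;
			interrupt-controller;
			#interrupt-cells = <3>;
		};
	};
};
```

With a GIC plus ACPI, none of this per-SoC description work would be needed, which is exactly the redone-in-every-kernel effort being lamented above.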


UEFI is beyond crusty. So glad they did NOT go down that path.


But how long before these 160-core Ampere-variety ARM servers are at prices entirely affordable for at least the average HNer?

The most recent NetApp filer I was paid to evaluate for a fairly large business customer is being sold at a price that, I realised, isn't actually so ridiculous for my home lab use. The amount of time expended on related work involved in using cloud computing for even very occasionally run jobs has put the likes of these Ampere servers well inside the bounds of reasonable, even thoroughly sensible, justification for private acquisition. My first thought was: if only I were even ten years younger, I'd advertise a house share and load a room with a couple of racks, trading all the interactions involved in setting up what I'm doing in the cloud for higher-quality interaction with some housemates whose projects could turn my practical investment into something much more interesting. I think a ratio of 128 cores per person feels about right?


Meanwhile I'm just going around running consumer Ryzens with ECC memory and all that. Rock solid.


A big shoutout to Alyssa here. For those of you who don't know, she is doing this work as a college student.


She's brilliant. Here's hoping she'll leave her mark on the software industry.


College students are just as smart as graduates and they tend to have much more time on their hands.

So yeah, this is cool work, kudos and everything, but the fact that a college student is doing it is the least surprising part of it if you really think about it.



