
Apple's M1 has made ARM mainstream for laptops. Let's see which company does the same for the server space.

Hopefully ARM in the cloud will result in lower prices.




I nominate Amazon for this award. As mentioned in another comment here, ~50% of newly allocated EC2 instances are ARM.


Apple's M1 will make ARM mainstream on the server side.


Apple hasn't seemed interested historically. And the Nuvia folks left Apple to found their company explicitly because they thought an M1-style CPU core would do well in servers, but Apple wasn't interested in doing that.


It's not that Apple will sell server chips; it's that developers can work locally on ARM, which makes it easier to deploy to servers. Linus Torvalds had a quote about this...


Linus' quote/post:

""" And the only way that changes is if you end up saying "look, you can deploy more cheaply on an ARM box, and here's the development box you can do your work on". """

(emphasis in original)

https://www.realworldtech.com/forum/?threadid=183440&curpost...

Thanks, I did not know about this!


It’s wild to consider that my next computer (an ARM M1 Mac) will be compiling code for mobile (ARM) and the cloud (ARM). I wonder if we’ll ever see AMD releasing a competitive ARM chip and joining the bandwagon.


Now with NVIDIA trying to acquire ARM, I'm not sure AMD will do that.


Ah, ok, that makes sense.


Personally, I don’t see many server admins choosing to pay the Apple Tax to get M1 into their data center. I don’t see how the performance-per-watt advantage could pay off that kind of tax.


I did not mean to imply that the actual M1 will be used in data centers. Apple is quite popular among developers, and it's also a trendsetter, which will probably lead other computer manufacturers to adopt ARM for personal computers. More people using ARM on their personal computers will lead to more ARM adoption in the data center.


> Apple is quite popular among developers...

The great majority of developers use Windows or Linux according to every Stack Overflow survey from the past ten years. Only ~25% use a Mac.


I don't think it's fair to interpret statistics from those surveys this way. There are many developers around the world, but value and money generation among them is not uniform; in other words, a small percentage of developers work for the companies that pay the largest share of server bills, and the macOS penetration rate among developers at those top companies is probably higher than average. (I'm not implying that developers who work on non-macOS devices create less value; your device has almost nothing to do with your impact. I'm just pointing out a trend and a possible misinterpretation of the data.)


25% is still "quite popular".

If 25% of servers switch to ARM, that is massive.


Ah, ok, I understand. Thanks for clarifying!


The OP wasn't suggesting Apple M1 chips in the data centre, but rather that Apple M1 chips in developer workstations will disrupt the inertia of x64 dev -> x64 prod. It will be easier for developers to choose ARM in production when their local box is ARM.
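A minimal sketch of what that looks like in practice, assuming a clang cross-toolchain is installed (the target triple below is an assumption; adjust for your setup): the same C source builds natively on an ARM laptop and cross-compiles for an ARM server, with architecture differences surfacing only as predefined macros.

    /* arch_probe.c - build natively on an M1 Mac with `cc arch_probe.c`,
     * or cross-compile for an ARM Linux server with something like
     * `clang --target=aarch64-linux-gnu arch_probe.c` (assumed triple). */
    #include <stdio.h>

    int main(void) {
    #if defined(__aarch64__)
        puts("compiled for arm64 - laptop and server share the ISA");
    #elif defined(__x86_64__)
        puts("compiled for x86-64");
    #else
        puts("compiled for another architecture");
    #endif
        return 0;
    }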


Apple have been hiring for Kubernetes and related roles. This may well be for their own devops for Apple services.

However, I'd be amazed if they don't release some kind of managed service for running Swift code in the cloud. Caveat emptor, though.


I have a similar view of what Apple are up to. Too many high-profile hires to be purely toiling in the mines.


This is inevitable.


My operating systems teacher in 2001 was a total RISC fan and always said it would eventually overtake CISC.

I guess he didn't expect this to take until well after his retirement.


ARM today is probably more CISCy than what he considered CISC in 2001.


The best analysis of RISC vs CISC is John Mashey's classic Usenet comp.arch post, https://www.yarchive.net/comp/risc_definition.html

There he analyses existing RISC and CISC architectures, and counts various features of their instruction sets. They clearly fall into distinct camps.

But!

Back then (mid-1990s) x86 was the least CISCy CISC, and ARM was the least RISCy RISC.

However, Mashey's article was looking at arm32, which is relatively weird; arm64 is more like a conventional RISC.

So if anything, arm is more RISC now than it was in 2001.


amd64 is more RISC now than ia32 was in 2001 as well.


AArch64 is load-store + fixed-instruction-length, which is basically what "RISC" has come to mean in the modern day. X86 in 2001 was already… not that :)


I always understood it as that too.


Eh, it has a lot of instructions, but that was only the surface of RISC. It's a deeper design philosophy than that.


Also, isn't the x86 ISA just a translation layer today? I thought that on the metal there is a RISC-like architecture these days anyway.


Not really, because the variable-length instructions have consequences - mostly good ones, because they fit in memory better.

Also, the complex memory operands can be executed directly because you can add more ALUs inside the load/store unit. ARM also has more types of memory operands than a traditional RISC (which was just whatever MIPS did).
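To make the memory-operand point concrete, here's a hedged sketch: one C statement that x86-64 can typically encode as a single read-modify-write instruction with a scaled memory operand, where a load-store ISA like AArch64 emits a separate load, add, and store. The assembly in the comments is typical compiler output under the usual ABIs, not guaranteed.

    /* One C statement, two encodings.
     * Typical x86-64 (System V ABI: a in rdi, i in rsi, x in edx):
     *     add dword ptr [rdi + rsi*4], edx   ; one instruction, memory operand
     * Typical AArch64 (a in x0, i in x1, x in w2):
     *     ldr w8, [x0, x1, lsl #2]           ; explicit load
     *     add w8, w8, w2
     *     str w8, [x0, x1, lsl #2]           ; explicit store
     * Exact output varies by compiler and flags. */
    void bump(int *a, long i, int x) {
        a[i] += x;
    }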


I had the impression that M1 would outperform others because it didn't have variable-length instructions.

Why do you think they have good consequences?


As I've understood it, the tradeoff is:

The upside to variable-length instructions is that they are on average shorter, so you can fit more into your limited cache and make better use of your RAM bandwidth.

The downside is that your decoder gets way more complex. By having simpler decoders, Apple can instead have more of them (an 8-wide decode) and a big reorder buffer to keep them filled.

Supposedly Apple solved the downside by simply throwing lots of cache at the problem and putting the RAM on-package.

I'm not a CPU guy and this is just what I've gathered from various discussions, so I'm happy to be corrected.
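A back-of-the-envelope illustration of that cache-density tradeoff. The 3.5-byte average x86-64 instruction length below is an assumed ballpark (real averages vary by workload); AArch64 instructions are always exactly 4 bytes.

    /* density.c - illustrative arithmetic only; the x86-64 average length
     * is an assumption, not a measurement. */
    #include <stdio.h>

    int main(void) {
        const double icache = 32 * 1024;  /* a common L1 I-cache size, bytes */
        const double x86_avg = 3.5;       /* assumed average, variable-length */
        const double a64_len = 4.0;       /* fixed AArch64 encoding */

        printf("x86-64: ~%.0f instructions in a 32 KiB I-cache\n",
               icache / x86_avg);
        printf("AArch64: %.0f instructions in a 32 KiB I-cache\n",
               icache / a64_len);
        /* Denser encodings stretch cache and fetch bandwidth further; the
         * price is a much more complex, serialized length-decode step. */
        return 0;
    }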


In most cases, yes, but that doesn't get rid of the complexity for compiler backends, which can't directly target the micro-op instruction sets Intel actually uses and have to target the compatibility shim layer instead.



