
I don't know anything about chips. But I know the ARM architecture has been around for decades. Why is it hot again? I get the point of using it in smartphones and tablets, but why should servers use ARM?



It's hot because the tablet and smartphone market exploded, creating demand for high-performance, low-power cores that could be flexibly integrated into SoCs with other parts. With that new market came volume, and in the processor market, volume is important. The reason x86 overtook RISC architectures is that the high volume of x86 chips generated revenues that allowed for massive capital investment in x86 designs. The tablet and phone market is driving a similar process for ARM chips. Right now, there are at least three very well-funded lines of ARM microarchitectures: Qualcomm's, Apple's, and ARM's own. It's been a long time since a non-x86 platform got that kind of investment.


You are right that smartphones have driven demand for higher performance and hence more expensive, higher-margin ARM CPUs.

But overall ARM volume has been far higher than x86 volume for a long time even excluding all smartphones and tablets.

Most of our x86 servers at work have more ARM CPUs on them than x86 cores (most of the hard drives have controllers with ARM CPUs, some of them multi-core). You'll also find ARM all over the place, from washing machines to set-top boxes to microwaves. You'll even find ARM cores in some SD cards.

I believe the projected number of ARM cores shipped last year was around 3 billion. I doubt x86 passed 500 million, which also means that both MIPS and PPC are competing with x86 for second place in number of cores for 32-bit+ CPUs. (At the 16-bit-and-below end you also have surprises like 6502 derivatives shipping in ludicrous volumes.)

So x86 has been "hot" in the market for main CPUs in devices consumers recognise as computers, and has been by far the most profitable architecture for a long time. Outside of that, though, it's at best in second place in total volume, and in most non-computer markets it's more likely to place 3rd to 5th.


Intel is a big ship to turn around. When they finally do, they will end up more than competitive.


That, and because historically the ARM instruction set has been the gold standard for power-efficient 32-bit architectures, especially in Thumb mode.

Open-source software and the extreme efficiency goals of data centers make it an interesting alternative to x86 now.


If you look at it from the perspective of expressiveness, the CISC-ness of the x86 ISA also allows far more opportunity for hardware-level enhancements than a RISC-style one: the code density is higher, meaning better cache usage and less memory bandwidth needed (especially with multiple cores), and there are still a lot of relatively complex instructions with the potential to be made even faster. RISC came from a time when memory was fast relative to the CPU and the bottleneck was instruction execution inside the CPU; now it's the opposite, and memory bandwidth and latency are becoming the bottleneck. There's only so much you can do to speed up an ARM core without adding new instructions.

Linus has some interesting things to say about this too: http://yarchive.net/comp/linux/x86.html


It's pretty weird to think that x86 gives Intel any advantage over ARM.

Let's see: x86 code density is horrible for a CISC; there is hardly any advantage over ARM, which does great for a RISC. Also remember that the memory bandwidth is primarily a problem for data, but not code. ARM64 is a brand-new ISA; it's the x86 ISA that is a relic from the times when processors were programmed with microcode. Intel is doing a great job of handling all this baggage, but to claim that the ISA gives Intel an advantage is ridiculous.

And finally, Linus has been an Intel fanboy since day one. Go read the USENET archives to find out. He received quite a bit of criticism because the first versions of Linux were not portable but tied to the i386.


x86 code density may not be optimal, but it's better than regular ARM's; only Thumb mode can beat it, and just barely.

> Also remember that the memory bandwidth is primarily a problem for data, but not code

RISCs, by design, need to bring data into the processor for processing. But I see things like http://en.wikipedia.org/wiki/Computational_RAM being more widely used in the future, where the computation is brought to the data, and this is much easier to fit to a CISC like x86, with its ability to operate on data in memory directly with a single instruction. Currently this is done with implicit reads/writes, but my point is that the hardware can then optimise these instructions however it likes.
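
To make the memory-operand point concrete, here's a tiny C sketch of my own; the codegen claim in the comment is about what compilers typically emit, not a guarantee for any particular toolchain:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bump every element of a table in place. On x86 the loop body can
       typically compile to a single read-modify-write instruction that
       operates on memory; on a load/store RISC it becomes a separate
       load, add, and store. (Illustrative claim about typical code
       generation only.) */
    static void bump_all(uint32_t *tab, size_t n) {
        for (size_t i = 0; i < n; i++)
            tab[i] += 1;
    }

    int main(void) {
        uint32_t tab[4] = {1, 2, 3, 4};
        bump_all(tab, 4);
        printf("%u %u %u %u\n", (unsigned)tab[0], (unsigned)tab[1],
               (unsigned)tab[2], (unsigned)tab[3]);
        return 0;
    }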

The underlying principle is that breaking down complex operations into a series of simpler ones is easy; combining a series of simpler operations into a complex one, once the hardware can do the complex one faster, is much harder. x86 lagged behind in performance at the beginning because of its sequential microsequencer, but once Intel figured out how to parallelise that with the P6, they leapt ahead.

Linus being an Intel fanboy has nothing to do with whether x86 has an advantage or not. But even if you look at cross-CPU benchmarks like SPEC, x86 is consistently at the top of per-thread, per-GHz performance, beating out the SPARCs and POWERs, and those are high-performance, very expensive RISCs. I'd really like to see whether AMD's ARMs can do better than that.


Interesting, but I wonder how much faith to put in Linus' claims given the demise of Transmeta.


I already feel like a grandpa talking about 8080 and x86 processors. The next generation may not even remember what x86 is. The movie The Last Mimzy predicted Intel would still be around in the far future, when they are able to fabricate self-assembling smart chips.


Same reason that x86 became hot, really. There are these newfangled PCs/smartphones providing ridiculous volumes that create network effects and defray design expenses. Back in the day the idea of x86 in a server was crazy, but it was able to break into the server market from the bottom and mostly consume it. The same might happen with ARM, or it might not, since Intel is in a better position than the RISC vendors were, with its near monopoly giving it access to phenomenal engineering resources.


> Intel is in a better position than the RISC vendors were

Actually, Intel might be in a worse position with respect to vendor lock-in. I'm guessing a lot of early servers' lower layers like OS, webserver, etc. were proprietary; convincing the vendor to support x86 would have been a hard sell; and porting your application to an x86 environment was difficult.

All of these things would have had a tendency to lock people into their existing hosting choices.

Nowadays most servers run mostly or completely FOSS (at the lower layers) that can be easily ported to ARM. I'd imagine porting code to x86 from VAX or DEC or a mainframe or whatever was a lot more painful than porting PHP, Django, or Ruby web apps to ARM today.

Of course, Intel does have deeper pockets and much of the desktop market, and may well be able to use that to keep ARM in check despite the fact that switching CPU architectures is probably much easier for website owners today than it was when Intel was trying to break into the server market.


Superior performance per watt.


x86 chips (especially Intel's) are leagues ahead in terms of performance per watt.

In fact, not only in performance per watt, but also in performance per dollar. It's just that ARM designs for lowest power consumption while Intel/AMD design for maximum performance.


There are three metrics getting thrown around:

* Performance per dollar operating cost (performance per watt is closest to this)

* Performance per dollar capital expenditure (important for desktop systems, where operating costs are low)

* Performance per dollar TCO (sum of the above two)

The third one is the important one.
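
A back-of-the-envelope sketch of that third metric, with purely made-up numbers (price, wattage, electricity rate), just to show how capital cost and operating cost combine:

    #include <stdio.h>

    /* Toy perf-per-TCO comparison over a 3-year service life.
       Every figure here is a hypothetical placeholder. */
    int main(void) {
        double hours = 3.0 * 365 * 24;   /* service life in hours */
        double usd_per_kwh = 0.10;       /* assumed electricity price */

        /* hypothetical box A: perf units, purchase price, watts */
        double perf_a = 100.0, capex_a = 2000.0, watts_a = 200.0;
        /* hypothetical box B */
        double perf_b = 60.0, capex_b = 800.0, watts_b = 60.0;

        double tco_a = capex_a + watts_a / 1000.0 * hours * usd_per_kwh;
        double tco_b = capex_b + watts_b / 1000.0 * hours * usd_per_kwh;

        printf("box A: %.1f perf per $1000 of TCO\n", 1000.0 * perf_a / tco_a);
        printf("box B: %.1f perf per $1000 of TCO\n", 1000.0 * perf_b / tco_b);
        return 0;
    }

Which box wins depends entirely on the numbers you plug in; the point is just that both the purchase price and the power bill end up in the denominator.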


Good summary, but after reading this thread I'm still confused.

Why ARM?

How does the ISA impact the aforementioned criteria?

Why would a phone demand a different ISA?


The original choice of ARM for mobile and x86 for desktop is basically a historical accident.

The differences between modern ARM CPUs and modern x86 have less to do with the ISA itself and more to do with the way ARM CPUs have been designed to be low-power for decades and have worked their way up the performance scale, while x86 has been designed for performance and has only lately been emphasizing low power. This leads to different design points.



Because everything today is about the heat generated by computation. In a phone, it wastes the battery and is unpleasant for the user. In the datacentre, heat determines how much computation you can do in the volume of space you have, and how much you have to spend on cooling systems (the running of which is expensive too). So datacentre operators that already have a building are facing a choice: get a new building, or make better use of the one they have.

ARM cores are typically slower in absolute terms than Intel cores, but at a given level of power, you can run more of them.
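
As a rough sketch of that trade-off (the per-core wattage and performance figures below are invented, just to show the shape of the argument):

    #include <stdio.h>

    /* Toy rack-density model: how many cores fit under a fixed power
       budget, and the aggregate throughput that buys. All per-core
       numbers are hypothetical. */
    int main(void) {
        double rack_budget_w = 10000.0;            /* assumed rack power cap */
        double big_w = 15.0, big_perf = 10.0;      /* faster, hotter core  */
        double small_w = 2.0, small_perf = 3.0;    /* slower, cooler core  */

        double big_cores = rack_budget_w / big_w;
        double small_cores = rack_budget_w / small_w;

        printf("big cores:   %4.0f cores, aggregate perf %.0f\n",
               big_cores, big_cores * big_perf);
        printf("small cores: %4.0f cores, aggregate perf %.0f\n",
               small_cores, small_cores * small_perf);
        return 0;
    }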


What evidence is there that performance per watt is actually better on ARM when dealing with server processors?


Because there isn't any type of x86 processor that beats a comparable ARM processor for efficiency. If you could make an efficient x86 processor, Atom would be it, and it's less efficient than ARM.

The x86 ISA fundamentally takes more silicon to implement than ARM. More gates = more power.
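
Roughly, dynamic power at a fixed voltage and clock scales as activity x total capacitance x V^2 x f, and total switched capacitance grows with gate count. A toy model (every constant below is invented):

    #include <stdio.h>

    /* Toy dynamic-power model: P = activity * C_total * V^2 * f,
       where C_total grows with the number of gates. All constants
       here are hypothetical placeholders. */
    int main(void) {
        double activity = 0.1;       /* fraction of gates switching per cycle */
        double c_per_gate = 1e-15;   /* farads per gate, assumed */
        double v = 1.0;              /* supply voltage in volts */
        double f = 2e9;              /* 2 GHz clock */

        for (double gates = 50e6; gates <= 400e6; gates *= 2.0) {
            double p = activity * gates * c_per_gate * v * v * f;
            printf("%3.0fM gates -> %5.1f W dynamic\n", gates / 1e6, p);
        }
        return 0;
    }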


Everything Intel sells today clobbers any currently-marketed ARM chip on per-unit-energy computation performed. The race is not even close. ARM is only of interest if you are constrained by something other than compute (phones) or you don't know how to program and you are wasting most of the performance of your Xeons. The latter category contains nearly the entire enterprise software market and most other programmers as well.


Or your program is entirely constrained by I/O, so most of the power of a Xeon is wasted while you still have to pay the premium for it.

This chip is interesting not because of the CPU core in it, but because it has two presumably fast 10GbE interfaces and the possibility of a large amount of RAM in a cheap-ish chip.


> More gates = more power.

This is not strictly true; the processor's throughput also matters.

Total Power consumed = Power consumed by gates * Time taken to finish the job


There's another variable to throw into the mix: not all gates are created equal. A 28nm gate (this new processor) takes a lot more power than a 22nm gate (new Intel processors).


Strictly speaking:

Total energy consumed = Power * Time
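
A quick sketch of why that distinction matters (wattages and runtimes invented): a core that draws more power but finishes sooner can still use less energy for the job.

    #include <stdio.h>

    /* Energy = average power * time to finish. The figures below are
       hypothetical, just to show that the higher-power core is not
       automatically the higher-energy one. */
    int main(void) {
        double fast_w = 40.0, fast_s = 10.0;   /* hypothetical fast core */
        double slow_w = 10.0, slow_s = 50.0;   /* hypothetical slow core */

        printf("fast core: %.0f J for the job\n", fast_w * fast_s);
        printf("slow core: %.0f J for the job\n", slow_w * slow_s);
        return 0;
    }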


Do you have a source for any of this? x86 is much more powerful than ARM per watt, being exponentially faster at most math. I've never had anyone seriously propose that ARM is more efficient than x86 at anything other than not pulling watts from a Li-Ion battery.


Can you elaborate what you mean by "exponentially"?

For ARMv7 vs x86, yes, x86 just destroys ARMv7 (Cortex A15 etc.) in double (float64) performance.

While I do think x86 is still faster than ARMv8, the gap is likely much smaller per GHz, because ARMv8 NEON now supports doubles much like SSE. Of course, Haswell has wider AVX (256-bit) and the ability to issue two 256-bit-wide FMAs per cycle (16 float64 ops). Cortex-A57 can handle just a quarter of that: 4 float64 FMA ops per cycle.
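
Working through that per-cycle arithmetic (taking the figures as stated above, not as verified specs): flops per cycle = FMA issues x lanes per vector x 2 ops per FMA.

    #include <stdio.h>

    /* Peak float64 FLOPs per cycle from the figures quoted above:
       FMA issues per cycle * (vector bits / 64) lanes * 2 ops per FMA.
       The 4 flops/cycle figure for Cortex-A57 is backed out as the
       equivalent of one 128-bit FMA issue per cycle. */
    static double flops_per_cycle(int fma_issues, int vector_bits) {
        return fma_issues * (vector_bits / 64.0) * 2.0;
    }

    int main(void) {
        printf("Haswell, 2 x 256-bit FMA:    %2.0f float64 flops/cycle\n",
               flops_per_cycle(2, 256));
        printf("Cortex-A57, 1 x 128-bit FMA: %2.0f float64 flops/cycle\n",
               flops_per_cycle(1, 128));
        return 0;
    }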

That said, low- to mid-range servers are not really doing much number crunching. They're all about branchy code such as business logic, encoding/decoding, etc., or waiting for I/O to complete.

So why would you care about math in a low-end server CPU if it's not being used anyway?


Maximizing density while keeping everything within operating temperatures is one of the toughest parts of data center operations, not to mention the cost of all that waste heat. Many tasks are not CPU bound, so ARM is plenty good for them.



