IBM opens up Power chips, ARM-style, to take on Chipzilla (theregister.co.uk)
137 points by mmc on Aug 6, 2013 | 94 comments



I'm not quite sure where the news is here. IBM has been licensing the architecture and cores for years (2007 is mentioned[1]).

All I see in the actual announcement[2] is marketing hype (not news) about software that already exists ("open firmware" == U-Boot, "open software" == Linux) and hardware that has been licensable for years.

[1] http://en.wikipedia.org/wiki/Power_Architecture#Licensing

[2] http://www-03.ibm.com/press/us/en/pressrelease/41684.wss


Reading the press release, the firmware mentioned is neither Das U-Boot (a bootloader) nor Open Firmware [1], a boot standard used by IBM's Power systems. It is the actual chip firmware, and this is the first I've ever heard of a silicon vendor doing this! I'm excited to see what the open source community will be able to do with it.

[1] http://en.wikipedia.org/wiki/Open_Firmware


ARM is an "open" architecture—it's fairly easy to roll your own if you have an FPGA and the patience to read through the architecture specifications for a couple of months. I would be highly surprised if you could get equivalent documentation for POWER without a significant bit of cash. Not that I blame them; even open standards can cost a couple hundred dollars.

I don't think this would change anything, though; there's not enough gain to justify an architecture switch for either hardware or software people. And that's a damn shame. I cut my teeth on PowerPC and I couldn't imagine a better way; it's a beautiful architecture. AltiVec STILL makes Intel's vector processing look like a toy.
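
To make the AltiVec point concrete, here's a minimal sketch (assuming GCC with -maltivec on PPC and plain SSE intrinsics on x86; the two functions target different CPUs, so treat this as an illustration rather than one compilable file). AltiVec had a single-instruction fused multiply-add from the start, while SSE of that era needed a separate multiply and add, and x86 didn't get an FMA instruction until many years later:

  /* AltiVec (PowerPC, compile with -maltivec): fused multiply-add
     is one instruction, vmaddfp. */
  #include <altivec.h>

  vector float madd_ppc(vector float a, vector float b, vector float c)
  {
      return vec_madd(a, b, c);                /* a*b + c, fused */
  }

  /* SSE of the same era (x86): a multiply followed by an add. */
  #include <xmmintrin.h>

  __m128 madd_x86(__m128 a, __m128 b, __m128 c)
  {
      return _mm_add_ps(_mm_mul_ps(a, b), c);  /* two instructions */
  }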


https://www.power.org/documentation/power-isa-version-2-07/

BTW, if you want to roll your own ARM you have to negotiate for an architectural license.


> BTW, if you want to roll your own ARM you have to negotiate for an architectural license.

I would hope not for a personal FPGA... that's hilariously unenforceable.


Reading between the lines, it sounds like a Tyan whitebox with Power8 processors, Nvidia GPUs, and Mellanox NICs running Linux. Might be good for HPC.


Probably will cost $50,000 and run as fast as a new smartphone...


Don't worry, you can pay $1,000/day to unlock a new core when you need the power of two smartphones.


Don't forget the $30,000 every year thereafter for the maintenance contract.


Don't forget the million-dollar annual license for their "top notch" *cough cough* software.


"With its embedded Power chip business under assault from makers of ARM and x86 processors"

I didn't know the Power chip business still existed.


I work in automotive software, and PowerPC CPUs (albeit from Freescale and STMicroelectronics, not IBM) are more popular than ever. It's a very common choice for powertrain ECUs.


Big in avionics for some reason too. Wasn't it the F-22 that received an upgraded PPC that was in the news a while back?


A bit of research says that the F-22 uses a pair of Raytheon CIPs, which are PowerPC based.

http://www.raytheon.com/capabilities/products/f22_cip/


Power Architecture is used in spacecraft as well:

The [PowerPC] 603e processors also power all 66 satellites in the Iridium satellite phone fleet. The satellites each contain seven Motorola/Freescale PowerPC 603e processors running at roughly 200 MHz each.[1]

There is a radiation hardened version called "RHPPC" based on PowerPC 603e made by Honeywell & Freescale. RHPPC is equivalent to the commercial PowerPC 603e processor with the minor exceptions of the phase locked loop (PLL) and the processor version register (PVR). [2]

[1] http://en.wikipedia.org/wiki/PowerPC_600#PowerPC_603e_and_60...

[2] http://en.wikipedia.org/wiki/RHPPC


The F-22 uses PowerPC processors in the upgraded Common Integrated Processor (CIP) avionics, partly for compatibility with the F-35 Integrated Core Processor (ICP) avionics.

F/A-18s also use PowerPC processors in the Advanced Mission Computer (AMC) avionics.


It's also in the Airbus A400M, and other Airbus vehicles.


Any particular reason? Error rate, power use, something else?


General Motors was a big user of Motorola 68k variants. Workstations and personal computers that used the 680x0 series switched to the Power series, so maybe the developers just found them easier to migrate to.


Personally I've just found the architecture clean and well-designed. Even if this only helps the developers I'd say it's worth it.


The ISA is not better than others in any real way, but IBM put a lot of early work into RAS features for their server chipsets, and this sort of leaked into the automotive and aviation world.

If you want to run three CPUs in lockstep, verifying each other's results, the PPC world already has the infrastructure. This and other similar things make it an easy choice for some applications.
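
The voting half of that is conceptually simple. Here's a minimal sketch in C of a 2-out-of-3 majority vote, the software analogue of what lockstep hardware does on every cycle (the function and its interface are hypothetical, purely for illustration):

  #include <stdint.h>

  /* Toy 2-out-of-3 voter: returns the majority value and flags
     any disagreement so the odd CPU out can be taken offline. */
  uint32_t vote3(uint32_t a, uint32_t b, uint32_t c, int *fault)
  {
      *fault = (a != b) || (b != c);   /* any mismatch is a fault */
      if (a == b || a == c)
          return a;                    /* a is in the majority */
      if (b == c)
          return b;                    /* a was the odd one out */
      *fault = 2;                      /* no majority at all: fail safe */
      return 0;
  }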


IIRC, the service processor on IBM x86 servers is a PPC (it runs the light-path diagnostics, manages remote boots, etc.).


All three of the PS3/XBox360/Wii generation consoles used POWER-derived chips: very different chips, of course, but POWER all the same. The Wii U uses a POWER chip too.

POWER stalled in the performance/watt category around the time of the Power Mac G5: unfortunate, since this was around the time that performance/watt was starting to be considered a real thing. That hurt the architecture's standing terribly. But it's still around.


Any ideas why PowerPCs stalled in performance? Is PowerPC really an architectural dead end?


Intel opened a CPU design facility down the street from Motorola's PowerPC operation and lured away key parts of the team in '98-'99, setting Motorola back enough that they could no longer be ahead in the horse race. (For years, PPC had beaten Intel in some categories when the new PPC architectures came out, then Intel would pull ahead until the next generation. Apple always found something good to put on a Keynote slide. After the brain drain that wasn't going to happen.)


My take is that it's all about the execution of creating a chip. x86, for example, is a much worse ISA, but there is some research (no source on this assertion) that says at the end of the day it doesn't matter too much, since the machine code is just translated into some internal RISC code.

So, it isn't that PowerPC is doomed from a technical standpoint. Instead it's all about money, business cycle stuff. Fewer sales mean less R&D. Less R&D means you fall behind the competition. IBM doesn't really have the heavy hitters they used to in the chip business (relative to Intel/ARM/TSMC). If you want the newest, flashiest tech, you can't really use their fab - that sort of stuff.

Every chip technology node is getting more expensive for foundries, which means the chip market will likely naturally converge to a small number of players.


I believe you're thinking of this [1].

[1] http://research.cs.wisc.edu/vertical/papers/2013/hpca13-isa-...


I think it's truer to say that PowerPC stalled in performance per watt. The high-end Power7/Power8 chips are massively powerful, but that comes with power and cooling requirements that aren't going to work in a laptop.


Those post-date the G5 by several years, though. Things had gotten better for POWER by then, but it had already lost much of its market.


Kind of true. I'd forgotten how much of a MHz bump the P6 was -- it came out 12 months after the first Intel Mac Pro (and ran at 4.7 GHz at the high end).

The P5 was a contemporary of the G5 (although the G5 was really a P4). But it wasn't quite the beast the P6 was.


And the G5 (the PPC 970 in IBM's nomenclature) was an "ultra-light" POWER4 — the POWER5 was ~30% quicker than the G5. POWER never really fell behind at the high end — there was just no real focus on anything except the high-end.


The high end POWER stuff always had crazy cooling, though.


Last I heard (it's been about a year since I looked into numbers), Power/PPC chips appeared to be outselling x86 chips by a substantial margin, with the caveat that it's hard to come by numbers from the "smaller" x86 vendors (like Via), which might very well be selling larger numbers than expected of the low-end alternatives, but at much lower prices and margins, and so fly under the radar.

x86 is likely something like the 4th or 5th largest chip architecture by volume shipped today. Last estimate I've seen was in the 360 million range per year, maybe as high as 400 million.

That's behind ARM, which is likely to ship 3 billion this year, and MIPS and PPC, each probably in the 500+ million range, unless there have been massive unexpected changes over the last year.

x86 gets all the attention because it's on desktops and in laptops, and because Intel is disproportionately important: their revenue is several times that of any other CPU manufacturer, since nobody else ships nearly as many high-end chips (e.g. this puts Intel at 7 times Qualcomm, in second place, in revenue from CPUs/MPUs last year: http://www.xbitlabs.com/news/cpu/display/20130521205843_Inte... )

And for the surprise contender, it is unclear where the 6502 architecture falls: it ships in "hundreds of millions" a year, according to Western Design Center. Note that this might very well largely be in the form of licenses for embedding the cores in custom ASICs or FPGAs, so whether you'd want to count that is another matter (as an example, some Amigas had keyboards with an embedded 6502 core + PROM and a tiny amount of RAM). It's possible that some of the other extremely low-end 8-bit CPU cores that are still being used as microcontrollers also ship volumes like that.

I've seen no indication that SPARC is anywhere in the running.


Of course it still exists. They're doing this now because they don't want to be steamrolled by the raging success that is the OpenSPARC project.


Some of us had m88k systems (not bad, actually) and 88open helped that architecture take over the world. It even merits two whole sentences on Wikipedia! http://en.wikipedia.org/wiki/88open


"Mostly harmless."

For some reason that was the first thing that came to mind, reading your comment ;-)


Motorola thought it was going to take over the world. Our company made a desktop environment for Unix so all the vendors would send us their equipment to port to. At one point I counted over 20 different versions of Unix and associated hardware. We even had a Sony Workstation.

The M88K system had big vertically stackable blocks with ribbon cable connectors at the back between the units for power and data. [1] [2] One unit consisted of a tape drive and a floppy drive. The floppy drive was actually SCSI and very fast (over 100 KB/s, when most floppy drives top out at 25 KB/s). I can't imagine how much that drive must have cost.

The system arrived in a massive box, and for some bizarre reason they stuffed the empty space with O'Reilly books. There were lots of "read me first" documents, and "read me first" documents for the read-me-firsts. I ignored all of them.

About a year later the machine failed. It turned out there was a filter by one of the fans and one of the readme firsts told you to clean it once a month. Eventually the system had overheated and shut down.

We also had a Data General system that used the m88k. They called it the Aviion which was annoying to read and type. The DG folk we dealt with were by far the nicest out of all the vendors. Both the DG system and the Motorola system ran lightly modified SVR4. It was basically Unix of the time, and worked just fine.

The Motorola system ended up acting as the office server for various things because of its high spec. Hold onto your chair - it was blazingly fast at 40MHz, and had a whopping 64MB of memory. At one point we spent a thousand pounds to get a 1GB hard drive and used it as a Usenet server.

[1] Front view: http://www.openbsd.org/images/mvme187-1.jpg

[2] Back view, although the system I used didn't have that much networking: http://www.openbsd.org/images/mvme187-2.jpg


Thanks for sharing this. The machine looks like something out of an alternative future...


It is the execution that counts. The world could use a third mass architecture, especially one that is not too tightly IP-locked. The whole web 0.1, 1.0, 2.0 thing came from the fact that PC clones were everywhere.


> The whole web 0.1, 1.0, 2.0 thing came from the fact that PC clones were everywhere

I was there and it didn't! TBL did development on a NeXT. There were some text-mode browsers that worked on Unix only. The popular graphical browser was Mosaic[1], which started out as Unix/X Window System only. It was run on Sun, HP, IBM, SGI, etc. workstations (32-bit).

At that time mainstream Windows was still 16-bit. It didn't even include TCP/IP: there were various third-party stacks (for a price), and later a Microsoft stack for Windows for Workgroups 3.11. Some brave people did start porting Mosaic, but it was hard because a completely different GUI API and semantics were needed, as well as dealing with the cramped machines compared to the 32-bit workstations. It was late 1994 before these ports became somewhat usable.

Netscape was formed around then, and the big difference was they made their code portable to multiple GUIs from the very beginning (a lot easier than retrofitting it). By 1995 every platform had to have TCP/IP and a web browser to be relevant. The web spread because no one was in charge, and everything had to work everywhere on a wide variety of screen sizes, operating systems and user environments.

i.e. it was the diversity of systems out there that was the cause, not that you could buy the PC architecture from different companies.

[1] http://en.wikipedia.org/wiki/Mosaic_(web_browser)


It's not clear if it is third or fourth now. MIPS may be third, after ARM.


It really depends on whether you're counting number of chips or dollars in sales. POWER chips sell at a premium inside high-end enterprise systems, while MIPS is embedded in lots of places like handheld consoles and routers: more chips sold, but for less money.


The high-end POWER chips from IBM make up a minuscule fraction of the overall Power/PPC market in terms of units. PPC sells in comparable unit numbers to MIPS, but mostly from architecture licensees like Freescale.


Depends if you count by revenue, or by units.

In shipped units, MIPS is quite likely either second after ARM, or third after ARM and PPC.

MIPS was estimating an expected 500 million units for last year, I believe - I don't know if they met it. PPC has been estimated in the same ballpark.

Unless Via's x86 sales are far higher than expected, x86 is likely below 400 million units shipped a year.


>The whole web 0.1, 1.0,2.0 came from the fact that PC clones were everywhere.

Can you explain that? Because I cannot make any sense out of it.


Cheap IBM PC clones made it possible to have a lot more computing devices in every home, which allowed the dot-com boom in the late '90s.

The clones were possible because IBM, due to various business, legal and other circumstances, could not stamp them out. So we got to the point where computing penetration was fast and high enough for the whole net thing to make sense.


Well, OK, I guess.

I just hope people realize that the Internet was the most compelling and most popular way to "get online" even before there were significant numbers of PC clones on the Internet.

Specifically, although it was technically possible to give a Windows machine a direct TCP/IP connection to the Internet, if you were using a PC clone to access the internet before July 1993, you were probably using the PC clone to run a terminal-emulation program (e.g., Kermit) to log in to a Unix shell account.

(I chose July 1993 as the date by the way because that was the month in which the New Yorker ran the cartoon, "On the Internet, nobody knows you're a dog," which was the first reference to the Internet in a mainstream publication that seemed to arouse the interest or the curiosity of large numbers of readers.)


Actually, the most popular way for ordinary (non-academic) users to get online was through services such as CompuServe, AOL and Prodigy. The web thing and Windows really took off after the widely-publicized launch of Windows 95, which came shortly after the widely-publicized Netscape IPO.


The Internet had more non-academic users in July 1993 than CompuServe, AOL or Prodigy. Most of those non-academic users connected "through work" (either at work or by dialing in to a pool of modems maintained by their employer).

If you remove people who connected through work from the definition of ordinary users, then AOL or Compuserve might have had more ordinary users than the Internet, but not vastly more. There were at least a dozen ISPs offering shell-account-style access to the internet in July 1993, Netcom, Best, Panix and The World being big US-based ones.


Depends how you define "internet users". You're certainly right if you think of email and FTP. However, online services were more widely used by ordinary Americans until they got web access ... and an awful lot of them got their first web access via AOL.

You may recall the huge impact that AOL had when it connected to the web. AOL also bought Netscape, which had been a dominant force in the early commercialization of the web (along with Windows 95), before taking over Time-Warner.


There's a reason I put a date in my sentences. Yes, a lot of Americans got their first web access via AOL, but I would be shocked to learn that it was possible to browse the web via AOL in July 1993. I did not succeed in my attempt just now to find out when it became possible, but consider that AOL did not provide its users with access to Usenet until September 1993.

And consider that in July 1993 Usenet was still much bigger and more important than the web. The web grew very quickly, but it takes a while to grow from zero users. (To help jog people's memories: Netscape Communications -- as "Mosaic Communications Corporation" -- was not founded till April 1994. AltaVista opened to the public in December 1995.)

We've gone very far from the topic of this comment section.


I laughed.


Besides the fact that PowerPC is still very common in embedded systems (and provided the CPU in all three of the last generation of game consoles), POWER != PowerPC. IBM Watson was a POWER7 system, as are a bunch of the top 500 supercomputers and many many large business big iron servers running AIX.


Currently 5 out of the top 10 supercomputers are POWER: http://top500.org/


The Xbox 360, Wii, Wii U, and the PS3 all use PowerPC chips.


Right, but the Xbox One and PS4 are x86. PowerPC isn't nearly as dominant in the console market as it once was.


It's completely dominant in the console market. That is going to change with the new generation, though, but if you go buy a console today, you're buying an IBM PowerPC.


That's PowerPC-based chips, not actual off-the-shelf PowerPC CPUs. Big difference.

Also, the main thing with the PS3 was the Cell architecture which was going to appear in every type of device from TVs to mainframes and make the rest of the processor industry obsolete.


ARM chips outsell x86 chips by at least a factor of two, and I'm pretty sure more than that. For each Intel desktop/laptop there are twenty ARM chips in microwaves, cars and airplanes.

In the beginning it had a lot to do with toolchain support, these days I think it's a combination of force of habit and the fact that you can fry an egg on an x86 floating point unit.


From what I know of the space, ARM is not used very much at all in cars or aviation. These are both areas where PowerPC dominates. I'd guess most modern cars have at least a dozen PowerPC chips.

And both ARM and PowerPC outsell x86 by a hell of a lot more than a factor of 2. It's at least a factor of 10, and that's almost certainly low too. x86, in terms of units sold, is an extremely small market.


You're right, I meant to write PowerPC.


At first, I took "factor of 10" for a binary joke :)


Not so much in cars and airplanes, but somewhat ironically, a typical x86 PC has many more ARM cores than it has x86 cores. My hard drive has 3 ARM cores, my SSD has 2, my sound chip has one, my network chip has one...


More like a factor of 10.


They still show up in HPC systems from time to time. It's a nice architecture.


From time to time? So 5 out of the top 10 on the top500 is from time to time?


If I weren't tied to consumer electronics I'd use POWER in a heartbeat.


AIX and IBM i (the AS/400 line).

Big data still requires a lot of power to move it.


I would love to see a Raspberry Pi type device with a PPC chip on it, a 603e equivalent for example, or even better, a 7400-series "G4" chip.


FWIW: http://www.ebay.com/sch/Desktops-AllInOnes-/171957/i.html?_s...

Costs more than the Raspberry Pi though.


Watch out for used G5s that have leaked their coolant.


Just avoid the G5s altogether. They're enormous, hot, and very poor value for money, because they're the fastest Macs that can run Classic, which means they continue to get bid up by people with legacy software. For the same money you can get a Core 2 Duo which will be superior in every way, provided you don't need to run MacOS on it.


Get out your time machine and you can get a BeBox: http://en.wikipedia.org/wiki/BeBox - note the GeekPort.

Admittedly it was a smidgeon more expensive than the Raspberry Pi.



Which has been replaced by: low-cost, low-power, ARM-based computer...


The problem is that the PPCs with decent performance are ludicrously expensive in small volumes - you can "easily" spend $1000 for relatively anaemic PPC models - so a manufacturer would need to either be able to bet big, or costs are going to be crazy. ARM is much simpler in that respect because of the much wider choice of manufacturers targeting the low end of the market.


Is there any reason why this is necessarily the case? Also if these are being used in embedded applications, I highly doubt it costs $1,000 per unit in quantity.

They're making a kajillion PPC-based processors for the Xbox 360, PS3, and Wii, so it's not like they don't have designs for current manufacturing processes. The chip in the 360 can't cost more than $50 today, and if stripped down (2 cores vs. 3) and clocked less aggressively (1 GHz vs. 3.2 GHz), it could probably be cut to $25 or less.



There was even a version of Windows NT 4.0 for that. It was great. Then Motorola decided they had no clue what they were doing with computers, and IBM gave up and shut the whole thing down.


All of the RISC chips were dying anyway. Intel was starting to build steam with the Pentium Pro and Pentium II, Wintel servers were getting cheaper and more common, Windows was becoming 'good enough', and Linux started to push the expensive traditional Unixes off the table.


That's really just the standard Innovator's Dilemma effect where low-cost, high-volume products often tend to eat their way up the market. Now that the semi-RISC ARM processor is attacking Intel from the low end it's having the same sort of success as x86 did over MIPS, etc.


Cheap Mips and other RISC chips were supposed to replace all the clunky old x86 style processors, so it's one case where the Innovator's Dilemma effect turned out to be wrong.


Neither RISC nor CISC won in the end. Intel's current-generation processors, like most others of similar performance specifications, break down the incoming instruction stream into micro-ops (http://en.wikipedia.org/wiki/Micro-operation), the units of processing that are actually executed. This renders the difference between CISC and RISC a case of semantics.
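
To make that concrete: a read-modify-write x86 instruction like "add [mem], reg" is typically cracked into a load, an add, and a store, which then flow through the same out-of-order machinery a RISC instruction would. A toy model in C, purely illustrative (real decoders look nothing like this):

  /* Hypothetical micro-op representation for "add [mem], reg". */
  enum uop_kind { UOP_LOAD, UOP_ADD, UOP_STORE };

  struct uop {
      enum uop_kind kind;
      int dst, src1, src2;           /* register numbers; 100 = a temporary */
  };

  static const struct uop add_mem_reg[3] = {
      { UOP_LOAD,  100, 0,   0 },    /* tmp   <- load [mem]  */
      { UOP_ADD,   100, 100, 1 },    /* tmp   <- tmp + reg   */
      { UOP_STORE, 0,   100, 0 },    /* [mem] <- store tmp   */
  };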

Before this, the idea was that RISC was simpler to implement, could be optimized more easily, and would ultimately be more cost-effective. What wasn't factored in was how good Intel is at optimizing, and how hard they'd push their process, beating the RISC side despite all the disadvantages CISC had.

Now it's the GPU that's eating Intel's lunch: high-performance floating point code on the CPU is orders of magnitude slower than on a high-end GPU, so Intel's trying to fight back with their "pile of CPUs" strategy (http://en.wikipedia.org/wiki/Larrabee_(microarchitecture)). It's not working out very well so far.


In defence of Intel here, if you look at performance per watt, GPUs aren't all that far ahead of Intel. It's mostly that every instruction a modern CPU executes is predicted by a branch predictor, makes its way through several levels of cache, and is run through a reorder buffer, register renaming and a reservation station before finally being executed. All of that takes energy, though it speeds up the rate at which sequential instructions can be issued by the processor quite a bit.

As to RISC vs CISC, well, it's true that x86 instructions are decoded to micro-ops inside a modern processor, but the fact that the instruction was complicated does have a cost even for a modern processor. The act of just decoding four instructions in a clock cycle and transforming them into micro-ops is quite a bit of work, on the same order as finally executing them if they're simple additions or such. And the micro-ops that make up an instruction have to be completed all together, or else when the processor is interrupted by a page fault or such it will resume in an inconsistent state. And the first time you run through a segment of code you can only decode one instruction at a time, since figuring out where instruction boundaries are is hard, though you can store the location of those boundaries with just another bit per byte once they're in the L1 instruction cache.

On the other hand, complex variable-length instructions mean that you don't need as many bytes to express some piece of code, both because you're using fewer bytes per instruction on average and because complex instructions mean you sometimes use fewer of them.

Of course, Intel is the biggest CPU vendor out there and has correspondingly large and brilliant design teams working hand in hand with the most advanced fabs in the industry.

Now, there are many RISC instruction sets that have taken on x86 before, but they all attacked it from the high end, from upmarket - just the opposite of what ARM is doing now. Will it succeed in dethroning x86 from the low end the way x86 did to its rivals? Who knows. But I think that previous fights don't tell us much about this one.


Quite. But there was plenty of "Intel is doomed" hype in the 1980s when RISC chips first appeared. Indeed, Microsoft didn't write either of its home-grown operating systems -- NT and CE -- on x86 processors.

Of course, "Intel is doomed" (and "Microsoft is doomed") have been staples of clueless fanboy hype for 40 years. I'm still waiting for one of them to be right....


The next couple of years are going to be interesting. Intel's been winning for so long that it seems they're immune to the innovator's dilemma. It will be interesting to see if the ARM platform will break into the mainstream of desktop/server computing, or if Intel will prevail and have 80% of the desktop and mobile market.


How good is ARM at running just generic ARM binaries? There are all the custom hardware parts, which we can ignore, but can I build a 32-bit ARM binary that will run on a wide range of ARM cores with good/great performance?

Historically, it has been my experience that on pretty much all non-x86 platforms, compiler and hardware-specific optimizations tend to have a pretty dramatic impact. Intel just has so much code and so many existing code streams to factor into their designs for new hardware. Maybe this has changed. It's a hard road if mismatched or non-hardware-optimized binaries are slow and pokey while hardware-optimized binaries are competitive. Come out with a great 64-bit ARM core that can run nearly all ARM binaries with decent performance (clearly excluding stuff that needs custom hardware) and ARM could be pretty disruptive.


ARM realized that this was a problem when they got into smartphones, and while the lineup was a total and complete mess in 2008, their modern high-end chips actually provide a pretty uniform experience.

The half-watt microcontroller replacements still need custom builds, but the chips used in top-line smartphones can now all run the same compiled OS and apps. They are going to do a 64-bit transition soon; it will be very interesting to see how that turns out.


Historically the big thing has been the variety of floating point units — nowadays VFP3 is pretty much a de-facto standard on the high-end ARM chips (it's required from Cortex-A8 onwards in the application profile), and what's done on Android (where you have a huge diversity of hardware, some with FPUs, some without) where performance matters is you ship one hardfloat binary and one softfloat binary.
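
In practice that means building the same source twice and shipping both. A sketch with standard GCC flags (the toolchain names are assumptions; __ARM_PCS_VFP is the macro GCC defines under the hard-float ABI, so a program can check which ABI it was built for):

  /* Build the same file twice:
   *   arm-linux-gnueabi-gcc   -mfloat-abi=softfp -mfpu=vfpv3 -o app.soft app.c
   *   arm-linux-gnueabihf-gcc -mfloat-abi=hard   -mfpu=vfpv3 -o app.hard app.c
   */
  #include <stdio.h>

  int main(void)
  {
  #ifdef __ARM_PCS_VFP
      puts("hard-float ABI: float args passed in VFP registers");
  #else
      puts("soft-float ABI: float args passed in core registers");
  #endif
      return 0;
  }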


Man, talk about rubbing salt in an (old) open wound. I was SO excited when they announced that, and was really hoping that we'd get cheap, widely available Power based motherboards in a standard form-factor, capable of running Linux or BSD or whatever, etc.

Yeah, no.

Some more competition for Intel x86 and some widespread availability of Power machines (that don't cost a bazillion dollars) has felt like a pipe dream for years, and I'm not optimistic now...


Chipzilla = Intel.


The final paragraph sounds like an answer to the AMD/ARM-driven HSA Foundation.


I assume this is mostly targeted at the very high end - servers and supercomputers and such.


The vast majority of Power Architecture CPUs, by units shipped, go into lower-end devices, so I'm not so sure that's the only area they'd target.



