I think this article paints a rosy picture of the PowerPC. I was a Mac user and owned G3 and G4 Macs (and a PowerPC 603 Mac). It wasn't a happy time that suddenly came to an end with the G5 and a decaying relationship; IBM and Motorola had been struggling to keep up with Intel for a long time. Apple kept trying to spin it and the next great thing was always just around the corner...the problem is that Intel kept getting there faster and cheaper.
Apple would talk about the "MHz-myth" a lot. While it's true that MHz doesn't equal performance, Intel was doubling the PowerPC's performance most of the time. The G3 saw Apple do OK, but then Intel went back to dominating in short order. The PowerPC never matched Intel again.
It was really bad. People with Windows computers just had processors that were so much more powerful and so much cheaper.
You can say that Apple always charges a premium, but not too much today on their main lines. Apple simply doesn't sell low-end stuff. Yes, a MacBook Pro 2GHz costs $1,800 which is a lot. However, you can't compare it to laptops with crappy 250-nit, 1080p screens or laptops made of plastic, or laptops with 15W processors. A ThinkPad X1 Carbon starts at $1,553 and that's with a 1080p display rather than 1600p, 400 nits rather than 500 nits, 8GB of RAM rather than 16GB (both soldered), and a 15-watt 1.6GHz processor rather than the 28-watt 2GHz part. Heck, for $1,299 you can get something very similar to the ThinkPad X1 Carbon from Apple (though with a 1.4GHz processor rather than 1.6GHz) - $250 cheaper!
The point of this isn't to say that you can't get good deals on Windows computers or that there's no Apple premium or even that there's any value in Apple's fit-and-finish that you're paying for. This is to say that I remember things like the original iMac with CRT display, 233MHz G3 processor, 13" screen (when people wanted 15-17" screens), and an atrocious mouse going against Intel machines for half the price with nearly double the speed and better specs on everything other than aesthetics. Things were really bad trying to argue that someone should spend $1,300 for an iMac when they could get a Gateway, eMachine, Acer, etc. for $600 with a 400MHz processor rather than 233MHz. A year later, Apple's at 266MHz while Intel has released the Pentium III and is cranking it up from 400MHz to 600MHz that year.
Yea, you can point to $700 laptops today and say, "why buy an Apple for $1,300?" Sure, but at least I can say that the display is so much better (500 nits vs 250 nits and retina), it's lighter than those bargain laptops, fit-and-finish is so much better, etc. At least I'm not saying, "um, no...all those benchmarks showing the Windows machine twice as fast...um...and the mouse is so cool because it's translucent...you get used to it being terrible." It's very, very different from the dark days of 2000.
Plus, today, a price premium doesn't seem as bad. Back in 2000 when you thought you'd be upgrading every 2-3 years, you'd be shelling out a lot more frequently. If performance doubled every 18 months, 3 years later you'd be stuck with a computer running at 1/4th the speed of something new. With the slowdown in processor upgrades, paying for premium hardware doesn't seem like throwing money away in the same way.
The article also paints the RISC architecture as superior. I'm not a chip expert, but most people seem to say that while RISC and CISC architectures have a different history, modern CPUs are hybrids of the approaches without huge advantages inherent in their ideology. Frankly, if Intel were able to get down to 7nm and 5nm, Apple might not be looking at ARM as strongly.
I think it also paints Apple as some sort of more demanding customer. In some ways, sure. Apple likes to move things forward. However, it's not like MacBooks are that different from PC notebooks. The difference is that Apple has options. They can move to another architecture. Windows manufacturers don't really have that. Sure, Windows on ARM has been a thing, but Microsoft isn't really committed to it. Plus, Windows devs aren't as compliant when it comes to moving architectures so a lot of programs would be running slowly under CPU emulation.
The big issue is that Intel has been stuck for so long. Yes, they've shipped some 10nm 15-watt parts and even made a bespoke 28-watt part for Apple. It's not enough. I'd argue that PC sales are slow because Intel hasn't compellingly upgraded their processors in a long time. It used to be that every 18 months, we'd see a processor that was a huge upgrade. Now it's 5 years to get that upgrade.
There's a trade-off between custom products and economies of scale. With the iPhone using so many processors and TSMC doing so well with its fab, Apple now kinda doesn't have to choose. Intel has been charging a huge premium for its processors because people were locked into the x86 and it takes a while for new competition to happen. Their fabs have fallen behind. It looked like they might be able to do 10nm and move forward from that, but that doesn't seem to be working out too well for them.
The transition from PowerPC to Intel was about IBM and Motorola not being able to deliver parts. They were falling behind on fabs, they weren't making the parts needed for Apple's product line, and it was leaving Apple in a position where they simply had inferior machines. The transition from Intel to ARM is about Intel not being able to deliver parts. It wasn't simply a short time when they couldn't deliver enhancements, but a decently long trend on both counts. Apple knows it can deliver the parts it wants with its own processors at this point. The iPhone business is large enough to ensure that and they can make laptop parts that really fit what they're trying to market. Intel got Apple's business because they produced superior parts at a lower price. They're losing Apple's business for the same reason.
> This is to say that I remember things like the original iMac with CRT display, 233MHz G3 processor, 13" screen (when people wanted 15-17" screens), and an atrocious mouse going against Intel machines for half the price with nearly double the speed and better specs on everything other than aesthetics. Things were really bad trying to argue that someone should spend $1,300 for an iMac when they could get a Gateway, eMachine, Acer, etc. for $600 with a 400MHz processor rather than 233MHz. A year later, Apple's at 266MHz while Intel has released the Pentium III and is cranking it up from 400MHz to 600MHz that year.
There are a lot of inaccuracies in your memories. The original iMac came out in 1998. At the time Apple’s G3 processors were very competitive with everything offered by Intel:
And you were not getting double-performance machines for half the price of the iMac. You are also incorrect about the performance situation one year later, which would be 1999. Yes, that was the year the top of the line Pentium III came out at 600 MHz, but it was also the year that the top of the line Power Mac G4 came out at 500 MHz (a machine I owned; it was delayed because yields on the 500 MHz model were poor, and we were offered a 450 MHz part as a replacement). The G4 500 was superior to the P3 600 in many benchmarks, and crushed it in others thanks to the AltiVec vector unit:
I just linked to one pretty biased site here (first that came up on Google). But your extraordinary claims need some source because I don’t remember it like that at all in 1998 and 1999. PPC was a solid contender throughout the late 90s.
Also the original iMac had a 15” screen, not 13”. I had an iMac too.
I do remember it like GP. The G4 Power Mac came out in '99, and while at 500 MHz it could beat a 600 MHz Pentium, Intel also released an 800 MHz part that year. AMD would release a 1 GHz K7 within the year as well.
So yeah, the G4 was perhaps winning the IPC battle, but Intel and AMD were more than making up for it with higher frequencies.
That's exactly when Motorola dropped the ball (as the article mentions) - a year before, they were still more or less head to head, but once AMD and Intel started reaching for the 1 GHz barrier they left Motorola and IBM behind.
Somehow the various RISC vendors managed to remain competitive for a bit longer at the top end[0] - maybe it was easier to compete at the high workstation/server/supercomputer end than at consumer hardware?
"We are starting to see some great games come back to the Mac, but this is one of the coolest I've ever seen...this is the first time anybody has ever seen it, the first time they've debuted it...Halo is the name of this game, and we're going to see, for the first time: Halo."
> I'd argue that PC sales are slow because Intel hasn't compellingly upgraded their processors in a long time. It used to be that every 18 months, we'd see a processor that was a huge upgrade. Now it's 5 years to get that upgrade.
I think it’s Moore’s law approaching its limits, and the direction of chip improvements isn’t core speed but the number of cores, power usage, etc. Those make a difference, but for most folks it doesn’t feel like an improvement the way, say, doubling the frequency every 18 months did. I have an old laptop and it keeps up quite nicely after 8 years...
This is me today. I'm typing this on an 8 year old macbook pro. First gen retina. 4-cores and 16GB of ram. I want to upgrade, I really really do. I have a 16" with 8-cores and 64GB of ram at work, but I can't bring myself to purchase one for myself since I've been telling myself I would wait for 10nm.
The first 14nm processors started shipping in the 15" in 2015 - 5 years ago.
My current 8 year old macbook has a 22nm processor. I would never have thought 8 years ago that Intel would only have managed a single node shrink since then.
>I have a 16" with 8-cores and 64GB of ram at work
I'm jealous and genuinely curious: where do you guys work that your employers can afford to get everyone such expensive machines?
I've been a dev in the EU for 8 years now and at most places I've worked or interviewed (not FAANG) the machines you get are some cheapo HP/Lenovo/Dell, with only the executives having Apple hardware.
I never understood why companies in the West cheap out on hardware so much. Compared to the cost of office rent and employee salaries it's a drop in the ocean; they could buy everyone MacBooks or Ryzen towers and it wouldn't even dent their bottom line.
Silicon Valley startups are pretty much, hey what do you want as far as laptop specs go? At my company new hires can pretty much order anything they want (which is mostly MacBook Pro max CPU and RAM) and one or more monitors, no issue. Heck we have one or two crazy people with Windows laptops :) Old employees are welcome to refresh at 2 years no issues. I am on my 3rd laptop in 5 years. In my case I travel so I have swapped MacBook Pro for a MacBook then a MacBook Air. I have the same Apple 4K LG on my desk at home and work paid for by the company. Same mechanical keyboards also.
This is pretty typical here.
Heck we swap build servers every 6-12 months based on test speed. We buy one of every new CPU and do a test run of auto build and ptest. If it is reasonably faster we order a new rack and replace the old boxes. Power and cooling is way more $$ than the HW. Every month we do not need a new cage is a win. We are deploying AMD Epyc now with 10G-T + 4x1G (not network limited in our test, just segmentation to test DUTs) with a core of 100G for fileservers and 25G for services. File servers are TrueNAS SSD shelves for test and with spinning rust for build artifacts. We run ~1000 containers on a single server in ptest (scaling that was fun to figure out ... hint you have to play with networking stack ARP timers - strace is your friend).
Big companies too. It’s too important not to. I’ve been in companies where employees constantly complain about their machines, and it legitimately causes people to leave their jobs. I’ve seen people offer to buy their own laptop, if they were just allowed to use it.
In a place where talent is as competitive as the bay, you wouldn’t survive making people use subpar machines.
There are a few places that kind of go 'to the nines' for employees and give adequate and even overpowered workstations. I think the crowd that gets that treatment is slightly over-represented on HN.
But most businesses here are the same: you're lucky to get any nice feature over the 'same laptop that sales gets', which is barely more than a Chromebook. And getting an external monitor that's not the cheapest bulk-buy model was also pretty hard to do (I had a friend in marketing who helped me get a larger display with better colors at that place).
Or you go self employed and get your own fancy workstation since you know it’s easily worth your money in the long run. Don’t work for people that don’t understand this.
The rule of thumb is that a typical dev here is 250k fully loaded (benefits, office, etc). When you are trying to hire and you are competing with Google and Facebook, that extra 2K on the laptop kit is a rounding error. It’s hard to hire, as FAANGs are a sure thing money-wise.
I am in the UK, and I've only found startups willing to spend money on developer hardware. The larger companies seem to pick up low-end Lenovos for developers, and better i7 Lenovos for those managers who would never use them.
Though I would gladly dump the MacBook Pro 16" I currently have to use for work in an instant for a high-end Lenovo/Dell. Apart from macOS being extremely flakey these days (why does Spotlight only seem to pop up 50% of the time?), I don't understand why they don't provide a proper ISO layout and instead give us some form of ANSI that has a dedicated dancing-alien (§) key, and why they hide the #, which as a Pro developer I use all the time.
That it also spends its entire time overheating so it burns my lap is just the icing on the cake.
No, switching the charging did nothing for me or any of my colleagues.
But when I used a MacBook Pro about 10 years ago the machine overheated all the time and burnt my lap. They are just a shitty design, but they look pretty.
If you are being paid 30k, a 2K upgrade would be worthwhile if it resulted in a 6% increase in productivity. At 60k it would be justified by a 3% increase in productivity. Maybe your company just has no meaningful way to measure or understand what affects productivity.
Hm, when I worked for a public lab in France we got the "pro" line Dell laptops. We actually got complaints from users that the software was slow, because we only ever tested it on high end machines. Later, when I worked for a startup, they gave us a choice of a reasonable workstation, though that was in the beginning when the company was doing well.
I think if you have any way of escalating then the best thing is to come up with numbers, such as "a faster computer would let me compile the program in 20 less seconds, which is this much time earned", or "a better screen would not require me to have an external monitor".
Now, I think that getting a 16" MBP for work requires quite a specific use case, because it's really a machine one should use only when it's the only computer they have. For the same price I think you could get a faster desktop plus a more portable laptop.
I do contracting for a professional services outfit, and they gave me the Windows corporate laptop (Dell Latitude) and also sent me a MacBook Pro 16", which my manager had to explicitly ask for.
The only difference? I can receive encrypted emails only on the Windows laptop due to some software not being available for the Mac.
I do all my development on macOS, and if spending $4k on a machine means I am more productive, can get work done faster, it's a return on investment that pays back multiple times.
Do note that laptop refreshes in my past companies (current is ~2 years) have been on average every 2.5 year... so it's not like I get a new laptop yearly.
That Dell Latitude is frustrating to use. The trackpad is absolutely atrocious, the display is dim, the keyboard is really mushy and causes pain in my hands when I use it for short periods of time...
Had a similar situation (a Windows and a MBP) and just putting VMWare and Windows 10 on the MBP solved pretty much all the problems of having to lug around two machines.
There's a Citrix setup as well, which while slow works fine for the one or two times a month that I need access to encrypted email... so I haven't carried around the Windows laptop.
Most of the top Finnish software consultancies have a (basically) unlimited budget for your main laptop and give you freedom to pick whatever you want.
My company is working on upgrading all of us to that config. Currently we mostly use the 2013 13", and people get upgraded when that one burns out. I'm debating asking for a halfway trade, getting a nice Thinkpad X1 and being able to use my favorite linux tools on it.
I'm in the US, but I've been in a similar position to you; my last job gave me a tower with 8GB of ram and a Core 2 Duo running Windows 7 32-bit. Utterly useless. I had to sneak Ubuntu onto it when nobody was looking. They couldn't even tell the difference.
I ended up having the last laugh when every computer in the office got wiped because somebody decided running the mail server on our ActiveDirectory server was a good idea, and also thought that leaving ports open on the mail server was a good idea.
My boss is very nice. He treats every person as an individual. Also we are fairly small.
Last place I worked I got an HP Windows desktop and told to remote in from a 12" laptop whenever I needed to work from home or show off something in a meeting. That place also wanted to knock down three single-person offices so they could fit 10 developers in the same space. And for the last month or so I had a developer working next to me on my own desk since management didn't prioritise us developers over HR.
My current workplace and my previous are both in the public sector in Norway.
The big subjective improvements appear to have been in screen resolutions/sharpness and in SSDs. An 8-year-old LCD will likely have dimmed substantially.
On the other hand if you like it, you like it, and cheap is beautiful.
It won't :) It has an LED backlight. Older LCD screens often had fluorescent backlights, which will and did degrade after hours of use. According to Wikipedia, LEDs have been the most popular backlight in LCD screens since 2012.
The idea that LEDs last forever is a myth. LEDs degrade over time. They actually list it on the spec sheet. For example, an L70-rated LED with a 25k-hour life will produce 70% of the light it produced when new after 25k hours (roughly 8-9 years at eight hours a day).
Recently I replaced 4 Asus monitors with led backlights that were produced in 2014 and 2015. Asus says 300 nits. Tested them when I was calibrating their replacements and they were 110-120 at full brightness.
Monitors color shift and dim as they age... that’s why hardware calibrators exist.
Yes, Moore's law is hitting its limits: a 5nm distance is only about 25 silicon atoms across. At that scale, quantum electron tunnelling will ruin your life.
So if we want to increase our chips' performance any further, we need a fundamental change in our technology. And since a silicon atom is about 210 pm across and a carbon atom (graphene is made from carbon) is about 170 pm, we need to change our architectures rather than keep shrinking transistors, which will not be possible for much longer. I mean CISC/RISC, dropping x86 support and so on.
Blaming Moore's law is not fair, since Apple, AMD, NVIDIA, Qualcomm, HiSilicon, etc. have delivered reasonable improvements over these 3 years while only Intel is stagnant.
To be fair, most of that is them catching up to where Intel already was. No one seems to actually be fabbing meaningfully smaller feature sizes (or higher layer counts) than what Intel is stagnating at.
> To be fair, most of that is them catching up to where Intel already was. No one seems to actually be fabbing meaningfully smaller feature sizes (or higher layer counts) than what Intel is stagnating at.
Eh, Intel's 14nm is 37M transistors/mm2. TSMC and SS are both up to 52M/mm2 at 10nm, and 92M/mm2 at 7nm. Both Apple and AMD's latest gen stuff is on TSMC's 7nm process _today_. Yes, Intel's 10nm is at 101M/mm2, but until they can get mass production on that they're falling substantially behind.
If you look at long-term trends, transistor density has kept pace (it has slowed consistently but not dramatically over the years); the big difference is that it no longer gives you as much of a performance boost as it used to.
The difference is that ARM has been able to deliver desktop-grade performance at power levels that are suitable for use in an iPad.
Intel and AMD might be able to deliver somewhat higher performance by throwing a whole bunch of cores at the problem, but they do so at a much higher cost in power requirements. And it would be easy enough to design ARM machines with an equal number of cores (or even way more), and still have much lower power requirements.
Intel stagnated, and has high power requirements. ARM has caught up, and has much lower power requirements.
Sure, but that's them having a competent (well, less incompetent) ISA and microarchitecture; that they've made better use of the transistors available, not that they've achieved better feature density than what would be expected from where they are on the Moore's law curve.
Also Intel and AMD have not delivered higher performance via more cores; they delivered lower price for (say) 64 cores worth of performance, by putting them all on the same chunk of silicon (edit: or at least in the same package). (There are some slight improvements in inter-processor interrupt and cache-forwarding latency, but if that's a performance bottleneck, the problem is bad parallelization at the code level.)
> Sure, but that's them having a competent (well, less incompetent) ISA
Have you looked at the encoding of Thumb-2 (T32) and particularly A64 (their newly designed instruction set for 64 bit)? Their instruction encoding is in my opinion much more convoluted than x86.
> they delivered lower price for (say) 64 cores worth of performance, by putting them all on the same chunk of silicon.
Arguably AMD did the exact opposite - lower prices via splitting a processor into multiple pieces of silicon. (Chip prices scale exponentially with area at the high end.)
Well, my point there was that a 64-core CPU is not (significantly) higher performance than 64 single-core CPUs, so multi-core is - if an improvement at all - a price improvement, not a performance improvement, but fair point about the price-vs-area scaling.
> The transition from PowerPC to Intel was about IBM and Motorola not being able to deliver parts.
Actually, it was about nobody wanting to deliver a Northbridge for Apple.
I interviewed at Apple in this timeframe and was stunned that they used Northbridge ASICs with synchronizers everywhere. No clock forwarding to be found.
This kills memory and graphics performance dead.
On top of that the support ASICs were using more power than the CPU!
Once Apple switched to x86, they could use the Northbridge and Southbridge chips that everybody in the universe was using.
>The big issue is that Intel has been stuck for so long
My memory isn't so good, and I never owned a P4, but according to Wikipedia in August 2001, they released one at 2GHz on 180 nm. Almost 20 years later, my laptop i7 is running at...2.1GHz (base)? And 14 nm. That's kind of mind boggling. I think it would be interesting to read/write an article comparing the two in depth and what performance benefits you get from the newer chip.
Except that instead of 1 core, it now has 4 CPUs and a GPGPU as part of it. GHz aren't everything; the problem is that most programs are still written single-threaded.
The frequencies may not have increased beyond 2-3 GHz, but processors have still gotten faster because modern processors are much smarter and are able to do more work per cycle. They have all sorts of fancy tricks to do that - speculative execution, hyperthreading, branch prediction, etc.
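A minimal sketch of one of those tricks, branch prediction (my own illustration, not from the comment above): the same filtering loop typically runs noticeably faster over sorted data, because the branch outcome becomes predictable.

```c
/* Hedged demo: sum only the "large" elements of an array, timed over
 * unsorted vs. sorted data. Sorting makes the branch predictable, so the
 * same work usually runs noticeably faster. Compile with a modest
 * optimization level (e.g. gcc -O1 branch_demo.c) so the compiler does
 * not replace the branch with branchless code. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)   /* ~1M elements */

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

static long long sum_large(const int *data, int n) {
    long long sum = 0;
    for (int i = 0; i < n; i++)
        if (data[i] >= 128)   /* this branch is what the predictor learns */
            sum += data[i];
    return sum;
}

static double time_passes(const int *data, int n) {
    clock_t start = clock();
    volatile long long sink = 0;
    for (int rep = 0; rep < 100; rep++)
        sink += sum_large(data, n);
    (void)sink;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    int *data = malloc(N * sizeof *data);
    if (!data) return 1;
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;            /* values 0..255, threshold at 128 */

    double unsorted = time_passes(data, N);   /* branch outcome ~random */
    qsort(data, N, sizeof *data, cmp_int);
    double sorted = time_passes(data, N);     /* branch outcome predictable */

    printf("unsorted: %.3fs   sorted: %.3fs\n", unsorted, sorted);
    free(data);
    return 0;
}
```

On most machines the sorted pass is several times faster at modest optimization levels; at -O3 the compiler may emit branchless code and the gap largely disappears.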
There's a very widely used measure for this in the academic community: instructions per cycle (IPC). IPC boils down to how many instructions you can actually complete per clock cycle once you account for memory, caching, etc. (e.g., a core that retires 8 billion instructions in 2 billion cycles is averaging an IPC of 4).
IIRC, that's maybe a 16x improvement (32x if you count 32->64 bit). Which accounts for less than half of the (orders of magnitude of) improvement we should have got from Moore's law.
(More cores aren't a performance improvement; if you were willing to deal with non-serial execution, you could have just bought 32 Pentium Fours; putting them all on the same chip is convenient (and cheap), but as a price/performance improvement, it's all price, no performance.)
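As a rough back-of-the-envelope on that gap (my own arithmetic): a doubling every 18 months from 2001 to 2020 would be about 19/1.5 ≈ 12-13 doublings, i.e. several thousand times the transistors, while a 16-32x IPC gain is only 4-5 doublings; well under half the doublings Moore's law would have suggested.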
> (More cores aren't a performance improvement; if you were willing to deal with non-serial execution, you could have just bought 32 Pentium Fours; putting them all on the same chip is convenient (and cheap), but as a price/performance improvement, it's all price, no performance.)
That's only true if you only consider ALU throughput for performance, but in terms of real world performance, where the interconnect between cores and memory is hugely significant, a multicore processor has many advantages over a rack of otherwise equivalent single-core NUMA nodes.
My guess is that there are now a lot of forms of hardware acceleration of specific things that make your daily experience seem faster, but I haven't seen them catalogued and put in perspective with measurements.
I haven't read about the P4 and NetBurst in a long time, but if my memory serves me right, the P4 usually achieved less than 1 IPC due to its very long pipeline (31 stages IIRC) that was very prone to pipeline stalls. Modern processors can also do many things faster. I think the P4 took ~110 cycles for integer division, while a modern CPU can do it in ~30-40 cycles. And IIRC, the P4 could not flush the division unit, so if a branch/jump prediction was wrong and a division was speculated, it had to wait until the division unit finished its computation before it could resume execution.
The biggest advantage of RISC-derived designs is easy-to-parse instructions. The problem with x64 is not the number of instructions, as they can be thought of as macros anyway, but all the different instruction lengths and encodings. This makes decoding a bottleneck and a source of overhead that ARM and other newer designs do not have.
Easy to parse instructions? I mean Thumb-2 is very much a variable-length encoding, though okay there's no equivalent AArch64 instruction encoding, but basically all AArch64 implementations still support Thumb.
All the modern research suggests very much that the decode stage isn't a significant difference nowadays; instruction density is increasingly significant as CPUs become ever faster compared with memory access.
The problem is that this calculation takes more cycles, and you do not know where the next instruction starts until it completes. It serializes what should be a parallel process. x64 chips use crazy hacks like caches and tables to work around this, but these add more transistors and power consumption.
In processors, I-cache and decode consume a disproportionate amount of power relative to their size. I'd also note that all the latest high-performance ARM chips include an instruction decode cache because the power cost of the cache plus the lookup is lower (and much faster) than a full decode cycle. Of course, there are diminishing returns with cache size where it becomes all about bypassing part of the pipeline to improve performance despite being less power efficient.
x86 instruction size ranges from 8 bits to 120 bits (1-15 bytes). Since common instructions often fit in just 8 or 16 bits, there are power savings to be had due to smaller I-cache size per instruction. That comes at a severe decode cost though as every single byte must be checked to see if it terminates the instruction. After the length is determined, then it must decide how many micro-ops the instruction really translates into so they can be handed off to the scheduler.
ARM breaks down into v7 and v8. The original thumb instructions were slow, but saved I-cache. Thumb 2 was faster with some I-cache savings, but basically required 3 different decodes. ARMv8 in 64-bit mode has NO 16-bit instructions. This reduces the decoder overhead, but obliterates the potential I-cache savings. No doubt that this is the reason their I-cache size doubled.
RISC-V is not being discussed here, but is the most interesting IMO. The low 2 bits in an instruction tag it as 32-bit or 16-bit (there are reserved bit schemes to allow longer instructions, but I don't believe those are implemented or are likely to be implemented any time soon). This fixed bit pattern means that length is statically analyzable. 3 of the 4 patterns are reserved for 16-bit use which reduces the instruction size penalty (effectively making them 15-bit instructions). The result is something 3-5% less dense than Thumb-2, but around 15% MORE dense than x86, all without the huge decode penalties of x86 or the multi-modes and mode-switching of Thumb. In addition, the effect of adding RVC instructions reduces I-cache misses almost as much as doubling the I-cache size, which is another huge power consumption win while not having a negative impact on overall performance either (in fact, performance should usually increase for the same I-cache size).
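As a minimal illustration of how cheap that length check is (my own sketch, assuming the standard RVC rule where the low two bits of the first halfword distinguish 16-bit from 32-bit instructions):

```c
/* Sketch of the RVC length rule: encodings whose low two bits are 00, 01
 * or 10 are 16-bit compressed instructions; 11 marks a 32-bit (or longer)
 * instruction. The length is known from a single halfword, with no
 * byte-by-byte scanning as on x86. */
#include <stdio.h>
#include <stdint.h>

static int rv_insn_length(uint16_t first_halfword) {
    return ((first_halfword & 0x3) == 0x3) ? 4 : 2;
}

int main(void) {
    uint16_t examples[] = {
        0x4501,  /* c.li a0,0 - a compressed (16-bit) instruction */
        0x0513,  /* low halfword of addi a0,a0,0 - a 32-bit instruction */
    };
    for (int i = 0; i < 2; i++)
        printf("0x%04x -> %d bytes\n", examples[i], rv_insn_length(examples[i]));
    return 0;
}
```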
I think the whole idea of measuring computer screens as 1080p, 900p and such is completely idiotic. Computer screens should have some sane DPI values and scale the resolution according to the size. This is how it was before the attack of 16:9 screens, and Apple is the only company still following that old trend. Even the praised System76 has the same crappy 16:9 screens.
I'm tempted to agree with you, but at the same time, I think the appropriate value for a sensible DPI depends on screen size. Because smaller screens are typically held closer to the user, it's kinda fair to say a 4k TV and a 4k phone screen have the same resolution, in the sense that if both are at a distance so the screen takes a reasonable fraction of your vision, they will have the same level of visible detail. Within a device category that may be less true, but between categories resolution seems like a reasonable measure.
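To put rough numbers on that angular-detail argument (my own illustrative sketch with approximate figures, not the commenter's): pixels per degree of visual angle for a screen of a given width, horizontal resolution and viewing distance.

```c
/* Rough illustration: pixels per degree of visual angle for a screen of a
 * given physical width, horizontal pixel count and viewing distance.
 * Link with -lm. */
#include <stdio.h>
#include <math.h>

static double pixels_per_degree(double width_in, double h_pixels, double distance_in) {
    /* total horizontal angle subtended by the screen, in degrees */
    double angle_deg = 2.0 * atan((width_in / 2.0) / distance_in) * 180.0 / M_PI;
    return h_pixels / angle_deg;
}

int main(void) {
    /* a 65" 16:9 4K TV (~56.7" wide) viewed from about 8 feet */
    printf("4K TV from 8 ft : %.0f px/deg\n", pixels_per_degree(56.7, 3840, 96.0));
    /* a hypothetical 6.5" 16:9 phone with a 4K panel (~3.2" wide) at 12" */
    printf("4K phone at 12\" : %.0f px/deg\n", pixels_per_degree(3.2, 2160, 12.0));
    return 0;
}
```

With these ballpark inputs both land in the same general range, which is the sense in which a 4K TV across the room and a 4K phone in the hand offer similar visible detail.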
> The transition from PowerPC to Intel was about IBM and Motorola not being able to deliver parts...The transition from Intel to ARM is about Intel not being able to deliver parts...Apple knows it can deliver the parts it wants with its own processors at this point.
I think this is the key reasoning. There's something really interesting happening here. In the past, when Apple transitioned from Motorola to PowerPC, Apple wasn't big enough to design and fab their own chips; this was also true when they moved from PowerPC to Intel.
However, Apple has some choices here, and I think the decision comes down to long term supply-chain risk:
1) Switch to AMD. Their processors are blowing the doors off of Intel, at better prices. They aren't having the same process problems and their high-end components are fantastic. However, history has shown AMD surge ahead for a bit, then Intel, and back again. Apple probably doesn't want to risk this happening after they engage in some huge volume discount contract. More importantly, neither Intel nor AMD are winning in the lower power vs performance segments.
2) Become their own fabless designer. The risks are enormous. What if they can't keep pushing the performance envelope against Intel/AMD? What if their fab partners can't keep their processes moving forward? What if they fail to make this architecture jump (again)? But it gives them better supply chain control and increases their vertical integration.
In some sense it points to a weakness of highly vertically integrated companies...as a model it makes their entire product lines dependent on every component being able to progress. If any component lags, the entire product line suffers. So outside suppliers, who have multiple customers to please, become key sources of risk and it will become the instinct of the company to move the riskiest parts of its supply chain in-house.
If Apple is unable to keep advancing ARM chips in terms of performance (regardless of power) it will be a problem for them. But one final advantage of building their own, is that they can obfuscate this component from the rest of the market and make comparisons on cores/Ghz/etc virtually impossible. It's a bit like how Apple really doesn't even advertise how much RAM is in their portable devices.
> Apple wasn't big enough to design and fab their own chips, this was also true when they moved from PowerPC to Intel.
Note there were rumours going around about whether or not Apple was going to buy PA Semi (or at least contract them) for their PWRficient CPU (implementing PPC).
Ultimately they did buy PA Semi in 2008, though for the talent, and they've since designed all of the iPhone/iPad CPUs.
The fact that our low-power chips also happen to be lower die size is an artifact of path dependence. The primary market for lower power was battery powered devices of which phones are by far the most numerous. So, the lower power chips started there and didn't have much to do. As those have gained sophistication with time, they have also grown in die size.
Xeon and server chips generally want to maximize memory bandwidth--and they make a whole series of architectural tradeoffs to accommodate that.
Phone chips generally want to maximize power efficiency and basically don't care about memory bandwidth at all. They effectively don't want to turn on the system memory or flash, period, if they can avoid it. One way to do that is to cache things completely in local on-chip RAM.
Computer architects will make completely different tradeoffs for the two domains.
> Intel got Apple's business because they produced superior parts at a lower price
A big problem was that IBM was not designing low-power chips for laptops, and laptops were (and are) a major part of Apple's business.
Power6 (not PowerPC) hit 5 GHz in 2007 and Power has remained competitive - Power10 will be described at Hot Chips in August. Of course (perhaps consistent with "Hot Chips") these are not low-power architectures.
“Apple would talk about the "MHz-myth" a lot. While it's true that MHz doesn't equal performance, Intel was doubling the PowerPC's performance most of the time. The G3 saw Apple do OK, but then Intel went back to dominating in short order. The PowerPC never matched Intel again.
It was really bad. People with Windows computers just had processors that were so much more powerful and so much cheaper.”
Don’t know how much I would agree with that. I just took a look at the Megahertz Myth portion of the 2001 Macworld Expo video¹ where Apple compares an 867 MHz G4 processor with a 1.7 GHz Pentium 4 processor, and while I don’t know how valid Apple’s argument is in that video (don’t know enough about how processors work to judge), from the information they give, it does seem plausible that the 867 MHz G4 could outperform the 1.7 GHz Pentium 4 in some scenarios. Frequency certainly isn’t everything, especially when we’re comparing two different CPU architectures.
There were moments where PPC performance was acceptable or better than Intel, but they were brief and far between, and for most of its life the PPC was far behind Intel.
Take the 867MHz G4 you mentioned. There might have been some applications where the G4 was beating a 1700MHz Pentium 4, but at the time of the demo the top-of-the-line from Intel was 1800MHz and they released a 2GHz only a few days later. A year later Intel was shipping 2.8GHz parts, and Apple was selling 1.25GHz G4s. So whatever architectural lead the G4 enjoyed, Intel was eroding it with faster clock scaling.
This does not even mention the mobile space, where in 2003 Intel was shipping the Pentium M, not the Pentium 4, and it was the Pentium M which derived from the Pentium Pro/II/III and foreshadowed the Core product line. The G4 had no architectural advantages over the Pentium M. Apple's mobile products were stuck on the dead-end G4 for years.
I owned a Mac of some kind throughout the PowerPC era, but it was only because I had to run Mac applications. There wasn't anything good about them, except on rare occasions you got to see the AltiVec unit really go crazy. Most of the time you just got to marvel at how slow and expensive they were compared to the other PC on your desk.
The G4 was where the rot started to set in, but people are forgetting about the 601, 603 and 604, which had themselves several years of history in Apple designs. The 604 in particular was a real piledriver for some applications and easily competed with x86 of the same era, and the G3's integer performance was even better (its main Achilles heel was a fairly weak FPU, but this wasn't a major issue at the time for its typical applications).
I ran a 12-inch PowerBook as my main computer for 4 years while studying computer science (i.e. doing some relatively intense assignment projects on it). While the CPU was slower, I'd argue that the OS at that time made up for it - using 10.4-10.6 compared to XP and Vista was a breeze. I had 740MB of RAM that was very much under my control; the only background task I sometimes had to look out for was the search indexer. The OS X Terminal was light years ahead of the crappy Windows terminal at the time. PDF support across the OS was put to good use to create good-looking reports.
Now the tables have turned though. MacOS software has suffered greatly while MS has embraced Linux and the Terminal. Win10+WSL+Windows Terminal+VS Code today is the superior toolchain IMO because it gives you access to the package managers that will also run on your target servers.
The problem is that both Apple and Wintel were fighting for consumer customers; people who were proficient enough to word process, email, and browse, but not proficient enough to understand chip architecture. They just see a spec and assume higher is better. If you have to start from a position of arguing that your opponent's advantage is a myth, you're already at least a step behind.
Moreover, the enterprise market had lost that war already. Who cares if PowerPC was really faster? Intel was a cheaper chip that did the job. Everyone in my customer service, accounting, HR, etc. departments can have a PC for much cheaper than Apple.
I ported computation-heavy application code on the Mac OS to the PowerPC architecture, and benchmarked the results for the client. I do not have the graphs handy, but I can't agree with the first section here entirely. It is true that the performance did not live up to the expectations, but overall it was a win.
There was so much money, vanity and spin at that time, bringing millions of consumers into the world of computing, multiplied by media and unscrupulous marketers. I do not know what a consumer would expect in those days, depends on who you asked and their vantage point. Every camp was guilty of exaggeration I would say.
I don't believe it's going to be a 100% transition either way in the foreseeable future. Apple will move their low-end computers to ARM and keep their high end ones on x86. That way they will leverage over both sides whenever they want to get something out of them. "Hey, Intel, you know those expensive 10nm Xeons we wanted for the Mac Pros cheaper than you wanted to sell them? Would be too bad if we went with ARM this generation" "Hey, TSMC, know those CPUs for high-end MBPs? Give 'em to us for cheap or else."
I’d be a bit surprised about that. Apple likes to keep things like this unified because it drastically cuts costs across the board. Additionally the chip design is going to share a lot with their mobile variants. I would expect the entire lineup to be replaced for laptops. It’s less clear for things like the Mac Pro line (or if they’ll even bother continuing that line).
> Apple would talk about the "MHz-myth" a lot. While it's true that MHz doesn't equal performance, Intel was doubling the PowerPC's performance most of the time. The G3 saw Apple do OK, but then Intel went back to dominating in short order. The PowerPC never matched Intel again.
My memory is more like they leapfrogged each other from time to time. The first generation of PowerMacs such as the 6100 absolutely spanked contemporary PCs (100 MHz 486s and 50 MHz Pentiums if I remember correctly). It was by no means obvious at that time that Intel would catch up.
What killed PPC was the interested parties (IBM, Motorola, Apple) squabbling over CHRP, and Motorola being unwilling to work on a part that would fit the thermal envelope Apple wanted for laptops. The fundamental architecture of PPC is sound, or at least, sounder than that of x86.
The problem is that the G4 problems break the narrative about the PPC->Intel transition resembling an Intel->ARM transition. The reason the G4 stagnated was because Motorola was heavily focused on embedded processors and didn't prioritize the Mac. There's a similar risk with moving the Mac to A-series, Apple-developed ARM processors because Apple themselves have been prioritizing mobile devices over the Mac in recent years, leaving a significant risk that desktop-class and even laptop-class processors from Intel and AMD will once again leave them behind in performance.
> Intel machines for half the price with nearly double the speed and better specs on everything other than aesthetics.
Back then it was because the monitor and hardware were carefully calibrated - the gamma was strange but the colors were spot on (assuming correct ambient lighting). An Apple made a lot of sense for visual artists of all disciplines; they still generally have this edge today.
What I don't get: what, other than politics, keeps Apple from going AMD?
Apple is already working with AMD when it comes to dedicated graphics chips, and people have done Hackintoshes with AMD parts for ages... so why go the (risky) ARM route instead?
Apple spends a lot of money at TSMC (about 75% of TSMC 7nm production was for Apple chips in 2018, according to a link below). AMD also makes its chips at TSMC and TSMC fab is a limited resource. Instead of paying AMD to make chips at TSMC, Apple can just make them itself.
Why does Apple do anything? I think one factor that is always a concern for them is control. They switched to Intel for more control over their product lines.
Same issue here - they’ll switch to ARM for more control. AMD has some good hardware, but they’d just end up switching one outside chip vendor for another. So while a move to AMD would be “safer”, it still wouldn’t offer any more control over their products.
It's ultimately an intermediary step. Apple already has the tech for making multi-arch software, it's already a part of the toolkit, and they already design and sell ARM chips in their other product lines. It's not a no-risk scenario, but it's a low risk transition from a company that's managed two architecture migrations before.
The Mach-O object file format supports fat binaries and has run working executables on ARM, SPARC, PA-RISC, PowerPC 32-bit, PowerPC 64-bit, x86 and x86_64, first under NeXT and eventually under Apple. It's what every implementation of Mac OS X, iOS and its derivatives uses today, and there's nothing stopping Apple from supporting other architectures down the line other than they probably don't want to or need to. If they decided they wanted to revitalize Mac OS X Server and ship it with POWER9 or RISC-V CPUs, they could do that. Not sure why they would want to, but it's an option.
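For illustration, a rough standalone sketch (mine, not from the post) of how little it takes to inspect the per-architecture slices in a fat binary; the struct layout mirrors <mach-o/fat.h>, and the header is stored big-endian on disk.

```c
/* Standalone sketch: list the architecture slices in a Mach-O fat
 * (universal) binary. The fat header and arch table are big-endian on
 * disk, hence ntohl(). */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* ntohl */

#define FAT_MAGIC 0xcafebabeu

struct fat_header { uint32_t magic; uint32_t nfat_arch; };
struct fat_arch   { uint32_t cputype; uint32_t cpusubtype;
                    uint32_t offset;  uint32_t size; uint32_t align; };

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    struct fat_header fh;
    if (fread(&fh, sizeof fh, 1, f) != 1 || ntohl(fh.magic) != FAT_MAGIC) {
        printf("not a fat binary (thin Mach-O or something else entirely)\n");
        fclose(f);
        return 0;
    }

    uint32_t n = ntohl(fh.nfat_arch);
    printf("fat binary with %u architecture slice(s)\n", n);
    for (uint32_t i = 0; i < n; i++) {
        struct fat_arch fa;
        if (fread(&fa, sizeof fa, 1, f) != 1) break;
        printf("  slice %u: cputype=0x%08x offset=%u size=%u\n",
               i, ntohl(fa.cputype), ntohl(fa.offset), ntohl(fa.size));
    }
    fclose(f);
    return 0;
}
```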
If they're going to end a working business relationship with a supplier anyway and they have CPUs that outclass their current supplier in CPU performance (an A13 in a $400 iPhone SE outclasses the Xeons in their $6K Mac Pros in single threaded performance), they might as well go the whole hog, skip the intermediary step of switching suppliers (which might mean signing some kind of multi-year deal during which Intel might start outclassing AMD, so that's not without risk either) and run their own designs through TSMC fabs.
Apple has always been that company that likes to own the whole widget, or as much of it as possible. If they didn't, they might have switched to Windows NT in the late 90s rather than buying NeXT. They don't make everything that goes into a Mac or iPhone, they don't even fab the CPUs in their iPhones, just design them, but they figured out which parts they can control the designs for and which they can source from others and made it work by progressively integrating more of the design work in-house.
I am more worried about what it means to us consumers.
While Apple producing ARM-based systems might mean slightly cheaper machines, if it comes at the cost of giving up control to customize your hardware and software, I will never be looking at another iDevice.
We already have to face the bullshit of soldered RAM and soldered SSDs and locked app stores, and now I fear with ARM chips we will be faced with an even more locked-down, iOS-like macOS with installs only possible from app stores. And of course, everything linked to iCloud to leech off and data mine our personal information. The "secure" chip, and perhaps a locked bootloader, will ensure that we won't be able to install any other OS on it, and Apple could even remotely cripple the device with it.
(As you can tell, I am not at all enthused by this move. And mark my word, this is what Apple will do eventually.)
Soon we shall find out. I'm actually feeling optimistic. I'm not sure we're going to get that particular signal on Monday, but if Apple doesn't decide to lock down the OS completely with an ARM transition, I don't think they ever will, and the more pressing concern will be whether Apple decides to keep the Mac line at all down the road.
That said, I keep virtual machines of operating systems that interest me up to date. I'm between 9front and Haiku OS as my eventual replacement, and I still might switch to a ThinkPad running 9front for my next computer regardless of what Apple announces. I'll still have much of what I value on a Mac in my iPad, and my laptop is essentially for writing, programming and backing up other hardware.
PCs are the exception that confirms the rule, and their existence can be traced back to the point where IBM's legal team wasn't able to kill what Compaq had set free.
The 90s Apple, like everyone else, had its own vertical integration in programming stack, network protocols and hardware. Also, not every model had internal expansion slots; what we bought was what we got for the device's lifetime.
Naturally the more expensive LC and Quadras had enough internal bays, given their business purposes.
The PowerPC failure seems to be on IBM and Motorola. However, this time it's Apple's design, manufactured by TSMC. Would this combo make any difference?
Well when you design your own CPU, you can't blame the people that made your CPU without blaming yourself. It's like that. IBM and Motorola were ultimately their own businesses, and had their own interests that weren't entirely aligned with Apple.
So if somewhere down the line Apple makes the CPUs in all of their Macs and can't compete, they have no one to blame but themselves. Right now they're not really failing to compete though, as their computers are stuck in the same holding pattern everyone else who depends on Intel is. They could switch to AMD parts, but they're trading one horse out for a horse of the same breed that's maybe a bit younger and prettier.
So if they're angling to axe their relationship with Intel down the line, at least for CPUs, why bother switching suppliers temporarily when they can switch their existing supplier out for something they designed in-house when they're ready and skip the intermediary step?
All reports I've seen over the past month seem to indicate that time is upon us. There's enough smoke that if Apple hasn't quietly passed a memo to the WSJ mentioning that they're not announcing anything of the sort on Monday by the close of yesterday, then the time is most likely now (well, Monday for the transition announcement, likely 2021 for the first shipping products).
> You can say that Apple always charges a premium, but not too much today on their main lines. Apple simply doesn't sell low-end stuff. Yes, a MacBook Pro 2GHz costs $1,800 which is a lot. However, you can't compare it to laptops with crappy 250-nit, 1080p screens or laptops made of plastic, or laptops with 15W processors.
Last time I checked, Apple was selling laptops with an Intel i3 processor packing 8GB of RAM and a 128GB SSD for $1,300.
Apple's cheapest laptop carrying more than 8GB of RAM is selling for around $2,300.
You can argue that you like Apple's gear, but the myth that they are not way overpriced simply doesn't pass any scrutiny.
Internally Apple always had an x86 build of OS X running. Just like I’m sure they have an ARM build running today.
Intel chips ran cooler, had better power efficiency and were way faster at a lot of things than PowerPC.
Leadership was actually super reluctant to switch and it took a demo from an engineer showing the massive improvement to convince them.
If Apple is ready to switch to ARM they must have some impressive CPU. Apple's not usually one to dabble here and there, so if they switch it's going to be a wholesale ordeal. What will this look like for the Mac Pro?
> If Apple is ready to switch to ARM they must have some impressive CPU. Apple's not usually one to dabble here and there, so if they switch it's going to be a wholesale ordeal.
Apple has an ARM-based product on sale today that has a better screen, better battery life, and better performance than the 2020 MacBook Air for many workloads, including compiling and doing development work (it crushed the Air in the benchmark sets that can be run there).
That product is the 2020 ipad pro, and that is why there are so many posts about people actually trying to turn them into development machines.
This product is one step away from being a real laptop: it is missing real MacOSX.
The performance of the Intel-based Macbook Air has increased by 1.8-1.9x from 2012 to 2020 (that's 8 years). Apple already has internal A13 prototypes, and A14 is probably going to the prototype phase right now.
I can't imagine the numbers working in favor of Intel. By 2021 Intel can probably deliver a 1.1-1.2x speed up tops for Macbook airs, while Apple can probably deliver a 2x speed up for 2021 with the A13 and another 2x one for 2023 with the A14. Being able to scale the same chip for iphones, ipads, and macbooks, reusing internal resources, and avoiding the "hassle" of having to deal with Intel.
The only thing that's IMO up in the air is what the discrete graphics story is going to be for MacBook Pros and Mac Pros.
It's unclear whether it would make sense for AMD to deliver discrete gfx products that interface well with ARM, and the Apple-Nvidia bridge burned a long time ago. So I wonder whether Apple has openings for electrical engineers to work on discrete graphics verification & design, and for driver developers. That would strongly hint that Apple will design their own discrete gfx in-house.
The A12 is clearly at parity now vs. Intel's existing mobile offerings, probably somewhat ahead given the long delays with 10nm.
The question upthread is whether or not switching architectures makes financial sense, not whether it's a (mild) technical win.
Switching to their own chips cuts Intel out of the loop, but as far as business risk that simply replaces one single source manufacturer with another (TSMC).
It probably saves money per-part, which is good. But then Apple is still drowning in cash and immediate term savings really aren't much of a motivator.
> By 2021 Intel can probably deliver a 1.1-1.2x speed up tops for Macbook airs, while Apple can probably deliver a 2x speed up for 2021 with the A15 and another one for 2023 with the A16.
That's going to need some citation. Moore's law is ending for everyone, not just Intel. TSMC has pulled ahead of Intel (and Samsung has caught up) for sure, but progress is slowing. That kind of scaling just isn't going to happen for anyone.
>Moore's law is ending for everyone, not just Intel.
I am by no means "in the know" on chip design and this whole bit is probably a fair bit of speculation, but I remember Jim Keller talking about the ending of Moore's law on a podcast in February[1].
If I remember correctly, his argument boiled down to the theory that Moore's law is in some sense a self-fulfilling prophecy. You need every part of your company believing in it, or else the parts stop meshing into one another well. I.e., if a team doesn't believe they will get the density/size improvement that would allow them to use more transistors in their design, they will need to cut down and adjust their plans to that new reality.
If this distrust in improvement spreads inside of a company, it would in turn lead to a steeper slowdown in overall improvement.
And while there may be an industry-wide slowdown at the current point in time, perhaps this dynamic is exacerbated at Intel, causing them to lose their competitive edge over the past years.
Intel's 10nm strategy was basically to do everything they could to advance their fabrication process without having to use EUV. Some of those changes turned out to be bigger risks than EUV. TSMC was a bit less aggressive with their last non-EUV nodes, but it actually worked and now they have EUV in mass production (though few if any end-user products have switched over to the EUV process at this point).
And this is a thread in the rust subreddit about compilation speed on macbooks where some users report the performance increase for different generations of macbook pros and macbook airs, if you want a more "realistic benchmark" to calibrate geekbench results: https://www.reddit.com/r/rust/comments/gypajc/macbook_pro_20...
You are definitely right that Moore's law is hitting Intel hard. But AMD is still doing quite well, Nvidia and "ATI" are doing incredibly well, and Apple chips have been doing extremely well over the last couple generations.
Maybe you are right, and Apple won't be able to deliver 2x speed ups in the next 2 generations. I'd expect that, just like for Intel, things won't abruptly change from one gen to another, but for this to happen over a longer period of time. Right now, only apple knows what perf their next 2 gens of chips are expected to deliver.
The only thing we know is that Apple ARM chips are crushing their previous generation both for ipads and iphones year after year, and now they are betting on them for macbooks, and potentially mac pros, probably for at least the next 10-15 years.
> This is the ipad pro 2020 crushing the macbook air 2020
You keep coming back to that citation. It's more than a little spun. The parts have comparable semiconductor process (Intel 10nm vs. TSMC 7nm) and die size (146.1 vs. 127.3 mm2). But the A12Z in the iPad is running as fast as Apple can make it (it's basically an overclocked/high-binned A12X), whereas the Intel part is a low-power, low-binned variant running at about half the base clock of the high end CPUs, with half the CPUs and half the L3 cache fused off.
A more appropriate comparison would be with something like the Core i7 1065G7, which is exactly the same die and can run in the same 12W TDP range but with roughly double the silicon resources vs. Apple's turbocharged racehorse.
But the A12Z (based on the A12) is not using the latest TSMC process as used for the A13 (which is almost certainly in higher volume production than Intel 10nm).
Plus, if Apple can afford to put an overclocked / high-binned TSMC chip in the lower-cost iPad but has to put a low-binned i5 in the Air, doesn't that say something about the relative economics / yields?
For what it's worth I have a Core i7 1065G7 and it's decently fast but gets very hot and definitely needs a fan (which the iPad doesn't) and has good battery life (but not as good as the iPad's).
The advantage still seems to me be to be very strongly with the Apple parts.
> if Apple can afford to put an overclocked / high-binned TSMC chip in the lower-cost iPad but has to put a low binned i5 in the Air doesn't that say something about the relative economics / yields.
Potentially. It probably also says more about the relative product positioning of the iPad Pro (high end, max performance) vs. MacBook Air (slim, light, and by requirement slower than the MPB so that the products are correctly differentiated).
The point is you're reaching. The A12 is a great part. TSMC is a great fab. Neither are as far ahead of the competition as Apple's marketing has led you to believe.
Both products (iPad Pro 2020 and MacBook Air 2020 i5) are similarly priced (the iPad Pro being ~25% cheaper), yet the iPad Pro has longer battery life while having a much better display, and much better raw performance (~1.6x faster!).
The first Apple machine with an i7, the MacBook Air 2020, starts at ~1.6x the price of the iPad Pro, yet comparing the A12Z with that i7 shows that, performance-wise, nothing really changes: https://browser.geekbench.com/v5/cpu/compare/2626721?baselin...
I'd suspect the reason is that these benchmarks are long enough, that the Intel CPUs just end up completely throttled down after the first ~30-60s, while the A12Z does not.
Either way, the trade-offs here have multiple axes, and honestly die-size is not something I as a user care about. I care about performance / $ and battery life / $. The A12z seems much better along these two axes than either of the Intel CPUs in the Air 2020.
To find something competitive in terms of performance (but worse battery life) from Intel, one does need to go to the Core i7 1065G7 that you mention, which delivers approximately the same performance as the A12Z: https://browser.geekbench.com/v5/cpu/compare/2627162?baselin...
However, the first machine with that CPU is the MacBook Pro 13" 2020, and that starts at ~2x the price of the iPad Pro. From looking at the upgrade prices, the cost of that i7 alone might by itself be ~50% of the whole iPad Pro's cost, or more.
So while you are right that these two are comparable in terms of raw power, I doubt they are comparable in terms of performance/$ and battery life/$. Without knowing the exact costs of each we can only speculate. If that Intel i7 is 2x more expensive than the A12Z, then perf/$ would be half as good, and since raw battery life is also worse, the battery life/$ axis would be more than twice as bad.
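To make the speculation concrete, here's the arithmetic spelled out as a tiny C sketch. The 2x chip-cost multiplier and the battery-life gap are the hypothetical ratios from the paragraph above, not real bill-of-materials data:

    /* Back-of-the-envelope perf/$ and battery-life/$ comparison.
       All inputs are the hypothetical ratios from the comment above. */
    #include <stdio.h>

    int main(void) {
        double perf_a12z = 1.0, perf_i7 = 1.0;   /* roughly equal raw performance */
        double cost_a12z = 1.0, cost_i7 = 2.0;   /* hypothetical: i7 costs 2x the A12Z */
        double batt_a12z = 1.0, batt_i7 = 0.8;   /* illustrative: iPad lasts longer */

        printf("perf/$:    A12Z %.2f vs i7 %.2f\n", perf_a12z / cost_a12z, perf_i7 / cost_i7);
        printf("battery/$: A12Z %.2f vs i7 %.2f\n", batt_a12z / cost_a12z, batt_i7 / cost_i7);
        return 0;
    }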
> I'd suspect the reason is that these benchmarks are long enough, that the Intel CPUs just end up completely throttled down after the first ~30-60s, while the A12Z does not.
The reverse is most likely true. The macbook has a bigger aluminum body to absorb and dissipate heat. In addition, it has an active cooler with heat pipes to conduct away that heat for ejection from the system.
> And this is the improvement from the previous generation ipad A10 to the ipad pro's A12Z - 2x speed up in a single generation
Single product generation, not single chip generation. There is a new A SoC every year. The A11 was a thing for the iPhone X & 8.
Apple doesn’t claim to “tick/tock” like Intel did. Apple is also dealing with significantly less mature technology, which enables exponential gains of the sort they’ve been delivering since the A4.
The technology is maturing. The hockey stick growth doesn’t work forever for any metric. Given enough time, it will always flatten out. Apple is already seeing that too. They’ve gone from year on year 2x performance to (per your above) 2x every 2 years.
That timeline will continue to stretch. The only major advantage Apple’s A chips have over Intel’s is that they can be optimised for Apple users’ (and I do mean the large aggregate, not the niche coders) typical use cases to maximise battery life. There are no other customers Apple cares about beyond that large majority.
There is no point in benchmarking anything: Macs have long shipped with CPUs one to two generations behind. If performance or efficiency (they’re interchangeable in this regard) on the desktop/notebook fronts mattered to Apple as a company, that would never have happened. Clearly there are other factors beyond the performance/efficiency curve at play, and so it’s not a matter of benchmarking anything.
You missed my point. I have no problem with the machine he’s benchmarking; my point is rather that if performance or efficiency were the be-all for Apple, they’d never have shipped a non-Pareto choice / something behind the curve. Clearly there are other factors that lead to their choices.
Yes, the 10th generation of the Facebook/email CPU
Nobody would use that for real work. Apple won the laptop market when developers started using it; now they are killing laptops in favor of tablets, which are gadgets, not work machines.
I recently got an upgrade, but I spent quite a long time working as a professional data scientist and data engineer on a 2013 MacBook with one of those Facebook/email CPUs. Quite happily, too. It was never going to hack it training any deep neural nets, of course. But that is what our data center is for. I wouldn't want that kind of stuff running locally, anyway, for a whole host of reasons. And it turns out that analyzing data in Jupyter or R can easily be a lighter workload than displaying Facebook ads or doing whatever it is that Google has recently done to Gmail.
I will admit that our front end developers do all have nice high end machines, multiple high-resolution monitors, all the good stuff that Jeff Atwood tells us we should buy for all developers because They Deserve the Best. I attribute our site's sub-par (in my opinion) UX on the Facebook/email CPUs and 13-15" monitors that our users typically have, in part, to the fact that my colleagues' belief that high-end kit is necessary for getting real work done is a bit of a self-fulfilling prophecy. There's not much intrinsic incentive to worry about user experience when your employer is willing to spend thousands of dollars on insulating you from anything resembling a user experience.
It always amuses me how warped many HN commentators' perspectives are on what is and isn't "real work". At times it borders on the "no true Scotsman" fallacy.
I assure you that plenty of people are using those computers and getting paid for the work they do on them.
Most of these people do their work without writing any JS too, enabling them to cope without 32TB of RAM to manage their 16 tabs of JavaScript each running in its own chromium instance.
Some day, we’ll re-learn some old lessons about the need for efficient utilisation of resources. In the mean time, check out the latest old game re-implemented at a fraction of the frame rate in your browser.
To be fair, my laptop at work has 8 cores and 64GB of RAM and it's already 3 years old.
I'm gonna change it soon not because it's not good anymore, but because some of the hardware has become too slow for the things that are necessary nowadays.
Mind you, it's not only my choice; things are more complex. Deadlines aren't longer and quarterly reports still come every 3 months, but the things we have to do have become more complex and computationally heavy and need new hardware to keep up.
I'm a hobbyist musician; a $300 laptop is good enough, and even if I were a pro it would be enough.
Truth is that the computational power of a $100 smartphone would be enough.
So a new chip by Apple doesn't change the fact that their mobile line is already overpowered for the average use cases and the laptop one is underpowered and overpriced.
I think the assumption is that users on HN are devs. I mainly do ops and have a huge vCenter installation available to me, as well as AWS and Azure test accounts. My specced-out work MacBook Pro mainly runs iTerm, nothing that my personal MacBook Pro from 2013 can't do.
There are only two things I use at home that could benefit from an upgrade: Apple TV hobby projects and YouTube (not the video playback, that's fine, but the pages load slowly).
> I think the assumption is that users on HN are devs.
That's a fair assumption. My problem is with the other poster using the term "real work" to imply that Apple's devices are underpowered or useless. And even then, if they are, there's a lot of dev work that can still be done on machines a decade old performance-wise.
I'm starting to think I'm in an episode of The Twilight Zone: we are in an alternative world where people suffer from severe cognitive dissonance and can't argue properly.
Real work, when referring to computers, means heavy load.
The original post said "Apple is killing laptops to sell more tablets, which are gadgets, and you can't do real work on gadgets"
Which is true
There are a lot of people driving push scooters, you can't do real work with them, you need a proper vehicle if your job requires moving things and/or people all day
Paper and pen have nothing to do with laptops and, usually, brain is more powerful than an i5
> Real work, when referring to computers, means heavy load
I think this is perhaps the source of your problem. You are assuming that others interpret the phrase "real work" to mean exactly what you think of when you hear the phrase.
For many people, when you say someone is not doing "real work", you are implying that their work is not important, or it's not valid. If someone says to you "why don't you go get a real job" - it's the same kind of thing. There are plenty of jobs in our industry writing CRUD apps for businesses, for example. Those ARE "real work", no matter how common or unglamorous they might be. However many of those jobs can easily be done on a machine with very modest resources.
Yes, there are jobs where the demand on computer hardware is much more resource-intensive. But it is a mistake to assume that those scenarios are what people will think of when you use the phrase "real work".
Here on HN real work is not something an i5 can handle
And if an i5 can handle your workload, then the A13 won't make a difference either in price or performance (because you already don't care about performance); it only matters to Apple's profits
So no, an i5 is not enough for doing real work in technology
Any developer here on HN would tell you that with 1k you can find much better deals for the money
If your job is not technology related you can do it with a 5 years old laptop of any brand
We buy new machines because we always need more power; for everything else that is not real work I own a 3-year-old 12-inch Chinese laptop with an i3 that runs Ubuntu and is just perfect
You might be amused, but you wouldn't accept it as a work laptop if your company gave you one
> If your job is not technology related you can do it with a 5 years old laptop of any brand
That really depends on what technology you're working with. In general, if you're not working with a bloated JS project or a multi-million line C++ codebase, a computer from the last decade will do just fine as long as it has 8-16 GB of RAM.
I mean, these days the difference between an i5 and i7 is almost non-existent to me, as when possible I disable hyperthreading out of an abundance of caution.
There's a lot of "real work" in tech that can be handled on an i5.
Most embedded programming work could easily be done on an i5 from a decade ago.
> We buy new machines because we always need more power
We need more power because people keep developing bloated software for newer machines.
---
Can you define to me what real work is? Without simply saying "it's work that needs more than an i5 to handle", that is.
> computer from the last decade will do just fine as long as it has 8-16 GB of RAM.
That was my point as well: if you aren't working on anything that requires power, you don't need a "working machine"
But if you do, the A13 is not a solution
> We need more power because people keep developing
I'm not the one putting hundreds of ML/AI models in production
But I do enjoy having a system that makes it possible to test an entire stack on a single machine, something that just a few years ago required multiple VMs and complex setups
Even if you're developing puzzle games for low-end Android phones, an i5 is not enough
You can not believe it, but it's the truth
> Can you define to me what real work is?
Of course I can, even though you can't define what it is that can be done with a baseline i5 that qualifies as "real work"
A typical dev will do some or all of these things, or more:
- open up an editor, with multiple files open, with integrated language server, linter/formatter, background error checker
- open up an IDE (JetBrains, Android Studio, Xcode) on a middle-sized project with a few tens of dependencies and start working on it
- launch a maven/gradle compile
- launch a docker build
- launch a docker-compose up with a main application and 2 or 3 services (DB, redis, API backend)
- launch training on any of the available ML/AI frameworks. Of course you'll launch it on a very limited subset; it's gonna be slow anyway
- process gigabytes (not even in the tens of gigabytes) of data
- on my i3 even apt upgrade is slow. That's why I use it as a media player and not as a work machine.
I really doubt they are, I'm an average programmer
My laptop is a working tool for professionals; if I use it as a dumb typewriter I don't really need modern-generation CPUs, a Pentium 2 would be enough
Ram is a more pressing issue these days, given the amount of bloated software one has to run just to page a colleague about something (yes slack, I'm talking about you, but not only you...)
When I am at my computer working I want it to do things for me while I do something else effortlessly, without noticing something else is going on
If I have to watch it while it finishes the job, it would be just a glorified washing machine
And it means it is underpowered for my workload
That's why people usually need more power than the baseline: the baseline is the human using it. The computer's job is not just to display pixels on command; it's much more than that.
Imagine you are an administrative employee typing a report on your laptop. You're doing real work, BUT you're not doing real work on your laptop; or, to put it better, your laptop is sitting idle most of the time, which is not real work for it.
Work is a physics concept, real work means there is force involved and energy consumed
If the energy is minimal or the force applied almost zero, there is almost zero work done
> real work means there is force involved and energy consumed
See my reply earlier in the thread. I think the primary source of contention here is that you assume everyone should only think of the definition you give for the phrase "real work".
Yup, I bought that one within days of the October 2016 announcement because the new ones were outrageously expensive compared to what I was used to and I was not willing to give up inverted-T ;)
There are tools that work and tools for professionals. Tools work; professional tools are for people who use them for their daily job and whose job depends on them.
Your opinion of "what works" is not universally shared by everyone.
You don't know the details of saagarjha's work developing on Android (unless, perhaps, you actually know them in real life, which seems unlikely given the way you responded), and neither do I. saagarjha is the one best able to determine what works for them.
If your point is that having more resources available to you can make you more productive, that's fine. It's always nice to have as beefy of a machine as possible.
However, not everyone has that luxury. Businesses have budgets, and most developers I know don't get to set their own budget for equipment. Sometimes you are lucky, and the money is there, and your management is willing to spend it. Sometimes that is not the case. Regardless, my primary development machine right now is a 5-year-old laptop, and I get plenty of development work done with it.
The way you worded this latest response makes it sound as if you are saying that I am not a professional, and my tools are just "toys", because I don't work on an 8 core machine with 64GB of RAM. I don't know if that is your intention, but if it is it is both inaccurate and insulting.
> Your opinion of "what works" is not universally shared by everyone.
Earth orbiting around the Sun wasn't either.
See, the problem is not whether you are a professional, but whether the tool is.
If I do the laundry and the washing machine takes 4 hours to complete a cycle I'm still washing my clothes, but I'm not doing it using a professional tool
There's no place where I implied people using less-than-optimal tools are not professionals; I'm talking exclusively about tools.
I agree with every single point you said and have been stating something similar.
>It probably saves money per-part, which is good. But then Apple is still drowning in cash and immediate term savings really aren't much of a motivator.
The only possible reason I could think of is to lower cost and lower selling price (while retaining the same margin). A MacBook 12" (or will it be a MacBook SE?) that costs $799, the same price as the iPad Pro.
It is basically Apple admitting that tablets with touch computing will never take over PCs with keyboard and mouse. Both will continue to coexist for a long time if not indefinitely. And this isn't a far-fetched statement. Most enterprises have absolutely no plan to replace their office desktop workflow with tablets. The PC market is actually growing. There are still 1.5B PCs in the world, of which only 100M belong to Apple.
I still don't understand how they will give up x86 compatibility in the Pro market, though. They could make the distinction where every Mac product with "Pro" in the name uses x86 and non-Pro uses ARM. At least that is my hypothesis.
The iPad Pro not taking over the PC is a self-fulfilling prophecy as long as Apple does not allow it. With the mandatory App Store and its restrictive rules, there are many things you just cannot do on an iPad. I wouldn't consider a MB Air if the iPad had the same capabilities. That it doesn't is purely a software limitation.
For consumers and business users, you have Excel and email. The iPad already does 95% of what most Mac users do on their desktops, if not more, and yet it hasn't taken over. It has not taken over by the numbers; there isn't even a trend, projection, or glimpse of hope that anything has started.
The tablet and the PC are simply different paradigms, each best suited to its own purpose.
It is the same narrative that the smartphone will take over most of your computing needs. At first it seemed obvious: nations that hadn't been through the PC era would go straight to the smartphone. And yet 5 years later the biggest growth area for PCs is those smartphone nations.
> But then Apple is still drowning in cash and immediate term savings really aren't much of a motivator.
Why would you think so? Apple is a for profit company. Apple consistently makes more than 35% or so in margins. Why wouldn’t it make sense to increase that wherever possible (to increase profits or offset some discounted pricing it’s offering elsewhere, like a free one year subscription to Apple TV+ on purchasing a new device)? Also consider the impact of COVID-19 for the next one year or so.
That could be because of cheaper production costs or because Apple has seen iPad sales dropping and traded some of its margins to sell them at lower prices. As a for-profit company with very good margins, it only makes sense that Apple would continue to maximize that and not let it slip a lot, even if the gains may seem minimal to an outsider. It's also the same reason why Apple continues to sell Macs at the same price as at launch even years later without any hardware updates, even though Macs are a small percentage of its total revenues.
It actually makes quite a bit of sense more so than buying Intel/AMD CPUs if you think about it.
The CPU does little to nothing in those machines; using a bespoke ARM design will allow you to have as many PCIe lanes as your heart desires and also to optimize other things such as memory access, making those solutions even more optimal than using off-the-shelf general-purpose server platforms.
Discrete graphics are not dependent on the processor instruction set, they just need to interface with the PCI bus. All AMD has to do is deliver their cards, the drivers are FLOSS already.
I assume the drivers might have a lot of x86-specific optimizations, so porting is easy (but not too easy).
I'm also not sure what the status of PCIe on ARM machines is at the moment. Do PCs still have north and south bridges to interface with multiple devices?
> The only thing that's IMO in the air is what is going to be the discrete graphics story for macbook pro's and Mac pros.
My opinion is that the A13's graphics performance is not significantly behind discrete graphics cards today, and we know Apple has historically been able to scale up by over 50% when adapting the integrated graphics from the iPhone to the iPad Pro.
I wouldn't rule out a competitive iGPU from Apple once you are operating at a higher TDP.
I don't think the complexity of chip performance can be captured by assigning a trajectory number to wholly different product lines and assuming we can use the trajectory we hand waved into existence to predict future performance.
This is especially true when our sole means of comparison is a singular synthetic benchmark we are assigning as the arbiter of truth because its hard to compare actual applications.
I wonder if the size vs. needs has changed in general as the adoption of large screens seems to have slowed down a bit in my environment (probably because everyone that wanted one now has one).
In any case, I don't think you can compare ARM to Intel right now, when talking about switching chips. The question is ARM vs AMD, and with the amazing AMD chips coming out I'm not convinced that the ARM chips can be much of an improvement.
I'm not so sure. Just looking at all the dongles they ship, it seems they are more interested on what is better for Apple rather than what is better for the end user.
Any time there's competition, companies need to make things that users want more than their competition, and most choices towards that goal are beneficial to the consumer. Users want a faster cpu at a good price, and AMD and intel competing has lowered prices and increased performance.
Heck, until Rhapsody Developer Release 2 Apple shipped IA-32 builds. It was only after that, with Mac OS X Server 1.0 that they dropped IA-32 support publicly. That they'd kept the whole thing running on IA-32 never seemed at all far-fetched.
Oh, sorry! I have a NeXTstation in my retro collection and have a fondness for those old machines. I plan to install NeXTStep on a Sparc but haven't got there yet...
Note that that is SPEC, which is a synthetic benchmark that chip makers have been playing games with forever because it's quite sensitive to compiler optimizations. Much the same issue with browser benchmarks when you're using Apple's browser engine on Apple's chips -- expect them to have specific optimizations for the code in common benchmarks.
I'd really like to see some independent benchmarks here. It'll be interesting if they actually release a macOS device and then we can see how it runs GIMP and 7-zip and Firefox etc.
You can blame SPEC for lots of things (the most important one being that it may not be relevant for your workload), but I don’t think you can blame it for not being an independent benchmark. It’s made by a non-profit (https://en.wikipedia.org/wiki/Standard_Performance_Evaluatio...)
Yes, companies can try to influence the definitions of new SPEC benchmarks to make their CPUs look good and they also may spend time tweaking their compilers to look good on it, but it’s not that they write those benchmarks unopposed by representatives of other companies.
The SPEC benchmark code is rather short and synthetic. The vendors then write compiler "improvements" which are specifically designed to optimize that exact benchmark code for their processors. It often causes the results to be unrepresentative of performance on general-purpose code, which nobody optimizes in that way.
And that competitiveness is without taking into account the different operating environments.
The power envelope afforded to the Intel chip is dramatically larger than that of the Apple chip. Apple is fighting with one arm tied behind its back here; it'll be truly interesting to see what they can do on a level playing field.
> If Apple is ready to switch to ARM they must have some impressive CPU.
Not necessarily. The tech advantage that Apple gains by designing its own CPUs is ease of adding its own specialist circuitry for computer vision, speech recognition, and other difficult sensing tasks.
Think Siri on device, not over the internet. And a Siri who is aware of conversational context, and the environment generally. Siri writing meeting minutes -- being asked for them after the meeting is over.
OK, that's probably not what Apple is thinking, but rapid circuitry addition is a real technical option that likely has real value to Apple.
Unified architecture is another thing that has value.
They already do similar things on the T2 chip. It has a video encoder and other processing features. Maybe integrating it into the SoC makes it possible to implement other features that work alongside CPU processing.
Is there any benefit in switching to ARM for beefy desktop machines?
I can see the benefit of ARM for the iMac or a Mac Mini, but I imagine on a tower like the Mac Pro Apple will not want to compromise on performance regardless of heat or power consumed. I could be wrong but I think the Mac Pro would rather switch to AMD than to ARM.
I wasn't in the room, I was only an intern at the time, but I worked with engineers who were. They literally had an ugly beige Dell machine with the OS running. Although I can't seem to find any article backing this up, it was widely believed internally that Avie and Rubinstein left because they didn't agree with the change. Avie for sure was quoted as saying the PowerPC made the Mac special.
> Internally Apple always had an x86 build of OS X running.
This is true.
> Intel chips ran cooler, had better power efficiency and were way faster at a lot of things than PowerPC.
This is false, and so much so that it brings into question the whole comment. Heat from Intel parts was massively greater, and power consumption overall was greater on Intel parts for similar work. "faster" would have to include some specifics. A low-end Intel part would not beat an expensive PowerPC part, obviously.
> Leadership was actually super reluctant to switch and it took a demo from an engineer showing the massive improvement to convince them.
There were many demos internally and a changing cast of characters at the executive level, all along.
>If Apple is ready to switch to ARM they must have some impressive CPU.
The key element in the CPU market is volume. Volume lowers the cost of manufacturing and allows you to spend much more money on R&D, as it is amortised over more devices. While all the big RISC manufacturers had in principle better architectures than Intel, in the '90s Intel killed them one by one due to the insane volumes of the PC market. Only in servers did PowerPC and SPARC survive. This is what forced the PowerPC-to-Intel transition; Apple had little choice. I always keep wondering what the outcome would have been if one of the large RISC platforms had been made available in more consumer-level products, e.g. offering an ATX motherboard for running Linux. Volumes would have been much larger.
Another big factor for Intel was that, financed by their huge cash flow, they had the most advanced fabs, so competitors were often 1-2 generations behind in the available processes.
But now a few things have changed. First of all, Intel got stuck with their 10nm process, so they are no longer the manufacturing leader. Most importantly, TSMC pulled ahead of Intel and offers its services to everyone in the market. For the first time, AMD has a manufacturing advantage over Intel.
And the iPhone happened, giving Apple an almost endless supply of money and huge volume. Over many years, Apple built up a leading chip-design team. This already paid off big with the iPhone, which has by far the most powerful CPUs in the mobile space. This also gave Apple a clear insight into the advantages of really owning the whole platform: designing the software and the CPUs together.
Offering desktop-class CPUs is of course a large additional investment - so it is not a trivial step. But if Apple is willing to do it, it should be very interesting and would give hope that they have ambitious plans with the Mac, as it only makes sense if they really push the platform.
The big problem with switching CPUs is the instruction set.
For most people, it doesn't matter. But if you are in some niche domains, it really has an impact. I don't expect a smooth transition of libraries such as BLAS or VMs such as the JVM. You can't simply recompile these. You typically need a human to rewrite SSE, AVX and other tricky low level code so that performance stays competitive.
> You typically need a human to rewrite SSE, AVX [...] so that performance stays competitive.
Not true. As I commented a few days ago: there is sse2neon https://github.com/jratcliff63367/sse2neon. For the intrinsics it supports, you only need to add a header to automatically map SSE intrinsics to NEON. There is also simde https://github.com/nemequ/simde. It is a larger project and may be more complete. These projects are still immature for sure, but when ARM Macs become a real thing, we will see better libraries that support SIMD across different architectures.
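To make the "just add a header" claim concrete, here is a minimal sketch of how that kind of shim gets used. It assumes the header is named sse2neon.h; the exact filename and casing can differ between forks of the project, so treat the include as an assumption:

    /* The same SSE-intrinsic source built for x86 or ARM.
       On x86 the intrinsics come from the compiler's <xmmintrin.h>;
       on ARM, sse2neon re-implements them on top of NEON. */
    #if defined(__aarch64__) || defined(__arm__)
    #include "sse2neon.h"    /* assumed header name from the sse2neon project */
    #else
    #include <xmmintrin.h>   /* native SSE intrinsics */
    #endif
    #include <stdio.h>

    int main(void) {
        float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
        float out[4];

        __m128 va = _mm_loadu_ps(a);             /* unaligned 4-float loads */
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(out, _mm_add_ps(va, vb));  /* out[i] = a[i] + b[i] */

        printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
        return 0;
    }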
Apple has the Accelerate.framework already, which is hand-tuned per chip-type, and is what most of the libraries call into. I’d imagine a lot of work will have been done to make that as seamless as possible on the new chips.
It’s also kind of useful for a framework team to be able to call up the guy designing the next CPU and say “this bit here is a bottleneck, what can you do for that?”...
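For what it's worth, a call into Accelerate looks identical regardless of which CPU sits underneath; the per-architecture tuning lives behind the framework. This is only a minimal sketch of vDSP usage, assuming you build on a Mac and link with -framework Accelerate:

    /* Element-wise vector add via Accelerate's vDSP; the same call works
       whether Apple has tuned it for x86 or for ARM under the hood. */
    #include <Accelerate/Accelerate.h>
    #include <stdio.h>

    int main(void) {
        float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
        float c[4];

        /* c[i] = a[i] + b[i]; stride 1 on all arrays, 4 elements. */
        vDSP_vadd(a, 1, b, 1, c, 1, 4);

        printf("%.1f %.1f %.1f %.1f\n", c[0], c[1], c[2], c[3]);
        return 0;
    }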
You still need to recompile and relink. And it's not that simple; Apple's implementation of LAPACK, for example, is well out of date - it dates back to 2009.
Accelerate is barely used on the Mac for scientific stuff in my experience. People tend to use Intel MKL - see, e.g., the Anaconda Python distribution - all of the NumPy/SciPy libs are linked against MKL.
I think it's one of the reasons Apple decided to make such a great leap with Catalina. It's a testing ground for what will happen when we switch to ARM. It's also a clear message to developers to recompile and test their builds against each new version of Xcode and macOS even if they don't plan any new release. The great pain for the users is often the choice between security updates and using their legacy software that worked perfectly so far.
First, I don't think that fast vector operations are a driving force for much of the Mac market these days. A lot of people who are sensitive to these things are already migrating away because of other wedge issues. I do care, enough to demand Intel's MKL over BLAS, but the optimized vector code issue still doesn't worry me, because my local workloads are lightweight and anything where it really matters is already being pushed out to a server farm somewhere. I've actually been trying to convince my own employer to start letting developers have Linux workstations instead of Macs for a host of other reasons. Notably, I'm just getting tired of having to deal with all the little subtle differences in behavior between Docker's Linux distribution and its Mac distribution. And, as an extension of that, I'd be much more worried about not being able to use the same Docker images in production and development than I am about a minor little thing like how well the vector instructions are being used.
Second, Apple has plenty of resources to handle doing those optimizations themselves. They did it before, with AltiVec, and, while I realize that team was disbanded a very long time ago, I expect the existence of iOS as a gaming platform means that an equivalent team either already exists, or could be ramped up quickly. And I presume that covers the most important factors for what would be noticeable to desktop users, such as Quartz.
People keep forgetting that OpenJDK is only the reference implementation, and between open source, research and commercial JVMs there are around 10 of them, with support from tiny microcontrollers all the way up to exascale HPC CPUs.
...and, if I remember correctly, ARM JVMs were generally slow, and required paying per every user you distributed to.
I don't know of any fast BLAS/LAPACK implementation for ARM (but I might be wrong).
So, for something that works well on x86, and is available for free (in the beer and speech sense) now requires payment, if available at all if I want to support macOS? I guess I'll skip.
OpenJDK has ARM support, including vector instructions support.
As for the other JVMs, or ART coffee flavour for that matter, they are also quite good, otherwise they would have been long out of business.
And I really don't understand why the focus with BLAS/LAPACK, if you want that kind of work make a Linux OEM happy, Apple platforms never cared for HPC work.
Apple has been pushing developers to using frameworks rather than hand-optimized vector code since before the switch to Intel, however, and that’s good since AVX is a moving target, too. For the libraries I’ve used, the combination of phones and ARM servers means a lot of them already have Neon support, often very competitive.
For the last decade, too, I’d expect some fraction of the heaviest code to have moved to the GPU.
So, I work in a niche domain - scientific software.
For one thing, you can already get scientific libraries on Linux which run on ARM. That's not too much of an issue. BLAS is an API of which there are many implementations.
The issue is that
(a) it requires everyone to recompile everything
(b) projects which are 'legacy' and are no longer developed just won't ever switch, so that software won't be runnable. If Apple does a Rosetta equivalent, they'll run slowly, but if that project ends (like Rosetta did), that software will just stop working. This is pretty much the same problem as when Apple killed 32-bit x86 support - there are many apps that just no longer work.
Constraining ourselves just to the JVM as VM example, there are implementations for almost any CPU out there, including microcontrollers (e.g. MicroEJ).
JVM bytecode is already code on ARM because of Android, sure it's not OpenJDK and maybe not even a VM, but there should be more than enough experience to draw on.
> And the iPhone happened. Given Apple an almost endless supply of money and a huge volume. Over many years, Apple built up a leading chip-design team.
I'm not sure if this is the case, but my read on it was always that Apple bought PA Semi to bootstrap its chip design efforts.
The book "The Race For A New Game Machine: Creating the Chips Inside the XBox and the Playstation 3" by David Shippy has some commentary on Apple with their relationship with IBM and the Cell. It's an interesting book and gives some reasons for Apple to ditch PowerPC.
There was less software running on PowerPC than on x86 back then, and there's still less software running on ARM than on x86, especially key professional applications (above all the Adobe suite, AutoCAD, Blender, Ableton, etc.).
ARM chips might even be faster for Apple, but macOS unfortunately isn't as castrated as iOS, and power users will refuse to give up that (small) liberty available on the desktop system but not on the mobile one.
An OS where there's only one way to install software (the App Store) is a huge limitation, and even Microsoft learned that mistake with Windows 10 S, offering customers the option to install the standard version.
Apple has already done this a few times before. I'm sure internally they've decided whether they need buy-in from a critical-mass of apps like Word and Photoshop (or maybe they feel they don't anymore). I'm pretty sure the Carbon API was a concession made because Adobe wasn't going to port Photoshop (they still took 10 years to do it). Apple will reach out to those companies individually and coordinate something if they feel it's needed. They often demo these things live at the announcements.
At least this time Adobe has already ported the "core" of Photoshop for iPad Pros last year. Microsoft had Word running on Windows 10 on Arm (I think they merged the mac/windows codebases a bunch of years ago?)
I think the biggest question is what will happen with OpenGL because so far I don't see pro apps adopting Metal.
I have also seen the point made that many/most modern desktop apps are just Electron apps: not too many people/companies are invested in Cocoa these days. Adobe/Microsoft are the big obvious exceptions. Computers are fast enough to enable what might otherwise be called "bloat."
Whilst most of the discussion has been on Intel vs ARM performance and power consumption as a rationale it's probably worth mentioning two others:
- Complete control of the Silicon. Apple will be able to place its own silicon IP on the new ARM chips. Does this mean the integration of the T2 onto the main SoC? Adding Neural Engine hardware? None of this would be possible with Intel and this would seem to provide interesting opportunities for Apple to differentiate the Mac from the PC market.
- Economics. It seems likely that the ARM chips will be materially cheaper for Apple to buy than comparable Intel chips, although Apple will have fixed design costs to meet that it wouldn't have if it stuck with Intel. Any advantage would grow if Mac volumes increase, which would make it advantageous to try to grow market share. Is this the start of a push to grow Mac volumes significantly?
I think the lower heat density of ARM, and the increasing heat of AMD and Intel is another interesting point.
AMD and Intel are racing to smaller manufacturing processes that inevitably will increase heat density.
Today the most powerful laptops are those huge PC gaming bricks which of course are much more powerful than any MBP. This is only going to get worse as heat density increases, at least for demanding applications (gaming, 8k video editing, vfx, etc).
By moving to ARM, Apple will be able to offer much more performant laptops in a much smaller form factor which will only differentiate Macs even more from the PC world. At least in theory.
If this works I wouldn't be surprised if PC laptops moved to ARM too a couple of years later.
> If this works I wouldn't be surprised if PC laptops moved to ARM too a couple of years later
Hasn't PC world ALREADY started the transition to ARM? Snapdragon based laptops already started shipping with SD835, Microsoft already has Windows S for such ARM laptops and many OEMs are already making experimental foldable ARM based devices that can take advantage of these small chips. Apple would be just retro-fitting their ARM chips in the shell of Macbooks 2 years too late.
>In the early days of the Apple/Intel partnership, their use represented something of a “pressure valve” on processor limitations that the Power Mac G5 created for Apple’s processor line. It helped solve a plateau in Apple’s laptops, which weren’t able to take advantage of the 64-bit architecture that the PowerPC G5 had promised to consumers.
I doubt Apple will ever forgive Intel for missing the Merom release date...forcing Apple to support 32bit for an extra decade.
I don't think they're exactly comparable. Ditching PPC (and I was working at Apple at this time) was a bold move. The existing Apple loyal user base -- which Jobs wisely knew was irrelevant -- loved having a "different" "Supercomputer" CPU at the heart of their computer instead of the "slow-as-a-snail" Intel. Jobs knew it was better to appeal to the rest of the world than be true to the "true believers" -- who would have been happy with OS9, too. But it was taking a risk:
To the True Believers it didn't matter that by this time ~2003, Intel was fast and much more power efficient. You'd be lucky to get 40 minutes of battery life from a PowerPC based Mac laptop at the time when Intel laptops could run for a few hours.
Today, Apple doesn't have a core group of users who are "proud" of their unique CPUs, and isn't fighting an uphill battle as they were in 2003-2005 or so. However, sometimes they choose a tech for "stubborn" reasons rather than technical ones and it's not clear if the ARM decision is made for the right reasons. For example: I think not choosing NVidia, especially for the Mac Pro, was a big mistake and costs them customers.
>I think not choosing NVidia, especially for the Mac Pro, was a big mistake and costs them customers.
Apple's reluctance to use Nvidia has been a total head scratcher. I owned a 2011 with Nvidia dedicated GPU, but this was the line with known manufacturing defects. I had the mainboard replaced twice because of this issue, but eventually replaced the laptop when the GPU failed again. It's like Apple is holding a grudge.
It's also due to Nvidia's unwillingness to collaborate. AMD allows Apple to maintain their own fork of AMD's drivers for macOS. From what I've heard, AMD also keeps a handful of engineers on premises at Apple campus to assist with work on this fork.
Presumably, Apple wants the same from Nvidia, but Nvidia is notoriously secretive and protective and wants exclusive control over drivers for its hardware.
It's easy to pin this on Apple, but Nvidia's lack of openness shows in FOSS too — where AMD has open sourced the Linux version of their GPU drivers, making AMD GPUs work great out of the box with Linux and allowing for the drivers to follow along with the latest in desktop Linux developments (Wayland, etc), Nvidia has stubbornly insisted on keeping their drivers closed, making for a frustrating install experience, and has actively impeded the development and adoption of Wayland.
The true-believers didn't matter because they were a cult.
I have a family member in the Cult of Apple. Up to the day of the announcement, you would hear him talk about how amazing PowerPC was, RISC vs CISC, etc.
From the day Apple announced the move to Intel onward, he did a 180, talking about how smart Apple was to switch, how a modern CISC was really just a frontend to a RISC processor anyway, etc.
> a bold move. The existing Apple loyal user base -- which Jobs wisely knew was irrelevant -- loved having a "different" "Supercomputer" CPU at the heart of their computer instead of the "slow-as-a-snail" Intel. Jobs knew it was better to appeal to the rest of the world than be true to the "true believers"
... there were no 'true believers' in PPC only 'true believers' in Apple. They argued "slow-as-a-snail" Intel because that's what Jobs/Apple had been arguing for a half a decade. There was little risk in that appeal; once Jobs said switch there were no holdouts for PPC chips or threatening to jumpship. There was more risk in showing Bill Gates at Macworld than moving to Intel and the crowd merely booed then hopped on board.
And in general, computing has been hidden under one or two more layers. In the early 90s, architecture/OS made a difference (strengths, platforms, available software). Nowadays everything is so powerful and so similar, and so much is cross-platform.
> "if you're actually using the PowerBook, a charge won't last nearly that long. Apple claims that the battery life is 3 hours and 45 minutes for a combination of wireless Web browsing and editing a text document, but only 2 hours and 15 minutes for DVD playback."
Also DVD decoding was an edge case — spinning up an extra drive and expensive decoding until the hardware, drivers, and OS all supported direct hardware decoding — which was relevant to people on planes but almost nowhere else in normal life.
> To the True Believers it didn't matter that by this time ~2003, Intel was fast and much more power efficient. You'd be lucky to get 40 minutes of battery life from a PowerPC based Mac laptop at the time when Intel laptops could run for a few hours.
You’re calling people True Believers and then just making things up like a period flame-warrior. I supported both at the time and there really wasn’t a significant difference in battery life – both could last around 6 hours in light usage, especially if you disabled Flash, or 3-4 hours for things like developers or scientists.
> "if you're actually using the PowerBook, a charge won't last nearly that long. Apple claims that the battery life is 3 hours and 45 minutes for a combination of wireless Web browsing and editing a text document, but only 2 hours and 15 minutes for DVD playback."
This is actually a quote from MacWorld, which was always charitable to the platform.
In actual use as a developer doing compiles, I often got less than an hour. I was working at Apple during this time. I know.
So you’re moving from your previous statement of “you’d be lucky to get 40 minutes” to “a few hours”?
Again, I heavily used both supporting a number of daily users. I’m not saying that the situation was anywhere near acceptable by modern standards but there just wasn’t such a huge difference between platforms: nobody had hardware which would run 100% CPU for a full day but light use (web development, system administration) would get you at least half a day. The one exception to that were the PC laptops which had multiple batteries but that’s because they had 2-3x battery capacity rather than a huge disparity in processor efficiency.
Not a fan of the rumored platform switch, but I tend to think the vast majority of Apple's Mac users, who aren't power users/techies, care more about the Apple hardware quality/design and OS than the CPU platform that Macs run on.
I think you're probably right, but there is a mix of both. I'm actually super interested in the platform switch. I can't wait to see what Apple does with arm, and how the differences between arm and Intel change things on laptops.
Apparently Windows supports ARM, so in theory Apple will continue to support Bootcamp.
> (notably, one thing Apple does not need to give up is Windows support: Windows has run on ARM for the last decade, and I expect Boot Camp to continue, and for virtualization offerings to be available as well; whether this will be as useful as Intel-based virtualization remains to be seen).
Windows may support ARM. Yet the reason most people want Windows is compatibility. And if Windows ARM doesn't run most of their software then it's a step backward for them.
It might give a small boost to Windows-on-ARM that Microsoft has been trying for over a decade. Porting a typical windows app might be easier than porting to Mac/Linux because you still have DirectX and all the Windows libraries.
There's also x86 emulation on ARM. It's slow, but it might be enough to run that 20 year old business app.
Windows on ARM seems to be a bit faster than that at running x86. An older video showed pretty good performance: https://www.youtube.com/watch?v=DRBMBkL7SCM . It uses an abstraction layer to convert system library calls into calls to the native ARM system libraries, so you don't have to emulate the x86 versions of the system libraries.
This is the same concept box86 implements on Linux: https://github.com/ptitSeb/box86 . It's good enough to run lower-end Linux games on a Raspberry Pi 4.
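As a toy illustration of that abstraction-layer idea (this is not actual box86 or Windows-on-ARM code, just a sketch of the general mechanism): when the emulated program calls a well-known library function, the translator looks it up by name and forwards it to the host's native implementation instead of emulating the x86 build of that library:

    /* Toy "thunking" dispatcher: forward recognized guest library calls to
       native host functions rather than emulating the guest's library code. */
    #include <stdio.h>
    #include <string.h>
    #include <math.h>

    typedef double (*host_fn)(double);

    struct thunk { const char *guest_symbol; host_fn native; };

    static const struct thunk thunks[] = {
        { "sqrt", sqrt },   /* guest's libm sqrt -> host's native sqrt */
        { "fabs", fabs },
    };

    /* Stand-in for "the emulated program just called <symbol> with arg". */
    static double dispatch(const char *symbol, double arg) {
        for (size_t i = 0; i < sizeof(thunks) / sizeof(thunks[0]); i++) {
            if (strcmp(thunks[i].guest_symbol, symbol) == 0)
                return thunks[i].native(arg);   /* runs at native speed */
        }
        printf("no native thunk for %s; fall back to instruction emulation\n", symbol);
        return 0.0;
    }

    int main(void) {
        printf("sqrt(2) via thunk = %f\n", dispatch("sqrt", 2.0));
        printf("fabs(-3) via thunk = %f\n", dispatch("fabs", -3.0));
        return 0;
    }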
Wasn't Intel saber-rattling about patent lawsuits when Microsoft announced that Windows for ARM would run some x86 apps via emulation? [1]
While I don't know what became of that, I can see Microsoft working out a deal with Intel because Windows is still huge on x86 and wasn't going anywhere.
On the other hand, if Apple is planning to completely drop Intel in favor of ARM and wants to implement x86 emulation, I can't see Intel letting OSX ARM emulate x86 without some form of resistance.
But what IP would be violated by x86 emulation? It seems Intel only threatened chipmakers who tried to add x86-emulation-acceleration instructions to their silicon. The x86 ISA was largely complete 17 years ago, which means a lot of the patents have expired by now.
I'm not convinced Apple will abandon x86 entirely. A lot of high-end MacBook Pros are bought by software developers looking for Docker support, and that gets messier when you're developing for multiple architectures. (Then again, maybe it's a good thing if you like AWS Graviton.) The product that would benefit the most from ARM is the MacBook Air, so it's possible they just do that.
The rest of the PC world includes a lot of desktops, and the cost/benefit analysis there is very different - power consumption matters little, and games demand top-of-the-line performance. So I don't think it'll move to ARM wholesale anytime soon.
Apple is unlikely to waste engineering resources and product resources on something they don't care about. Hackintoshes are largely an enthusiast phenomenon that doesn't overlap much with their core market of people willing to spend a premium on hardware that "just works" and looks nice. For the last 10 years Apple hasn't lifted a finger to stop them, so why would they start now?
The danger is not so much T2 chips or the like, because that can easily be defeated in software, but locking down peripheral support would be. For instance if they only support their own custom graphics hardware that would be a problem. This is what Apple tends to do so it's the most likely scenario.
They did waste legal resources going after commercial Hackintosh clones, though, and they might want to prevent something similar in the future. And once they have their own unique CPUs, adding a CPU ID check is trivial.
There's plenty of prior art beyond Rosetta to look at. MS has already done this for Windows on ARM, and simultaneously across ABIs via WSL. Linux offers QEMU-based "on the fly" emulation of different instruction sets based on execution-time examination of the ELF binaries. x86 BSDs have long offered Linux emulation. Even ChromeOS offers lightweight containers for Android and Linux apps.
Apple is actually pretty far behind the curve here, at least in terms of end-user-accessible features. Presumably that means they wouldn't have to innovate too much to get x86/ARM translation working, even for binaries that couldn't be readily recompiled to support the new chips directly.
Well, the GP was talking about getting Wine to run. Your examples cover either instruction set emulation or ABI emulation, but not both. In order to get Wine to do something useful on an ARM Mac, it would need to do both while somehow being optimized to not suffer too much performance loss, and without suffering too much compatibility loss.
It turns out such a project actually exists![0]
But it seems to be in an early stage, and relies on infrastructure not exactly favoured in MacOS (QEMU, deprecated OpenGL). Apple could work to port it and polish it so it works with most apps, but why on earth would they invest so much in Windows compatibility?
IMHO, Apple has two reasonable choices here: Ignore Windows compatibility from now on, or do just enough so that Windows on ARM boots, and let Microsoft deal with the supporting x86 Windows headache and blame.
With all of the security issues surrounding Intel CPUs in the last couple of years, not having an Intel processor will be a real advantage for Apple. And competition is a good thing.
That chart doesn't include Apple's implementations, but they were some of the relatively few non-Intel CPUs to be affected by Meltdown as well as Spectre.
Those are kind of fundamental to multicore / multithreaded CPUs, not just Intel. If I have many tasks running on the same CPU, tasks can interact in complex ways. That can leak traces of information, and a clever enough person can pry that tincan open.
Not much one can do about it, other than running untrusted tasks in a sandbox with very well-controlled performance. That means slow performance.
I ditched my Windows PC when Apple moved to Intel. I will ditch my MacBook when it uses ARM.
Not dogma, just practical. I'm a *nix SW dev and require that the 95% of code that works on AMD64 runs on my machine. Odd to see MS now supporting more *nix. What we (many SW devs) need is *nix + AMD64 for the foreseeable future.
Furthermore, I do not trust Apple. The advent of the iPhone's software lock-in ecosystem shows where they want to take Macs, and that is just a non-starter for me.
I don't think that's quite a given. I don't think Apple has changed as much as you're implying. When OSX was first released, even though it was Unix-based, most people were pretty confident it wouldn't ship with a Terminal. Apple has always been very opinionated and strongly biased towards a user-facing experience. "Techies" have always been a bit skeptical of what Apple may do and should continue to be.
Linux already has a very healthy ecosystem on ARM with the Raspberry Pi and others. Heck, it had a healthy ecosystem back in the day on PPC with things like Yellow Dog Linux. I don't think an ARM transition will change this part as much as people think. You've always needed to recompile for macOS.
Never thought that I would see the day that Intel would lose its dominance. Every time they came out with a new processor back in the 90s was like a new iPhone.
P.A. Semi built a powerful and power-efficient Power ISA processor which solved all the problems Apple had with the Power architecture at the time.
What did Jobs do? He bought the company and closed it immediately so that nobody noticed that the switch to Intel was not only completely unnecessary but also a big mistake.
You're being downvoted hard which I feel is unfair.
The first part of your statement is true: they built the PA6T and were acquired by Apple. The second is your opinion. Mine is that Jobs wanted the team and acquired P.A. Semi to get the expertise, not the chip.
And who knows, maybe Jobs didn’t set them on anything. Maybe he bought PA intending to use their existing bus technology but someone at PA pulled him aside and said “Hey, we don’t think you should waste your time with this. How about we design [what is now their ARM line]?”
A single chip oriented at embedded systems can’t be said to have “solved all the problems Apple had” — and after years of falling behind, they really needed to get past the performance issue & its ensuing heat/reliability/cost problems. Customer loyalty only stretches so far and only the Intel option reliably closed the gap across product lines.
Putting the team to work on the future made sense: they’re incredibly profitable, the software has matured in key ways making it easier to port, and they’re moving after years of hitting high targets annually.
I'm sympathetic to this view but PA6T wasn't going to solve Apple's problems: it was clearly better than the G4, but Apple needed G5-level performance in their next generation laptops as table stakes, and PA6T just doesn't get there (see the AmigaOne X1000 as an example). It also was not at all clear at the time how scalable the microarch was, and it was coming from a company with even fewer (albeit some brilliant, as Apple has proven) engineering resources, so it would have been a big bet that Apple did not want to lose. We'll never know the answer, of course.
I think AMD may be on the verge of fixing a lot of the problems that Intel has run into with its inability to move forward quickly - the 10nm switch is potentially devastating for Intel (Intel's slow pace of advancement is kind of what the article eventually gets to). If Apple said they were going to focus on AMD chips at this point, the market would be excited. Are those Apple ARM chips going to be able to really handle the CPU load and scale over time the way the large-market Intel and AMD design teams scale their investments over billions of devices? I'd just be afraid Apple isn't quite big enough. It's very exciting when a change like this comes along in any case. Will they run x86 'legacy' Mac programs at a reasonable speed?
At the same time in the mid 80s Acorn was an even smaller company that wanted to make its own CPU, and managed it, and by 1992 had a SoC version with GPU and MMU built in (ARM250)
>Gassée is certainly correct that Aquarius likely played a historical precursor to Apple’s current processor ambitions, but it likely also played an indirect role in its first major processor shift—that from Motorola’s 68000 series of processors used in the Apple Macintosh of the time, to the PowerPC, which eventually took off in a big way in the 1990s.
Hey, Apple's first major processor shift was from 6502 to 68000!
There's no way to easily maintain both an ARM ecosystem and an x86 one, but it sure would be nice if we got to choose our hardware more in our Macs. They could solve a lot of people's issues by letting people customize basics like their laptop ports, MagSafe or not, keyboard / touchpad or not, and T2 chip or not.
They would have to open the OS up a lot to allow choice of CPU architecture, which they will never do.
For the end user, the migration to the new platform will be simple, but confusing for the average Mac user. For developers, the migration will require much more work (testing and coding for both platforms) for the foreseeable future. Hopefully, the development tools will ease the pain somewhat.
What does this mean for current x86 needs? Will Apple just "bridge" it for a while like their transition to Intel? Do they have a shim or some other way to handle it?
Lack of backwards compatibility every time Apple changed their OS or processor pretty much ruined my life back when I was trying to write games on the Mac, because they coincided with downturns like the dot bomb and housing bubble popping (in that case just after iOS arrived). I was so beat down trying to survive at life that rewriting everything I had just written the last year for the new hotness became too much of a burden. I worked a bunch of dead end jobs instead and wasted whatever potential I might have had. Now midlife has hit and I've generally let all that go, but it still bothers me thinking about what might have been.
That said, Intel has sat on their hands for 15 (I would argue 20) years, and so it's unsurprising that Apple is ditching them. I remember seeing 3 GHz processors sometime around 2003-2004. Very little has changed since then. We have faster memory busses now but we've generally lost almost two decades of Moore's law. Before that, processors got twice as fast every 1.5 years, so 100 times faster every decade, which would be performance equivalent to a 30 THz processor today.
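For what it's worth, here's the back-of-the-envelope version of that 30 THz figure in a few lines of Python, assuming roughly 20 years of doubling every 1.5 years and (crudely) using clock speed as a stand-in for performance:

    # Rough Moore's-law extrapolation, illustrative only.
    # Assumes ~3 GHz around the early 2000s and a performance doubling
    # every 1.5 years -- both approximate figures from the paragraph above.
    base_ghz = 3.0
    years = 20
    doubling_period = 1.5

    factor = 2 ** (years / doubling_period)    # ~10,300x
    equivalent_thz = base_ghz * factor / 1000  # ~31 THz

    print(round(factor), round(equivalent_thz))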
Note that where progress HAS happened is video cards (GPUs). So I'm somewhat optimistic that if Apple disrupts the CPU industry, we might see true general-purpose computation speed up rather quickly and break the 4 core, 1 memory bus barrier. I think low hanging fruit here would be 16 to 256 cores arranged in a grid, with the square root of that number of memory busses on an edge. With today's tech, we could have 1024 DEC Alpha cores with 32 memory busses for not much more than we're paying for an Intel i9 with 2 billion transistors (the Alpha had 2 million). Yes, I know it's not an exact comparison, but I have a computer engineering degree so this isn't the time to be pedantic.
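As a rough sketch of that transistor budget (using only the approximate 2 billion / 2 million figures above, so treat it as order-of-magnitude arithmetic):

    # Transistor-budget arithmetic behind the grid-of-simple-cores idea.
    i9_transistors = 2_000_000_000     # ~2 billion in a modern desktop CPU
    alpha_transistors = 2_000_000      # ~2 million in an early DEC Alpha

    simple_cores = i9_transistors // alpha_transistors  # ~1000 cores
    cores = 1024                       # round up to a power of two, as above
    buses = int(cores ** 0.5)          # 32 memory buses along one grid edge

    print(simple_cores, cores, buses)  # 1000 1024 32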
General purpose computing could also disrupt the GPU and AI industries, because we could jump ship from the ever-narrowing niches of rasterization and neural nets, and move on to broader experiments in things like ray tracing and genetic algorithms. I had originally wanted to do that with FPGAs, but I've been burned out so long trying to keep up with the shortening attention span of tech that I had to let it go.
Hard to say if any of this will happen, but I just wanted to shed light on the kind of innovations we've missed out on during two lost decades in tech. This is the tip of the iceberg. An explanation for this is that customers want cheap eye candy, and prices have certainly fallen on track with Moore's law. But I'd vote to finally see better performance again. I'd also like to see the emergent effects of better processors, such as more use of parallelized higher-order functions and data-driven/functional/declarative programming using something like the Actor model, piping data around with simple tools that do one thing well, borrowing techniques from UNIX and Clojure/Erlang/Go/MATLAB/etc.
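To make the "parallelized higher-order functions" bit concrete, here's a minimal Python sketch using multiprocessing.Pool - the same idea shows up as pmap in Clojure, goroutines plus channels in Go, parfor in MATLAB, and so on:

    # Minimal parallelized higher-order function: map a pure, CPU-bound
    # function across a list of inputs using one worker per core.
    from multiprocessing import Pool

    def work(n):
        # stand-in for any CPU-bound pure function
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [10_000, 20_000, 30_000, 40_000]
        with Pool() as pool:                  # defaults to one worker per core
            results = pool.map(work, inputs)  # parallel map
        print(results)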
All of that stuff can be good, but has tradeoffs. Longer pipelines result in worse branching performance, caching interferes with write-heavy code that's mainly about moving data (like for games), and so on. I feel that putting extra transistors towards large numbers of cores with short 4 stage pipelines (like in early PowerPC) would have been better.
This is one of the more concise benchmark comparisons, in this case pitting a 3.6 GHz i9 against a 1.4 GHz Pentium 3 (a line first released in 1999):
That's 8 cores vs 1, at 2.57 times the clock speed, so per-core, per-clock performance has increased by roughly:
(18892/299) * (1/8) * (1.4/3.6) = 3.07
A 3-fold increase in 20 years is admirable, but it's roughly 1/3000th of what would have been predicted if performance had followed Moore's Law. To me, this indicates that per-core performance stopped really increasing sometime around 2005 at the latest. That's why fabs moved towards lower-cost mobile and embedded chips.
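If anyone wants to sanity-check that, here's the same arithmetic in Python (the scores and clock speeds are just the numbers quoted above, so treat the result as approximate):

    # Per-core, per-clock comparison using the figures quoted above.
    i9_score, i9_cores, i9_ghz = 18892, 8, 3.6
    p3_score, p3_cores, p3_ghz = 299, 1, 1.4

    per_core_gain = (i9_score / p3_score) * (p3_cores / i9_cores) * (p3_ghz / i9_ghz)

    # What doubling every 1.5 years would have predicted over ~20 years.
    expected = 2 ** (20 / 1.5)

    print(round(per_core_gain, 2))          # 3.07
    print(round(expected))                  # ~10,300
    print(round(expected / per_core_gain))  # ~3,360, i.e. on the order of 1/3000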
That's true, and it'll likely prove to be a good business decision, although not necessarily one that benefits the end user in the long run. I'm also not suggesting they use POWER for mobile applications, nor did I suggest they use Intel x86 for that.
The way I'm seeing things, RISC-V will be in the same business position ARM is in now in about 5 years. Apple might switch to that, and if so I'd also expect no further migration after that (as it's an open architecture).
Does Apple actually invest in laptop/desktop development? They are usually a couple of years behind the competition (DDR3, Skylake chips), while their ARM chips are heavily developed.
I think they decided a long time ago that ARM is good enough, and are waiting for the train to stop before changing the engine. It's not about Intel's quality, but about saving money and gaining independence. AMD is not even being considered as an alternative...
I do think they invest significantly in laptop/desktop development. If you look at the teardowns at iFixit, those machines are pieces of engineering art. No other manufacturer offers such nicely designed devices. Just think of the new Mac Pro having no internal cabling - everything, including the extra power supply for the graphics cards, is on the motherboard.
But of course, one can (and I do) disagree with some of their design choices, like non-replaceable SSDs and the use of far too much glue in the designs.
LPDDR3 was used for power efficiency. Apple generally only updates to newer-generation CPUs when (a) Intel can supply enough of them and (b) there's a significant performance or power-efficiency improvement. As far as I can tell, they make those decisions on engineering grounds.
People seem to be buying them anyway. Form over function could also be playing a role here: given Apple's obsession with making their notebooks thinner, ditching Intel will help a lot with that cause, since they can at least keep the same battery life with a smaller battery.
I think the article also paints Apple as some sort of especially demanding customer. In some ways, sure. Apple likes to move things forward. However, it's not like MacBooks are that different from PC notebooks. The difference is that Apple has options. They can move to another architecture. Windows manufacturers don't really have that. Sure, Windows on ARM has been a thing, but Microsoft isn't really committed to it. Plus, Windows devs aren't as compliant when it comes to moving architectures, so a lot of programs would be running slowly under CPU emulation.
The big issue is that Intel has been stuck for so long. Yes, they've shipped some 10nm 15-watt parts and even made a bespoke 28-watt part for Apple. It's not enough. I'd argue that PC sales are slow because Intel hasn't compellingly upgraded their processors in a long time. It used to be that every 18 months, we'd see a processor that was a huge upgrade. Now it's 5 years to get that upgrade.
There's a trade-off between custom products and economies of scale. With the iPhone using so many processors and TSMC doing so well with its fab, Apple now kinda doesn't have to choose. Intel has been charging a huge premium for its processors because people were locked into the x86 and it takes a while for new competition to happen. Their fabs have fallen behind. It looked like they might be able to do 10nm and move forward from that, but that doesn't seem to be working out too well for them.
The transition from PowerPC to Intel was about IBM and Motorola not being able to deliver parts. They were falling behind on fabs, they weren't making the parts needed for Apple's product line, and it was leaving Apple in a position where they simply had inferior machines. The transition from Intel to ARM is about Intel not being able to deliver parts. It wasn't simply a short time when they couldn't deliver enhancements, but a decently long trend on both accounts. Apple knows it can deliver the parts it wants with its own processors at this point. The iPhone business is large enough to ensure that and they can make laptop parts that really fit what they're trying to market. Intel got Apple's business because they produced superior parts at a lower price. They're losing Apple's business for the same reason.