
You might have to bite the bullet and get a mac - I don't see much promise from others in this space.

Intel's CEO change may save them, but they're definitely facing an existential threat they've failed to adapt to for years.

Amazon will move to ARM on servers and probably do some decent design there, but that won't really reach the consumer hardware market (though I suppose they could do something interesting in this space if they wanted to).

Windows faces issues with third-party integration, OEMs, chip manufacturers, and coordinating all of that. Nadella is smart and is mostly moving the company's strategy to Azure services and O365 - I think Windows and the consumer market matter less.

Apple owns their entire stack and is well positioned to keep widening the delta between their design/integration and everyone else, who will continue to flounder.

AMD isn't that much better positioned than Intel and doesn't have a solution for the coordination problem either. Nvidia may buy ARM, but that's only one piece of getting things to work well.

I'm long on Apple here, short on Intel and AMD.

We'll see what happens.




I just got my M1 Air. This thing is unbelievably fluid and responsive. It doesn't matter what I do in the background. I can simultaneously run VMs, multiple emulators, compile code, and the UI is always a fluid 60 fps. Apps always open instantly. Webpages always render in a literal blink of an eye. This thing feels like magic. Nothing I do can make this computer skip a beat. Dropped frames are a thing of the past. The user interface of every Intel Mac I've used (yes, even the Mac Pro) feels slow and clunky in comparison.

Oh, and the chassis of this fanless system literally remains cool to the touch while doing all this.

The improvements in raw compute power alone do not account for the incredible fluidity of this thing. macOS on the M1 now feels every bit as snappy and responsive as iPadOS on the iPad. I've never used a PC (or Mac) that has ever felt anywhere near this responsive. I can only chalk that up to software and hardware integration.

Unless Apple's competitors can integrate software and hardware to the same degree, I don't know how they'll get the same fluidity we see out of the M1. Microsoft really oughta take a look at developing their own PC CPUs, because they're probably the only player in the Windows space suited to integrating software and hardware to such a degree. Indeed, Microsoft is rumoured to be developing their own ARM-based CPUs for the Surface, so it just might happen [0].

[0] https://www.theverge.com/2020/12/18/22189450/microsoft-arm-p...


So much this. M1 mini here. I am absolutely chuffed with it. It’s insanely good.

I’m going to be the first person in the queue to grab their iMac offering.


Are you saying you don't see much promise for AMD, Intel and Nvidia in the GPU space or with computers in general? I had a hard time following your logic.

Apple may own their stack, but there are a TON of use cases where that stack doesn't even form a blip on the radar of the people who purchase computer gear.


My prediction is x86 is dead.

External GPUs will remain and I think Nvidia has an advantage in that niche currently.

The reason stack ownership matters is because it allows tight integration which leads to better chip design (and better performance/efficiency).

Windows has run on ARM for a while for example, but it sucks. The reason it sucks is complicated but largely has to do with bad incentives and coordination problems between multiple groups. Apple doesn't have this problem.

As Apple's RISC design performance improvements (paired with extremely low power requirements) become more and more obvious, x86 manufacturers will be left unable to compete. Cloud providers will move to ARM chipsets of their own design (see: https://aws.amazon.com/ec2/graviton/) and AMD/Intel will be on the path to extinction.

I'd argue Apple's M1 machines are already at this level and they're version 0 (if you haven't played with one you should).

This is an e-risk for Intel and AMD; they should have been preparing for it for the last decade. Instead, Intel doubled down on their old designs to maximize profit in the short term at the cost of extinction in the long term.

It's not an argument about individual consumer choice (though that will shift too), the entire market will move.


> My prediction is x86 is dead.

I don't see that. At least in corporate environments with a bazillion legacy apps, x86 will be king for the foreseeable future.

And frankly I don't really see the pull of ARM/M1 anyway. I mean, I can get a laptop with extremely competitive Ryzen for way cheaper than MacBook with M1. The only big advantage I see is the battery, but that's not very relevant for many use cases - most people buying laptops don't actually spend that much time on the go needing battery power. It's also questionable how transferable this is to the rest of the market without Apple's tight vertical integration.

> I'd argue Apple's M1 machines are already at this level and they're version 0

Where is this myth coming from? Apple's chips are now on version 15 or so.


This is the first release targeting macOS. I'm not pretending their chips for phones don't exist - but the M1 is still version 0 for Macs.

> "And frankly I don't really see the pull of ARM/M1 anyway. I mean, I can get a laptop with extremely competitive Ryzen for way cheaper than MacBook with M1..."

Respectfully, I strongly disagree with this - to me it's equivalent to someone defending the keyboards on a Palm Treo. This is a major shift in capability, and we're just seeing the start of that curve as x86 nears the end.

“No wireless. Less space than a Nomad. Lame.”


> but the M1 is still version 0 for Macs.

Fair enough; it's just important to keep in mind that the M1 is the result of a decade(s)-long progressive enhancement. The M2 is going to be another incremental step in the series.

> to me it's equivalent to someone defending the keyboards on a Palm Treo. This is a major shift in capability ...

That's a completely unjustified comparison. iPhone brought a new way to interact with your phone. M1 brings ... better performance per watt? (something which is happening every year anyway)

What new capabilities does M1 bring? I'm trying to see them, but don't ...


> "That's a completely unjustified comparison. iPhone brought a new way to interact with your phone."

People don't really remember, but a lot of people were really dismissive of the iPhone (and iPod) on launch. For the iPhone, the complaints were about cost, about lack of hardware keyboard, about fingerprints on the screen. People complained that it was less usable than existing phones for email, etc.

The M1 brings much better performance at much less power.

I think that's a big deal and is a massive lift for what applications can do. I also think x86 cannot compete now and things will only get a lot worse as Apple's chips get even better.


> People don't really remember, but a lot of people were really dismissive of the iPhone (and iPod) on launch.

I do remember that. The iPhone had its growing pains in the first year, and there was fair criticism back then. But it was also clear that the iPhone brought a completely new vision to the concept of a mobile phone.

The M1 brings nice performance at fairly low power, but that's just a quantitative difference. No new vision. Perf/watt improvements have been happening every single year since the first chips were manufactured.

> I also think x86 cannot compete now and things will only get a lot worse as Apple's chips get even better.

Why? Somehow Apple's chips will get better, but competition will stand still? AMD is currently making great progress, and it finally looks like Intel is waking up from its lethargy as well.


>The M1 brings nice performance at fairly low power, but that's just a quantitative difference. No new vision. Perf/watt improvements have been happening every single year since the first chips were manufactured.

I'd say the M1's improvements are a lot more than performance per watt. It has enabled a level of UI fluidity and general "snappiness" that I just haven't seen out of any Mac or PC before. The Mac Pro is clearly faster than any M1 Mac, but browsing the UI on the Mac Pro just feels slow and clunky in comparison to the M1.

I can only chalk that up to optimization between the silicon and the software, and I'm not sure that Apple's competitors will be able to replicate that.


> "Why? Somehow Apple's chips will get better, but competition will stand still?"

Arguably this has been the case for the last ten years (comparing chips on iPhones to others).

I think x86 can't compete - CISC can't compete with RISC because of problems inherent to CISC (https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...)

It won't be for lack of trying - x86 will hold them back.

I suppose in theory they could recognize this e-risk, and throw themselves at coming up with a competitive RISC chip design while also somehow overcoming the integration disadvantages they face.

If they were smart enough to do this, they would have done it already.

I'd bet against them (and I am).


RISC vs CISC is not real. Anyone writing articles about it is uninformed and you should ignore them. (However, it's also not true that all ISAs perform the same. x86-64 actually performs pretty well and has good code density - see Linus's old rants about this.)

ARM64 is a good ISA, but not because it's RISC; some of the good parts actually come from moving away from RISC-ness, like complex address operands.


Very much this. Intel is not that stupid. They went into the lab, built and simulated everything, and found that the penalty of the extra decoding has an upper bound of a few percent perf/watt once you are out of the embedded space.

OTOH, Apple is doing some interesting things optimizing their software stack around the store [without release] reordering that ARM does. These sorts of things are where long-term advantage lies. Nobody is ever ahead in the CPU wars by some insurmountable margin in strict hardware terms.

System performance is what counts. Apple has weakish support for games, for example, so any hardware advantage they have in a vacuum is moot in that domain.

Integrated system performance and total cost of ownership are what matters.


I'm confused - I thought the reason it's hard for Intel to add more decoders is that the x86 ISA doesn't have fixed-length instructions. As a result you can't trivially scale things up.

From that linked article:

--

Why can’t Intel and AMD add more instruction decoders?

This is where we finally see the revenge of RISC, and where the fact that the M1 Firestorm core has an ARM RISC architecture begins to matter.

You see, an x86 instruction can be anywhere from 1–15 bytes long. RISC instructions have fixed length. Every ARM instruction is 4 bytes long. Why is that relevant in this case?

Because splitting up a stream of bytes into instructions to feed into eight different decoders in parallel becomes trivial if every instruction has the same length.

However, on an x86 CPU, the decoders have no clue where the next instruction starts. It has to actually analyze each instruction in order to see how long it is.

The brute force way Intel and AMD deal with this is by simply attempting to decode instructions at every possible starting point. That means x86 chips have to deal with lots of wrong guesses and mistakes which has to be discarded. This creates such a convoluted and complicated decoder stage that it is really hard to add more decoders. But for Apple, it is trivial in comparison to keep adding more.

--

Maybe you and astrange don't consider fixed length instruction guarantees to be necessarily tied to 'RISC' vs. 'CISC', but that's just disputing definitions. It seems to be an important difference that they can't easily address.
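
(To make that concrete, here's a toy sketch in Python of the difference the article is describing - the length_of helper is hypothetical and stands in for x86's length-decoding logic; this is just the shape of the problem, not how real hardware works.)

    # Toy sketch only: with fixed-width instructions the boundaries are
    # known up front, so every decoder can grab its own slice at once.
    def split_fixed(stream, width=4):
        return [stream[i:i + width] for i in range(0, len(stream), width)]

    # With variable-width instructions, each boundary is only known after
    # the previous instruction has been (at least partially) decoded, so
    # the split is sequential unless you guess at every offset and throw
    # away the wrong guesses.
    def split_variable(stream, length_of):
        out, i = [], 0
        while i < len(stream):
            n = length_of(stream, i)  # must inspect bytes to learn the length
            out.append(stream[i:i + n])
            i += n
        return out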


People are rehashing the same myths about ISA written 25 years ago.

Variable-length instructions are not a significant impediment in high-wattage CPUs (>5W?). The first byte of an instruction is enough to indicate how long the instruction is, and hardware can look at the stream in parallel. Minor penalty, with arguably a couple of benefits. The larger issue for CISC is that more instructions access memory in more ways, so decoding requires breaking them down into more RISC-like micro-ops so that the dependencies can be worked out.
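
(Rough sketch of what I mean, with a made-up first-byte-to-length table - real x86 length decoding involves prefixes and is much messier, but the point is that the per-offset work is independent, which is what lets hardware do it in parallel.)

    # Made-up first-byte -> length table, purely illustrative.
    LENGTH_TABLE = {0x90: 1, 0xB8: 5, 0x0F: 2}

    # Guess a length at every byte offset independently - this per-offset
    # work has no dependencies, which is why hardware can do it in parallel.
    def lengths_at_every_offset(stream):
        return [LENGTH_TABLE.get(b, 1) for b in stream]

    # Then stitch the guesses together to recover the real instruction
    # starts; guesses made at non-boundary offsets simply get discarded.
    def real_instruction_starts(stream):
        lengths = lengths_at_every_offset(stream)
        starts, i = [], 0
        while i < len(stream):
            starts.append(i)
            i += lengths[i]
        return starts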

RISC already won where ISA matters -- like AVR and ARM Thumb. You have a handful of them in a typical laptop, plus like a hundred throughout your house and car, with some PIC thrown in for good measure. So it won. CISC is inferior: where ISA matters, it loses. Nobody actually advocates for CISC design because you're going to have to decode it into smaller ops anyway.

Also, variable-length instructions are not really a RISC vs CISC thing so much as a pre- vs post-1980 thing. Memory was so scarce in the '70s that wasting a few bits for simplicity's sake was anathema and would not be allowed.

System performance is a lot more than ISA, as computers have become very complicated with many, many I/Os. Think about why American automakers lost market share at the end of the last century. Was it because their engineering was that bad? Maybe a bit. But really it was total system performance and cost of ownership that they got killed on, not any particular commitment to an inherently inferior technical framework.


I agree that's a real difference and the M1 makes good use of it; it's just that, to me, RISC ("everything MIPS did") vs CISC ("everything x86 did") implies a lot of other stuff that's just coincidence. Specifically, RISC means all of: simple fixed-length instructions, 3-operand instructions (a=b+c, not a+=b), and few address modes. Some of these are the wrong tradeoff when you have the transistor budget of a modern CPU.

x86 has complicated variable-length instructions, but the advantage is that they're compressed - they fit in less memory. I would've expected this to still be a win because cache size is so important, but ARM64 got rid of its compressed encoding and they know better than me, so apparently not. (Variable-length instructions have other problems too, like being a security risk: attackers can jump into the middle of an instruction and create new programs…)

One thing you can do is put a cache after the decoder so you can issue recent instructions again without re-decoding them and let the decoder do something else. That helps with loops at least.
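
(Conceptually it's something like this - caching decoded results so a hot loop only pays the decode cost on the first pass. Made-up capacity, and no real uop cache looks like Python, but that's the idea.)

    from functools import lru_cache

    @lru_cache(maxsize=1024)  # made-up capacity; real uop caches vary
    def decode(instruction_bytes):
        # Pretend this is the expensive decode step.
        return ("uop", instruction_bytes)

    def run_hot_loop(body, iterations):
        # After the first pass every decode() hits the cache, so the
        # "decoder" is free to do something else for the rest of the loop.
        for _ in range(iterations):
            for insn in body:
                decode(insn)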


Remember, the M1 is on the leading-edge 5nm fab process. Ryzen APUs are coming and may be competitive in terms of power consumption when they arrive on 5nm.

Apple software is also important here. They do some things very much right. It will be interesting to run real benchmarks with x64 on the same node.

Having said all that, I love fanless quiet computers. In that segment Apple has been winning all along.


> At least in corporate environment with bazillion legacy apps

They could just run them as virtual desktop apps. Citrix, despite its warts, is quite popular for running old incompatible software in corporate environments.


Ok, I still have questions.

To start... How would a city with tens of thousands of computers transition to ARM in the near future?

The apps that run 911 dispatch systems and critical infrastructure all over the world are all on x86 hardware. Millions if not billions of dollars in investment, training, and configuration. These are bespoke systems. The military-industrial complex basically runs on custom chips and x86. The federal government runs on x86. You think they are just going to say, "Whelp, looks like Apple won, let's quadruple the cost to integrate Apple silicon for our water systems and missile systems! They own the stack!"

Professional-grade engineering apps and manufacturing apps are just going to suddenly be rewritten for Apple hardware because the M2 or M3 is sooooo fast? Price matters!!!! Choice matters!!!

This is solely about consumer choice right now. The cost is prohibitive for most consumers as well, as evidenced by the low market penetration of Apple computers to this day.


Notice how the only counterexamples you came up with are legacy applications. This is the first sign of a declining market. No, Intel will not go out of business tomorrow. But they are still dead.

The growth markets will drive the price of ARM parts down and performance up. Meanwhile x86 will stagnate and become more and more expensive due to declining volumes. Eventually, yes, this will apply enough pressure even on niche applications like engineering apps to port to ARM. The military will likely be the last holdout.


You make bets on where the puck is going, not on where it currently is.

"How would a city with tens of thousands of HDDs transition to SSDs in the near future?"

It happens over time as products move to compete.

Client-side machines matter less; servers will transition to ARM because performance and power are better on RISC. The military-industrial complex relies on government cloud contracts with providers that will probably move to ARM on the server side.

It's not necessarily rewriting for Apple hardware, but people that care about future performance will have to move to similar RISC hardware to remain competitive.


Wow I feel like I'm back in 1995 or something. Stupid Intel doubling down on the Pentium! DEC, Sun, and Motorola RISC will extinct them!


I think we'll see a lot of ARM use cases outside of the Apple stack and x86 is dead (but it will of course take its sweet time getting there). For the longest time everyone believed at a subconscious level that x86 was a prerequisite due to compatibility. Apple provided an existence proof that this is false. There is no longer a real need to hold onto the obsolete x86 design.

The only way for Intel and AMD to thrive in this new world is to throw away their key asset: expertise in x86 arcana. They will not do this (see Innovator's Dilemma for reasons why). As a result they will face a slow decline and eventual death.



