
It doesn't matter very much, in theory. Figuring out the instruction boundaries of x86 instructions in parallel is rather involved, but once code has been seen, the boundaries can be marked with an extra bit in the L1 instruction cache, and in loops you're mostly running out of the decoded-instruction (uop) cache anyway. The stricter memory-ordering model of x86 (TSO) versus ARM's weaker model probably has a lot of implications for the design of the cache hierarchy, but I couldn't speculate in detail.
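The boundary-marking idea above can be sketched in a few lines. This is a toy illustration, not real x86 decoding: it assumes a hypothetical fixed rule where the top two bits of the first byte give the instruction length, just to show the two-phase structure (speculative per-byte length decode, then chaining from a known start to set the boundary bits that a real core would store alongside the L1 I-cache line).

```python
# Toy sketch of instruction-boundary predecode (NOT real x86 encoding).
# Hypothetical simplified ISA: the top two bits of the first byte of an
# instruction encode its total length, 1-4 bytes.

def insn_length(first_byte: int) -> int:
    """Length of the instruction starting at this byte (toy rule)."""
    return (first_byte >> 6) + 1  # 1..4 bytes

def predecode(code: bytes) -> list[bool]:
    # Phase 1 (parallelizable in hardware): tentatively decode a length
    # at EVERY byte offset, since we don't yet know which offsets are
    # real instruction starts.
    lengths = [insn_length(b) for b in code]

    # Phase 2: follow the length chain from the known entry point and
    # mark instruction starts -- the "extra bit" that would be cached
    # per byte in the L1 instruction cache so decode is cheap next time.
    boundary = [False] * len(code)
    pc = 0
    while pc < len(code):
        boundary[pc] = True
        pc += lengths[pc]
    return boundary
```

The point of phase 1 is that the per-byte length guesses are independent and can be computed simultaneously; only the cheap chaining step is sequential, and its result is cached so it's paid roughly once per cache line.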

However, x86 has a ton of cruft in the instruction set that has to be implemented and kept working despite whatever microarchitectural changes happen. You have to worry about how your Spectre mitigations interact with call gates that haven't seen much use since the 286, for instance. That's a lot of extra design work, and even more extra verification work, that has to happen for each new x86 core, and I think that is most of ARM's advantage.



