
> Writing a CPU emulator is, in my opinion, the best way to REALLY understand how a CPU works

Hard disagree.

The best way is to create a CPU from the gate level, like you do on a decent CS course. (I really enjoyed making a cut-down ARM from scratch.)



I think both are useful, but designing a modern CPU from the gate level is out of reach for most folks, and I think there's a big gap between the sorts of CPUs we designed in college and the sort that run real code. I think creating an emulator of a modern CPU is a somewhat more accessible challenge, while still being very educational even if you only get something partially working.
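
Neither comment shows code, but the heart of such an emulator is just an interpreter loop. Here's a minimal sketch in C over a made-up two-instruction ISA (the opcodes and encoding are my own invention, not any real chip's), just to show how approachable the starting point is:

    /* Toy fetch/decode/execute loop over a hypothetical ISA. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT = 0x00, OP_ADDI = 0x01 };   /* invented opcodes */

    int main(void) {
        uint8_t mem[256] = { OP_ADDI, 0, 5, OP_ADDI, 0, 7, OP_HALT };
        uint8_t reg[4] = {0};
        uint8_t pc = 0;

        for (;;) {
            uint8_t op = mem[pc++];            /* fetch */
            switch (op) {                      /* decode */
            case OP_ADDI: {                    /* execute: reg[r] += imm */
                uint8_t r   = mem[pc++];
                uint8_t imm = mem[pc++];
                reg[r] += imm;
                break;
            }
            case OP_HALT:
                printf("r0 = %u\n", reg[0]);   /* prints 12 */
                return 0;
            default:
                fprintf(stderr, "bad opcode %02x\n", op);
                return 1;
            }
        }
    }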


When I was at Caltech, another student in the dorm had been admitted because he'd designed and implemented a CPU using only 7400 TTL.

Woz wasn't the only supersmart young computer guy at the time :-)

(I don't know how capable it was; even a 4-bit CPU would be quite a challenge in TTL.)


I think the key word above was modern. I felt able to design a simple CPU when I finished my Computer Architecture course in university. I think I've forgotten most of it by now ;) There are a few basic concepts to wrap your head around, but once you have them a simple CPU is doable. Doing this with TTL or other off-the-shelf components is mostly minimizing/adapting/optimizing to those components (or using a lot of chips ;) ). I have never looked at discrete-component CPU designs; I imagine ROM and RAM chips play a dominant part (e.g. you don't just build RAM with 74x TTL flip-flops).


He probably used off-the-shelf RAM chips; after all, RAM is not part of the CPU.

In the early 70s, before the internet, even finding the information needed would be a fair amount of work.

I learned how flip-flops, adders, and registers worked in college, and that could be extended to an ALU. But still, that was in college, not high school.

I've read some books on computer history, and they are frustratingly vague about how the machines actually worked. I suspect the authors didn't actually know. Sort of like the books on the history of Apple that gush over Woz's floppy disk interface, but give no details.


Was doing some Googling and came across: https://en.wikipedia.org/wiki/Breakout_(video_game)

I never heard this story...


> and the sort that run real code

And the sort that are commercially viable in today's marketplace. The nature of the code has nothing to do with it. The types of machines we play around with today surpass the machines we used to land men on the moon. What's not "real code" about that?


This is an illusion and a red herring. RTL synthesis is the functional-prototype stage typically reached, and it's generally sufficient for FPGA work. Burning an ASIC as part of an educational consortium run is doable, but uncommon.


Seconded. Even a bare-bones microcoded, pipelined, superscalar, branch-predicting processor with L1 data and instruction caches and a write-back L2 cache controller is nontrivial. Most software engineers have an incomplete grasp of data hazards, cache invalidation, or pipeline stalls.
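
For anyone unfamiliar with the term, a data hazard is when an instruction needs a result that an older instruction still in the pipeline hasn't written back yet. A rough C sketch of the classic RAW (read-after-write) check a pipelined design has to make; the struct fields and function names here are illustrative, not from any particular design:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* one pipeline register's worth of decoded-instruction info */
    struct pipe_reg {
        bool    writes_reg;   /* does this instruction write a register? */
        uint8_t rd;           /* destination register */
        uint8_t rs1, rs2;     /* source registers */
    };

    /* true => the younger (ID-stage) instruction reads a register the older
       (EX-stage) instruction hasn't written back yet: forward or stall */
    static bool raw_hazard(struct pipe_reg ex, struct pipe_reg id) {
        return ex.writes_reg && (ex.rd == id.rs1 || ex.rd == id.rs2);
    }

    int main(void) {
        struct pipe_reg ex = { .writes_reg = true, .rd = 3, .rs1 = 1, .rs2 = 2 };
        struct pipe_reg id = { .writes_reg = true, .rd = 4, .rs1 = 3, .rs2 = 0 };
        printf("hazard: %s\n", raw_hazard(ex, id) ? "yes" : "no");  /* yes */
        return 0;
    }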


IIRC from reading some Intel CPU design history, some of their designers came from a CS/software background. But I agree. Software is naturally very sequential, which is different from digital hardware, which is inherently parallel. A clock can change the state of a million flip-flops all at once; it's a very different way of thinking about computation (though of course at the theoretical level it's all the same), and then there's the physics and EE side of a real-world CPU. Writing software and designing CPUs are just very different disciplines, and the CPU as it appears to the software developer isn't how it appears to the CPU designer.
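
One concrete way to see the difference: an emulator or HDL simulator has to fake "everything updates on the clock edge at once" by evaluating all next-state values from the current state and only then committing them; updating in place, the way sequential software naturally would, gives the wrong answer. A toy sketch of the two-phase idea (my own example) using a 3-bit ring counter:

    #include <stdio.h>

    int main(void) {
        int q[3] = {1, 0, 0};   /* current flip-flop outputs: a 1 circling a ring */
        int d[3];               /* next-state inputs (combinational logic) */

        /* phase 1: evaluate every D input from the *current* Q values only */
        d[0] = q[2];
        d[1] = q[0];
        d[2] = q[1];

        /* phase 2: the "clock edge" -- commit all flip-flops at once.
           Writing q[] directly in phase 1 would let early writes corrupt
           later reads, which is exactly the sequential-software trap. */
        for (int i = 0; i < 3; i++) q[i] = d[i];

        printf("%d %d %d\n", q[0], q[1], q[2]);   /* 0 1 0 -- the 1 has shifted */
        return 0;
    }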


I'm not talking about Intel's engineers; I'm saying very few software engineers today understand how a processor works. I'm sure Intel hires all sorts of engineers for the different aspects of each ecosystem they maintain. Furthermore, very few software engineers have ever even touched a physical server, because they're sequestered away from a significant fraction of the total stack.

Speculative and out-of-order execution requires careful synchronization to preserve dataflow dependencies while still making results appear in program order.

Computer Organization is a good book, should anyone want to dive deeper.


Well, I think you're both right. It's satisfying as heck to sling 74xx chips together and you get a feel for the electrical side of things and internal tradeoffs.

When you get to doing that for a CPU you want to do meaningful work with, you start to lose interest in that detail. Then the complexities of the behavior and the spec become interesting, and the emulator approach is more tractable and can cover more kinds of behavior.


I think trollied is correct actually. I work on a CPU emulator professionally and while it gives you a great understanding of the spec there are lots of details about why the spec is the way it is that are due to how you actually implement the microarchitecture. You only learn that stuff by actually implementing a microarchitecture.

Emulators tend not to model many of the features you find in real chips, e.g. caches, speculative execution, out-of-order execution, branch predictors, pipelining, etc.

This isn't "the electrical side of things". When he said "gate level" he meant RTL (SystemVerilog/VHDL) which is pretty much entirely in the digital domain; you very rarely need to worry about actual electricity.


I write retro console emulators for fun, so agree with you 100% :)


So far on my journey through Nand2Tetris (since I kind of dropped out of my real CS course) I've been working my way up from the gate level, and just finished the VM emulator chapter, which took an eternity. Now on to compilation.


OTOH, are you really going to be implementing memory segmenting in your gate-level CPU? I'd say actually creating a working CPU and _then_ emulating a real CPU (warts and all) are both necessary steps to real understanding.
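
Segmentation is a good example of the "warts". If the emulation target were something like the 8086, real-mode addressing is segment*16 + offset, wrapping at 1 MB when the A20 line is masked -- exactly the kind of detail a classroom gate-level CPU never touches. A small illustrative sketch:

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real-mode address translation: physical = segment*16 + offset,
       wrapping at 1 MB when the A20 line is masked. */
    static uint32_t phys_addr(uint16_t seg, uint16_t off, int a20_enabled) {
        uint32_t addr = ((uint32_t)seg << 4) + off;
        return a20_enabled ? addr : (addr & 0xFFFFF);   /* wrap at 1 MB */
    }

    int main(void) {
        /* the classic HMA case: FFFF:0010 is 0x100000 with A20 on, 0 with it off */
        printf("%05X\n", phys_addr(0xFFFF, 0x0010, 1));  /* 100000 */
        printf("%05X\n", phys_addr(0xFFFF, 0x0010, 0));  /* 00000  */
        return 0;
    }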


This. I mean, why not start with wave theory and material science if you really want a good understanding :)

In my CS course I learned a hell of a lot from creating a 6800 emulator; though it wasn't on the course, building a working 6800 system was. The development involved running an assembler on a commercial *nix system and then typing the hex object code into an EPROM programmer. You get a lot of time to think about where your bugs are when you have to wait for a UV erase cycle...


> why not start with wave theory and material science

You jest, but if I had infinite time...


> OTOH, are you really going to be implementing memory segmenting in your gate-level CPU?

I have, but it was a PDP-8 which I'll be the first to admit is kind of cheating.


I agree.


Reading Petzold’s “Code” comes pretty close, though, and is easier.


CPU was a poor choice of words. ISA would have worked.



