
It seems like you're putting a lot of stuff on silicon that better belongs in a compiler. It isn't clear to me how to make the silicon, say, check for the totality of an arbitrarily-large function in any reasonable manner; silicon prefers rather sharp bounds.

Plus anything you encode on the silicon is basically permanent. (Sure, you can tweak things with microcode, but only so much.)



> It seems like you're putting a lot of stuff on silicon that better belongs in a compiler.

That's debatable. Making the separation between hardware and software here allows future processors to make substantial changes to their internals.

> It isn't clear to me how to make the silicon, say, check for the totality of an arbitrarily-large function in any reasonable manner.

Actually, that part is easy. Essentially, you can define a typed functional language without recursion (or recursive definitions). The only looping that you do allow is the use of eliminators over inductive data types.
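
To make that concrete, here's a minimal sketch in Haskell (purely illustrative: Haskell itself allows general recursion, so the restriction here is by discipline, and the names Nat, natElim, and add are my own). The eliminator is the only looping construct, and since it consumes exactly one constructor per step, every use of it terminates by construction:

    -- An inductive data type: natural numbers.
    data Nat = Zero | Succ Nat

    -- The eliminator (recursor) for Nat, the only looping construct.
    -- Each step peels off exactly one constructor, so evaluation
    -- always terminates; totality holds by construction.
    natElim :: r -> (r -> r) -> Nat -> r
    natElim base _    Zero     = base
    natElim base step (Succ n) = step (natElim base step n)

    -- Addition written with the eliminator instead of general recursion.
    add :: Nat -> Nat -> Nat
    add m = natElim m Succ

Nothing in such a language lets you write an unbounded loop, so there is no termination analysis for either the compiler or the silicon to perform.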

> Plus anything you encode on the silicon is basically permanent.

The only thing that is really permanent is the interface. Future processors can change their internals, but the old programs should always work on new processors.


"Essentially, you can define a typed functional language without recursion (or recursive definitions). The only looping that you do allow is the use of eliminators over inductive data types."

Yes, but I said I don't see how you can check totality on an arbitrarily large function efficiently. If a total function has 65,537 cases to be checked, silicon is not the place to do it. If the silicon isn't checking it, then it's the compiler doing it anyhow.

The thing is, your processor may be arranged differently, even very differently, but on the whole it can't do more than modern processors do, or it will be slower than they are, in which case why not just keep compiling onto them anyhow? History is full of "better" processors from various places that, by the time they came out, were already a factor of 3 or 4 (or more) slower than just running on x86.


A function is never arbitrarily large: it's made of a bounded number of sub-expressions, and those sub-expressions can be checked independently.
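
To illustrate (with a hypothetical, much-simplified term type of my own; the real check would carry types as well, but the shape is the same): the check is syntax-directed, each node is inspected once against one bounded rule, and sibling sub-expressions are checked independently of one another, which is also what makes the work parallelizable:

    -- A hypothetical, much-simplified term type for such a language.
    data Expr
      = Var String
      | Lam String Expr
      | App Expr Expr
      | ElimNat Expr Expr Expr  -- eliminator: base case, step case, target

    -- Scope checking as a stand-in for the full type/totality check:
    -- one bounded rule per node, and sibling sub-expressions are
    -- checked independently, so the work is linear in the size of the
    -- term and trivially parallelizable.
    scopeCheck :: [String] -> Expr -> Bool
    scopeCheck env (Var x)         = x `elem` env
    scopeCheck env (Lam x body)    = scopeCheck (x : env) body
    scopeCheck env (App f a)       = scopeCheck env f && scopeCheck env a
    scopeCheck env (ElimNat b s t) = all (scopeCheck env) [b, s, t]

No rule ever needs to look at the whole program; roughly speaking, 65,537 cases is just 65,537 independent, bounded checks.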

CPUs are extraordinarily wasteful. The reason I think a processor like this might work better is that it could do a lot more work in parallel.


A CPU can do anything; silicon is Turing-complete. The question is whether it can do it more efficiently than some rather well-tuned, if suboptimally architected, processors. It's a high bar to leap, despite substantial rhetoric to the contrary. (I believe it to be leapable, but it's not easy.)


> CPUs are extraordinarily wasteful.

I've seen this claim before, and I've even accepted it at face value. But it seems to me that it isn't adequately supported by evidence. What evidence do we have that a substantially more efficient (and therefore less wasteful) CPU architecture is possible?


Cf. the discussion elsewhere in the same thread: https://news.ycombinator.com/item?id=5559282





