
IMO, we need to fundamentally rethink the design of our computing infrastructure so that it suits our current way of using computers. This would start from the CPU microarchitecture and instruction set, through the programming interface of the operating system, up to the middleware used by most programs.

Do you know if there's anyone seriously working on this? I feel like there are a lot of people on this website who want something like this and would love to help out.



Yes, there are people working on this.

VPRI[1], the group around Alan Kay, had an NSF-funded project to reproduce "Personal Computing" in 20KLOC. The background was that way back at PARC, they had "Proto Personal Computing" in around 20KLOC, with text processing, e-mail, laser printers, and programming. MS Office by itself is 400MLOC. Their approach was lots of DSLs, and powerful ways of creating those DSLs.

This group then moved to SAP and now has found a home at Y-Combinator Research[2].

One of the big questions, of course, is what the actual problem is. I myself have taken my cue from "Architectural Mismatch"[3][4][5]. The idea there is that we are still having a hard time with reuse. That doesn't mean we haven't made great strides; we do have some semblance of reuse. But the way we reuse is suboptimal, leading IMHO to excessive code growth, both as the number of components increases and over time.

A large part of this is glue code, which I like to refer to as the "Dark Matter" of software engineering. It's huge, but largely invisible.

So if glue is the problem, why is it a problem? My contention is that we tend to have only a fixed, limited set of kinds of glue available (biggest example: almost everything we compose is composed via call/return, be it procedures, methods, or functions). So my proposed solution is to make more kinds of glue available, and to make the glue adaptable. In short: allow polymorphic connectors.[6][7][8]
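
A rough sketch of what I mean, in Python rather than Objective-S (the connector names and the wire() helper are just illustrative, not the actual Objective-S API): the two components stay untouched while the connector, i.e. the glue joining them, is swapped out.

    import queue
    import threading

    class CallConnector:
        # Plain call/return glue: pull a value from the producer and hand
        # it straight to the consumer, all on the caller's thread.
        def wire(self, producer, consumer):
            def step():
                consumer(producer())
            return step

    class QueueConnector:
        # Message-passing glue: the producer runs on its own thread and
        # the value travels through a queue before reaching the consumer.
        def wire(self, producer, consumer):
            def step():
                q = queue.Queue()
                threading.Thread(target=lambda: q.put(producer())).start()
                consumer(q.get())
            return step

    def temperature():
        # a tiny producer component
        return 21.5

    def display(value):
        # a tiny consumer component
        print("temperature:", value)

    # Same two components, two kinds of glue; the composition site stays
    # the same, only the connector changes.
    for connector in (CallConnector(), QueueConnector()):
        connector.wire(temperature, display)()

The point is that call/return becomes just one connector among several, rather than the only composition mechanism baked into the language.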

So far the results are very good: code gets a whole lot simpler. But it's a lot of work, and very difficult to boot, because you (well, first: I) have to unlearn almost everything learned so far; the existing mechanisms are that deeply entrenched.

[1] http://www.vpri.org

[2] https://blog.ycombinator.com/harc/

[3] http://www.cs.cmu.edu/afs/cs.cmu.edu/project/able/www/paper_...

[4] http://repository.upenn.edu/cgi/viewcontent.cgi?article=1074...

[5] https://www.semanticscholar.org/paper/Programs-Data-Algorith...

[6] http://objective.st

[7] https://www.semanticscholar.org/paper/Polymorphic-identifier...

[8] http://www.hpi.uni-potsdam.de/hirschfeld/publications/media/...


The problem is that tech is always built with hidden assumptions. ALWAYS. Anyone who tells you otherwise is a liar or naive.

Not everyone can work with the assumptions that this tech demands, and not all of the assumptions are apparent (behavioral assumptions especially), so we end up either solving the same problem with a different set of assumptions or using glue code to turn things into a hacky mess that works.
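
A trivial, made-up illustration of that second case (all names here are hypothetical): one library assumes results come back as a return value, another assumes they arrive via a callback, and a small piece of glue has to bridge the mismatch.

    def fetch_sync(url):
        # assumes the caller wants to block and get the value back
        return "<data from %s>" % url

    def process_async(data, on_done):
        # assumes results are always delivered through a callback
        on_done(data.upper())

    # the glue: adapt the blocking producer to the callback-shaped consumer
    def bridge(url, on_done):
        process_async(fetch_sync(url), on_done)

    bridge("https://example.com", print)

Multiply that shim by every pair of mismatched assumptions in a system and you get the hacky mess.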


Right, it's definitely the hidden assumptions that are currently killing us.

That's why the process is so difficult: questioning everything is not just hard, it's also very time-consuming and often doesn't lead to anything. Or, worse, merely seems not to lead to anything, because you stopped just a little short.



