I've been working alone on a project that I've been calling "intentional programming" -- the idea being that a user/programmer specifies what they want, then the system handles mapping it to the primitive-level, through what can be described as a tower of interpreters. Obviously, it has to be a collapsible tower, or the whole thing's just a crazy heavy-weight abstraction that'd make even simple calculations painfully expensive.
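(To make the "collapsing" idea concrete, here's a toy sketch in Python -- not my actual system, just the general shape: specialize an interpreter to a fixed program so the interpretive dispatch is paid once, up front.)

def interpret(expr, env):
    # Naive interpreter: walks the expression tree on every evaluation.
    op = expr[0]
    if op == "lit": return expr[1]
    if op == "var": return env[expr[1]]
    if op == "add": return interpret(expr[1], env) + interpret(expr[2], env)
    raise ValueError(op)

def collapse(expr):
    # Specialize the interpreter to `expr`, returning a plain closure;
    # the tree walk happens once here instead of on every evaluation.
    op = expr[0]
    if op == "lit":
        v = expr[1]
        return lambda env: v
    if op == "var":
        name = expr[1]
        return lambda env: env[name]
    if op == "add":
        left, right = collapse(expr[1]), collapse(expr[2])
        return lambda env: left(env) + right(env)
    raise ValueError(op)

program = ("add", ("var", "x"), ("lit", 1))
fast = collapse(program)                 # one layer of the tower, collapsed
print(interpret(program, {"x": 41}))     # 42, via the tree walk
print(fast({"x": 41}))                   # 42, via the collapsed closure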
There're a lot of problems with the design presented in the paper that they'd still need to solve to catch up to where I'm at. But, for something that's been my life's work so far, it's truly strange to read a significant chunk of it casually posted online.
It does leave me with a nagging question, though. So far as I can tell, the system that I'm making will broadly obsolete other programming regimes; it basically automates the process of mapping concept-to-code, in a provably optimal, reproducible way. So, I suspect that what I'm making will fully disrupt how we do programming entirely. That said, the paper's "Related Work" section shows that ideas in this direction have long existed, for decades at least.
So, my question would be, why hasn't this replaced everything yet? Like, why haven't we replaced compilers with (or generalized them into) partial evaluators?
The reason we haven't replaced human-engineered computer programs with "HAL do this for me" instructions is because the devil is always in the details...
Hah, true. It's definitely been a heck of a lot more than I'd have guessed at first.
Still, I approached this whole thing more incrementally; I never tried to say, "HAL, do this for me", but rather I just tried to make a framework that made programming easier.
For example, it's easier to do mathematical programming if you put in an abstract numeric class, more like what Mathematica uses than the primitive numerics the CPU provides. It's easier still if you shift the focus from value-oriented logic to expression-oriented logic.
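(A rough sketch of what I mean by expression-oriented numerics -- illustrative only, not the framework's actual classes: operations build a symbolic tree, and exact evaluation is deferred until it's actually requested.)

from fractions import Fraction

class Expr:
    # Operations build a tree instead of forcing a machine-precision result.
    def __init__(self, op, *args):
        self.op, self.args = op, args
    def __add__(self, other): return Expr("add", self, other)
    def __mul__(self, other): return Expr("mul", self, other)
    def value(self):
        # Reduce to an exact value only when actually asked to.
        if self.op == "lit":
            return self.args[0]
        a, b = self.args[0].value(), self.args[1].value()
        return a + b if self.op == "add" else a * b

def lit(x):
    return Expr("lit", Fraction(x))

e = (lit("1/3") + lit("1/3")) * lit(3)
print(e.value())   # 2 (exact), where floats would accumulate rounding error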
But then that's all heavy-weight stuff; you're stuck with doing lots of expensive computations even when all you want to calculate is "1+1". So, the framework's got to be able to reduce itself to something more optimal.
But then there's a question of how to optimize a program. You can't stop too soon, or else your compiled result won't be fully optimal. But if you try to optimize all the way, then whenever you compile something like a numeric simulation, e.g.
var result = doHugeMathProblem();
Print(result);
, it'd always optimize to something like
Print(7); // since result == 7
, as the compiler would've had to do all of the work. So, obviously, we have to ditch "compiling" and move to a generalization.
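(Here's a toy version of the kind of generalization I mean -- a partial evaluator rather than a compiler. The names are made up; the point is just that cheap static subexpressions get folded, while expensive or dynamic calls are left as residual code.)

def specialize(expr):
    # expr is a nested-tuple AST; returns Python source for the residual program.
    op = expr[0]
    if op == "lit":
        return str(expr[1])
    if op == "add":
        left, right = specialize(expr[1]), specialize(expr[2])
        if left.isdigit() and right.isdigit():   # both static: fold it now
            return str(int(left) + int(right))
        return f"({left} + {right})"
    if op == "call":                             # dynamic: keep as a call
        return f"{expr[1]}()"
    raise ValueError(op)

print(specialize(("add", ("lit", 1), ("lit", 1))))
# -> "2": the trivial case collapses to a constant
print(specialize(("add", ("lit", 1), ("call", "doHugeMathProblem"))))
# -> "(1 + doHugeMathProblem())": the huge computation is residualized,
#    not executed at "compile" time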
But then, if you mix primitive-computation with abstract-computation, how do you do that optimally? For example, when do you stop trying to find a more clever symbolic solution to a system of equations and switch to trying to solve it numerically?
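(One hypothetical shape that trade-off can take: give the symbolic search a budget, and fall back to a numeric method when the budget runs out. A sketch, with a stand-in for the symbolic engine:)

def solve(f, symbolic_attempt, budget=1000, lo=-1e6, hi=1e6, tol=1e-12):
    # `symbolic_attempt(budget)` stands in for a real symbolic engine: it
    # returns an exact root, or None if it gives up within `budget` steps.
    exact = symbolic_attempt(budget)
    if exact is not None:
        return exact, "symbolic"
    # Numeric fallback: plain bisection, assuming f changes sign on [lo, hi].
    a, b = lo, hi
    while b - a > tol:
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:
            b = mid
        else:
            a = mid
    return (a + b) / 2, "numeric"

# x^3 - 2 = 0: pretend the symbolic side gives up, so we go numeric.
root, how = solve(lambda x: x**3 - 2, symbolic_attempt=lambda budget: None)
print(how, root)   # numeric ~1.2599 (the cube root of 2)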
And the chain of problems just keeps going on-and-on like that.
Still, it seems like it'd have been a mistake for someone to say, "HAL, do this for me". HAL might be at the end of the road, but acknowledging that and then trying to jump right to it feels like plotting a road trip by figuring out the direction the destination is in, then attempting to drive straight toward it, driving through whatever's in the way rather than on roads, out of the silly idea that the path to the destination ought to be straight.
Which, _tl;dr_, is to say that I understand why everyone who tried to make HAL may've failed, but what confuses me is why people didn't have more success pushing development efforts like that in the linked PDF. Those would've led to HAL, if pursued hard enough.
Not super related but it's worth noting that Charles Simonyi sort of pioneered something called 'intentional programming' [1] a good while ago, going so far as to create the Intentional Software Company to promote the practice.
I think that's where I first heard the term "intentional" in this usage! It fit exactly what I was doing, so I adopted it as a name.
I did a little bit of research on their prior work, but I get the feeling that they fell into a bunch of traps. For example, they seem to refer to "compiling" programs, which is enough of a conceptual mistake as to sink their project from the start.
Still, the overall idea that the focus should be on intent, rather than how that intent is implemented, seems apt.
If you look at any of their products (which might be hard to do, actually), the focus was metaprogramming as a paradigm: building tools to create what were basically DSLs for various domains, which allowed more or less what you're talking about. All of these were interpreted to a sort of universal meta-language, and all of them could be projected from that representation into any of its other forms. Pretty cool stuff, or at least it had the potential to be.
Can you summarize the insights you have had beyond what the paper covers? Sounds fascinating. I don’t know why people aren’t more actively pursuing this stuff, either.
What I would be more interested in is how such a tower of interpreters behaves. Does the undefined/underspecified behaviour add up until it's impossible to tell what a program will do? Does it remain more or less functional?
Well, having many layers does additively increase the chances of bugs, but for well-specified and tested interpreters the prevalence of interpreter bugs is pretty low. Their example is:
> Python code executed by an x86 runtime, on a CPU emulated in a JavaScript VM, running on an ARM CPU.
So in this case, we have a tower of well-defined abstractions, most of which have extant well-tested implementations. I would expect the resultant tower to be semantically correct enough of the time to be a reasonable platform to work with--i.e. if you have a bug, it's probably in your code, not the platform. Where we'd see the abstractions leaking heavily is performance analysis, unless this tower collapse is also a Sufficiently Smart Compiler.
The very low-level interpreters running VMs are nearly completely specified, and the very high-level ones usually refrain from undefined behavior: they either just re-export the undefined behaviors of the lower levels without adding to them, or explicitly pin down the lower-level ones and don't add any of their own.
Or, to put it simpler, undefined behavior is mostly a C thing, you won't find much of it elsewhere.
There is a significant difference between "implementation-defined" and "undefined". An implementation should behave consistently with itself for implementation-defined behavior. There is no such compulsion for undefined behavior.
If behavior is implementation-defined, then the language is actually a family of languages that can be instantiated according to the implementation's choices. This is consistent with the grandparent's description.
Underspecified behavior is just another way to say "implementation-defined". It's a problem of documentation and specification -- in principle, if you had enough of both, you should be able to tell how a stack of interpreters behaves. It's not a fundamental barrier.
In a Standard, "implementation-defined" is much more specific. It means the implementation is required to provide documentation of its definition, so that a user writing deliberately non-portable code can rely on a behavior.
As a matter of standardization, it is almost always a mistake to make anything implementation-defined. It is a common mistake among beginners writing their first proposal (don't be "that beginner"!) and inexperienced committee members who think it would resolve an impasse. Typically, when you find ID in a standard, it is because nobody cared enough to bother getting it right.
The definition an implementation provides for an ID thing may just say "undefined".
A comment below says that "undefined" doesn't mean "not defined". But it really does. Anything a standard does not define is undefined, and if you step in it, anything may happen, up to and including launching the missiles.
Implementations are allowed to provide their own definition for almost anything left undefined by a standard. Often they define things according to another published standard, and point there. For example, "#include <unistd.h>" is UB in ISO C, but implementations often defer to POSIX, another ISO standard, for its behavior.
Sometimes a Standard will leave something undefined, but warn that they plan to define it later. Usually this is expressed as "reserved".
'Undefined' does not mean 'not defined'. It means that it specifically has no definition, not that they haven't gotten around to giving it a definition.
Another way to think about it is that undefined means they have defined it, and they've defined it as not having a definition.
Most of Ruby is not defined. But little of it is undefined.
If you're interested in the work that led to this, definitely check out Kenichi Asai's reflective languages. The code for his Black language with reflective semantics is reproduced here: [0]
More tangential but also cool: this talk[1] by William Byrd mentions reflective towers, but jumps into a discussion of what he terms "relational programming". It's a demo of his Barliman Smart editor system: [2]
I haven't been able to look through this or the linked video I'm giving, but it's Tim Baldridge showing a new language and mentioning a collapsing tower of interpreters.