So, given work like that, what remaining tough problems are there before you would find a metaprogramming system safe and acceptable? Or do we have the fundamentals available, but you just don't like the lack of deployment in mainstream or pragmatic languages and IDEs?
Note: It just dawned on me that you might mean abstract programming in the sense of specifying, analyzing, and coding up abstract requirements closer to human language. If so, I'm still interested in what gripes or goals you have on that end.
"Meta is dangerous" so a safe meta-language within a language will have "fences" to protect.
(Note that "assignment" to a variable is "meta" in a functional language, and you might want to use a "roll back 'worlds' mechanism" (like transactions) for safety when this is needed.)
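The "roll back 'worlds'" parenthetical can be made concrete. Here is a minimal Python sketch of the idea, assuming nothing about PIE or any actual Worlds implementation; all names (`World`, `sprout`, `commit`) are illustrative:

```python
# A toy "roll back worlds" mechanism for assignment, in the spirit
# of transactions: writes go into a speculative child world that
# can be committed to its parent or simply discarded.
class World:
    def __init__(self, parent=None):
        self.parent = parent
        self.bindings = {}

    def read(self, name):
        # Look up the name here, then in enclosing worlds.
        w = self
        while w is not None:
            if name in w.bindings:
                return w.bindings[name]
            w = w.parent
        raise NameError(name)

    def write(self, name, value):
        # "Assignment" is fenced: it only affects this world.
        self.bindings[name] = value

    def sprout(self):
        # Start a speculative child world.
        return World(parent=self)

    def commit(self):
        # Merge this world's changes into its parent.
        self.parent.bindings.update(self.bindings)

top = World()
top.write("x", 1)
w = top.sprout()
w.write("x", 99)            # speculative assignment
assert top.read("x") == 1   # parent is untouched -- discard w to roll back
w.commit()
assert top.read("x") == 99  # changes become visible only on commit
```

Discarding the child world instead of calling `commit` is the "roll back": the parent never sees the speculative writes.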
This is a parallel to various kinds of optimization (many of which violate module boundaries in some way) -- there are ways to make this a lot safer (most languages don't help much)
I've always felt that the meta space is too exponential, too "hyper", to mentally represent or communicate. Perhaps we need different lenses to project the effects of the meta space onto our mental model. Do you think this is why Gregor decided to move towards aspects?
I don't think Aspects is nearly as good an idea as the MOP was. But the "hyperness" of it is why the language and the development system have to be much better. E.g. Dan Ingalls put a lot of work into the Smalltalks to allow them to be safely used in their own debugging, even for very deep mechanisms. Even as he was making these breakthroughs back then, we were all aware there were further levels yet to be explored. (A later one, done in Smalltalk, was the PIE system by Goldstein and Bobrow, one of my favorite meta-systems.)
Aside from metaprogramming, from reading the "four reports" document that is the first Google link, it seems PIE also addresses another hard problem. In any hierarchically organized program, there are always related pieces of code that we would like to maintain together, but which get ripped apart and spread out because the hierarchy was split according to a different set of aspects. You can't get around this problem because if you change what criteria the hierarchy is split on in order to put these pieces near each other, now you've ripped apart code that was related on the original aspect. I've come to the conclusion that hierarchical code organization itself is the problem, and we would be better served by a way to assemble programs relationally (in the sense of an RDBMS). It seems like PIE was in that same conceptual space. Could you comment on that or elaborate more on the PIE system? Thanks.
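The complaint above, that any one hierarchy rips apart code that is related along some other aspect, suggests treating fragments relationally and making each "module" just a query. A toy Python sketch of that idea, with entirely made-up fragment names and aspect tags:

```python
# Each code fragment carries several aspect tags instead of living
# at exactly one spot in a hierarchy. A "module view" is a query,
# so the same fragment can appear in many views without being
# duplicated or ripped away from its relatives.
fragments = [
    {"name": "save_user",  "layer": "persistence", "feature": "accounts"},
    {"name": "save_order", "layer": "persistence", "feature": "orders"},
    {"name": "auth_user",  "layer": "security",    "feature": "accounts"},
]

def view(**criteria):
    """Return fragment names matching all the given aspect values."""
    return [f["name"] for f in fragments
            if all(f.get(k) == v for k, v in criteria.items())]

# The same fragments, organized two different ways at once:
assert view(layer="persistence") == ["save_user", "save_order"]
assert view(feature="accounts")  == ["save_user", "auth_user"]
```

Neither "hierarchy" is primary here, which is the relational point: choosing to browse by layer no longer destroys the ability to browse by feature.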
Good insights -- and check out Alex Warth's "Worlds" paper on the Viewpoints site -- this goes beyond what PIE could do with "possible worlds" reasoning and computing ...
This is a very interesting paper. Its invocation of state space over time as a model of program side effects reminds me of an idea I had a couple years ago: if you think of a program as an entity in state-space where one dimension is time, then "private" object members in OO-programming and immutable values in functional programming are actually manifestations of the same underlying concept. Both are ways to create fences in the state-space-time of a program. Private members create fences along a "space" axis and functional programming creates fences along the "time" axis.
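The space-fence/time-fence analogy can be illustrated with a small Python sketch; the classes and names are invented for illustration only:

```python
from typing import NamedTuple

# A "space" fence: the balance is hidden behind the object's
# boundary, so other parts of the program can't reach across
# space and mutate it directly -- they must go through methods.
class Account:
    def __init__(self, balance):
        self._balance = balance          # private by convention

    def deposit(self, amount):
        self._balance += amount

    @property
    def balance(self):
        return self._balance

# A "time" fence: the point is immutable, so a reference taken
# now denotes the same value at every later moment -- nothing can
# reach across time and change it out from under you.
class Point(NamedTuple):
    x: float
    y: float

a = Account(10)
a.deposit(5)
assert a.balance == 15

p = Point(1.0, 2.0)
# p.x = 3.0 would raise AttributeError: the time fence holds.
```

In both cases the fence limits which parts of state-space-time a given piece of code may touch; they differ only in which axis the fence runs along.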
And you get to use "relational" and "relativity" side by side in a discussion.
A lot of interesting things tend to happen when you introduce invariants, including "everything-is-a" invariants. Everything is a file, everything is an object, everything is a function, everything is a relation, etc.
I'm guessing safe meta-definition means type-safe meta-programming.
For example in Lisp, code is data and data is code (aka homoiconicity). This makes it very convenient to write macros (i.e. functions that accept and return executable code).
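Python isn't homoiconic, but its `ast` module gives a rough analogue of the code-is-data idea: parse source into a data structure, transform it as data, then compile and run the result, roughly the way a Lisp macro rewrites a form. A minimal sketch:

```python
import ast

# Parse source text into a tree we can inspect and transform --
# a rough Python analogue of Lisp's "code is data". (Lisp macros
# work on s-expressions directly; Python needs ast as a go-between.)
tree = ast.parse("x + 1", mode="eval")

# Transform the code-as-data: rewrite every addition into a
# multiplication, the way a macro might rewrite a form.
class AddToMul(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

new_tree = ast.fix_missing_locations(AddToMul().visit(tree))
code = compile(new_tree, "<macro>", "eval")
print(eval(code, {"x": 5}))  # x + 1 became x * 1, so this prints 5
```

The clumsiness of this round-trip, compared with quoting a list in Lisp, is exactly the convenience gap the comment is pointing at.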
Unsafe meta-programming would be something like the C preprocessor, whose aptness for abuse makes it a leading feature of IOCCC entries.
Me too. But if he doesn't answer, he may mean that languages don't have a well-designed meta protocol. See the one they built for CLOS in that good book.
This reminded me of an interesting dream I had. I dreamt I created a nice language with a meta protocol. In working with the language and using this protocol, I changed the language into a different language, which gave me insights on changing that language -- all through meta protocols. I woke up with a distinct feeling of what it means not to be plodding around in a Turing tarpit.
Certainly, in this day and age, the lack of safe meta-definition is pretty much shocking.