I would love to see announcements of new languages include a rationale. Why was this created? What problem was the author trying to solve that they thought the solution was a new language? What shortcomings did existing languages have that this new language overcomes?
A lot of the responses to hermitdev point out that a language can just be for fun, and need not "solve" anything.
This is true, but hermitdev's point still stands: why was this particular idea pleasurable? Why C and Lisp? Is it because they want a fast-compiling Lisp? Did they like the syntax of C but find it easier to implement in Lisp's tree-like structure?
Further clouding the picture is that Dale is actually implemented in C++. What's under the S-exps is grotty C++ data structures and some brutally ugly code to deal with them that doesn't even seem to try to leverage C++'s ability to simulate a dynamic language (with suitable abstractions, for instance):
    bool
    isUnoverloadedMacro(Units *units, const char *name,
                        std::vector<Node*> *lst,
                        Function **macro_to_call)
    {
        std::map<std::string, std::vector<Function *> *>::iterator
            iter;
        Function *fn = NULL;
        for (std::vector<NSNode *>::reverse_iterator
                 rb = units->top()->ctx->used_ns_nodes.rbegin(),
                 re = units->top()->ctx->used_ns_nodes.rend();
             rb != re;
             ++rb) {
            iter = (*rb)->ns->functions.find(name);
            if (iter != (*rb)->ns->functions.end()) {
                fn = (*iter->second)[0];
                break;
            }
        }
This is not even up to good C++ coding practice, what with the raw exception-unsafe pointers and whatnot.
A name represented as char *? I stopped using C strings in C++ code around 1998, other than in low-level code interfacing with things that require them. Kids that were born then are now in college. If you're doing Lisp manipulation, you want anything that is a "name" of some kind to be an interned symbol, which can be compared to another symbol with a single pointer comparison.
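For illustration, a minimal sketch of what I mean by interning; this is not from Dale's code, and all the names here are made up:

    #include <cassert>
    #include <memory>
    #include <string>
    #include <unordered_map>

    // Each distinct spelling maps to exactly one Symbol object, so "is this
    // the same name?" becomes a single pointer comparison instead of strcmp.
    struct Symbol {
        std::string name;
    };

    class SymbolTable {
        std::unordered_map<std::string, std::unique_ptr<Symbol>> table_;
    public:
        const Symbol *intern(const std::string &name) {
            auto &slot = table_[name];
            if (!slot)
                slot.reset(new Symbol{name});
            return slot.get();
        }
    };

    int main() {
        SymbolTable syms;
        const Symbol *a = syms.intern("defun");
        const Symbol *b = syms.intern("defun");
        assert(a == b);   // interned: identity comparison is enough
    }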
Why would you do all this to yourself, if (or so it seems) you know enough about Lisp to want a C-like language in S-exp syntax?!
This is pretty standard "C-with-classes" style of coding. You see it a lot in embedded systems too. Ok in this case it's more like "C with the STL", but it's close enough. Those would exactly be the two things I'd want to add if I were a C programmer longing for a little more power.
All your critique would also apply if this were written in idiomatic C. C pointers are exception-unsafe by definition because there are no exceptions. The STL containers used here don't throw much beyond out-of-range (and out-of-memory) exceptions, which isn't much better or worse than the access violation you'd get for the same bug in plain C.
One good reason why the code might be like this is the author doesn't know all 2568287 C++ features that are required to write modern idiomatic C++ - but the author does know C and std::map.
The problem with your statement is that it isn't true.
Selecting code alternatives using #ifdef is one of the benign uses of the C preprocessor, exemplifying "use it like this" in contrast with "bad" uses. In many a coding guide you can find it recommended to use the preprocessor for a little #ifdef here and there, while avoiding crazy macros. (Of course, code can turn into a hairball of nested #ifdefs, which everyone rightfully hates, but this sort of use is not characterized with the word abuse. It is just use.)
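For concreteness, the kind of benign selection I have in mind looks something like the following; the names (USE_MMAP and the helpers) are made up, not from any real codebase:

    #include <string>

    // Hypothetical platform-specific helpers; only one branch is compiled in.
    std::string map_whole_file(const char *path);    // e.g. an mmap fast path
    std::string read_whole_file(const char *path);   // e.g. a stdio fallback

    std::string load_source(const char *path)
    {
    #ifdef USE_MMAP
        return map_whole_file(path);
    #else
        return read_whole_file(path);
    #endif
    }

The function is written once, and only the line that actually differs between configurations is switched.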
We could split the function into two copies for those alternatives, put them into separate files and then have something slick in the Makefile to pick the correct file (not the GNU Make ifeq syntax, of course; that would be ironic).
However, those two functions would then contain lots of repeated code also. If something has to change, two identical parts in similar functions have to be updated.
Okay so then we could refactor the function so that all the common things are in a generic part, and then the switched pieces are in a helper inline function (included from one of two different files, etc).
It's not clear that it would improve things in that particular case.
I really have no idea what, if anything, would be worth doing, and that could be due to my mental limitations. If someone sends me a plan about how to refactor that code in a good way (just an outline with bullet points in plain language, no code) I will seriously consider it, and possibly do the work.
Your comment is amusing but otherwise doesn't contradict the claims of parent. Probably better off on a thread discussing parent's tech. This thread is about the Dale language and its implementation. Surely you of all people would appreciate that both should be done with some quality, esp in implementation language. You didn't pick shoddy C++. You went with a language that you thought was better among a number of them. I haven't heard gripes about your code so far so maybe the style was OK, too.
So, your parody is interesting but maybe counterproductive. It doesn't change the fact that the OP is already using a language that isn't great for this, in a way that doesn't inspire confidence. That's always worth noting, as shitty implementations can lead to bugs for early adopters. Best to call out problems in parent's work in threads about that work, on a mailing list, or in the repo if there is one, etc.
My code is certainly flawed, the question is if it behaves more or less as advertised.
I would be more open to such criticism from users of the software. In my opinion that type of comment is worse than backseat driving; at least in that case the passenger is a stakeholder.
"In my opinion that type of comment is worse than backseat driving"
By that logic, I'd have to lease a mainframe for millions of dollars before critiquing aspects of the vendor's offering. Likewise, I'd have to pay Oracle $70,000 per processor. For flawed FOSS, I'd have to go through the pain of setting it up and using it instead of just reporting the flaw. Your analogy would apply better if you said "person in the back seat shouting about an obstacle they're about to hit to a driver who fell asleep."
Realistically, though, it's better to critique a flawed product than to use it, unless it gets the job done well enough to be worth using anyway. No need to become a stakeholder.
The thing is, I really think that the sort of thing which Dale is is a very, very good idea. Any time I see a link to "C-like language done up in S-exps with macros", I'm very keenly interested, and have all these expectations. I should perhaps do a better job of hiding my disappointment.
Yikes, yeah, this is bad. I only went through the readme and didn't look at the source, and just...no. NULL? No. nullptr, please. And, as you've said, const char * as strings? Again, no. I know the GSL isn't terribly old, but something like string_view is in order. Arguments as undecorated pointers...no. Are they in or out? Optional? C++ has semantics to indicate these things (even without the GSL).
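To make that concrete, here's roughly how the quoted signature might look with the facilities mentioned above. Just a sketch under those assumptions, with made-up names; it isn't a patch against Dale:

    #include <optional>
    #include <string_view>
    #include <vector>

    struct Units;
    struct Node;
    struct Function;

    // string_view instead of const char *, a reference for a required
    // in-parameter, and an optional return value instead of a bool result
    // plus an out-pointer.
    std::optional<Function *>
    findUnoverloadedMacro(Units &units, std::string_view name,
                          const std::vector<Node *> &args);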
This thread of conversation is unproductive. Could you link me to both of your respective github accounts so I can nitpick without any stake or contribution in your projects?
No, because I'm bound by contract, cannot disclose my source, and am under NDA. That said, my comments are not unreasonable or overly critical for C++11 or newer code. Quite honestly, if I wanted to, there's probably more I could find issue with even in that small a code sample.
I think that this developer, just like me, has drunk the homoiconic kool-aid. Homoiconic syntax is _fucking serious_ as far as productivity and enjoyment go; it's like a whole separate world.
This is a completely legitimate answer to my questions.
I'm just curious - I'm not passing judgement at all. Hell, I've a C++ compiler I've been working on for a while (it doesn't get much attention). Why? It forces you to confront the spec (all 1200+ pages of it) and to understand the nuances of the language. I don't ever expect my compiler to be released publicly or used in production, but it's for my own personal growth and to increase my understanding of the language, which makes me a better C++ developer. I'm more of a data systems programmer (I work in finance), so working on a compiler exposed me to other areas I wasn't familiar with (parsers, grammars, ASTs, etc).
But more substantive answers to the questions I've asked of a language's designers can provide insight into what the language is well (or poorly) suited for, or into its potential longevity. If it was created for fun by the author, it's probably significantly less likely to see large adoption.
Unfortunately, I now commute 2 hours a day in the car, so my spare reading time is greatly diminished; I no longer have the 40 minutes twice a day on the train that I used to have to read something novel like this in detail.
Writing a language for fun is totally fine, but it is useful to know that fact when the language is being shared. Am I looking at this language to see if it will be useful to solve a problem I have that existing languages don't solve, or is it just an exercise, or is it an improvement?
No, but there is still a big difference between a new language with a still-small community but with ambition and some early attempts to apply it to "real" problems, and a brainfuck-like, just-for-kicks language.
Everything is done for a reason. Sometimes that reason is just "to have fun / learn something new / expand one's horizons" and that's okay. But sometimes the reason is "to get the same amount of work done faster" and that's also okay. A lot of the mainstream languages were made for the second reason, but probably most hobbyist languages were made for the first.
I created one a long time ago, for reasons that seem to show up in others. Those included: Lisp-style macros are raw productivity/power; incremental, per-function compilation plus a REPL = blazing iteration speed; by sticking to a lowest-common-denominator feature set, I could synthesize to more than one language/VM target; I could automate safety checks for C pitfalls without looking at cluttered code; and a much cleaner way to handle errors, with a similar benefit.
Sadly, I lost that tool in a triple HD crash, along with most of my work. Loved it, though. I never even fully learned Lisp or C, but the subsets let me crank out tons of functionality really fast, then run it through optimizing C compilers.
It didn't exist at the time. I was gonna maybe turn it into a product, too. That'd be a great default today, though. Now I also know one can have paid, shared source. I might have opened it with free, perpetual licenses for any contributors.
A regular, non-convoluted[1] syntax for C? Hygienic macros instead of defines? These two are sufficient reasons to consider it.
But they also seem to offer namespaces, type inference (even through macros), sum types, anonymous functions, overloaded functions, a form of runtime introspection, and a bunch of goodies like containers in the stdlib.
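On the hygienic-macros point, the textbook motivation is that a #define is pure text substitution, so an argument with a side effect can be evaluated more than once; a syntactic macro operates on the code's structure instead. A minimal, non-Dale-specific example:

    #include <cstdio>

    // Classic preprocessor pitfall: the argument expression appears twice
    // in the expansion.
    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    int main()
    {
        int i = 20;
        int m = MAX(i++, 10);           // ((i++) > (10) ? (i++) : (10))
        std::printf("%d %d\n", m, i);   // prints "21 22": i was bumped twice
        return 0;
    }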
I would think that the rationale for any language with the syntax of Lisp but semantics similar to X is obvious: you want a language like X (that can be used for the things that X is used for) but with the meta-programming capabilities provided by Lisp-style macros.
I can definitely see a Lisper thinking this rationale is so obvious that it does not require mentioning.
Yep, macros are the only motivation needed for wrapping a language like C, and this project does a lot more than that. I've tried to come up with a way to do this myself, but I always hit a wall when it comes to integrating with the C preprocessor.
" Its
development was prompted by the Common Lisp and Scheme tutorials that
contrast syntactic macros with C preprocessor macros, and wanting to
see whether syntactic macros could work in a lower-level language. " -Tom Harrison [1]
You're welcome; I had to dig to find it. I, like you, would have preferred to see the motivation in the top-level documentation. It helps so much to see the core concerns that an abstraction is built upon, especially something as tangled and complex as a programming language.
> What problem was the author trying to solve that they thought the solution was a new language?
It could well be a cultural thing: for some people, "creating a new language" is a serious undertaking, but for others (especially those exposed to Lisp and Scheme), creating a new language is something you might do if you're curious about something. At the end of the day, creating a language just isn't hard enough to warrant giving it that much thought.
In Lisp / Scheme, people may create a tiny language to express the problem.
Steps:
1. Create a new language in which the problem is trivially expressed.
2. Solve the problem in the new language.
Lisp / Scheme are ideal for this.
Example: solve some sort of puzzle game.
Create a data structure that represents the board, a game piece and a move. Operators upon these. Given a board and a legal move, return a new board with the move applied. Given a board, give all of the possible legal moves (for a particular player, if this is a multi-player game instead of a puzzle).
Once you have that language it becomes easy to use your favorite off the shelf search algorithms. Depth first. Breadth first. A*. Etc.
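A sketch of that shape, in C++ rather than Lisp, with made-up names and no cycle detection, purely to show the outline:

    #include <vector>

    struct Board { /* game state, e.g. piece positions */ };
    struct Move  { /* one legal action */ };

    // The whole "tiny language": apply a move, enumerate the legal moves,
    // recognize a solved position.
    Board apply_move(const Board &b, const Move &m);
    std::vector<Move> legal_moves(const Board &b);
    bool is_solved(const Board &b);

    // With those operators in hand, an off-the-shelf search is almost free.
    bool solve_depth_first(const Board &b, std::vector<Move> &plan)
    {
        if (is_solved(b))
            return true;
        for (const Move &m : legal_moves(b)) {
            plan.push_back(m);
            if (solve_depth_first(apply_move(b, m), plan))
                return true;
            plan.pop_back();
        }
        return false;
    }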
But this is probably not the same reason to create a "language" as in the article, and the notion of "language" is quite different.
What makes that a 'language' and not just a program? When I create a 'GameBoard' class in an OO language, I don't tell someone I created a new language.
Doesn't a language have to be Turing complete to be a language?
I haven't used any of these, but my understanding is that Agda and Coq are not Turing-complete, nor is Idris with the totality checker on. Never seen anyone dispute these being referred to as "languages."
This is not a critique; I'm just curious.