"Go is syntactically a big language. Neugram’s front end has to match it, and that’s a lot of work, much of which is still to do."
I appreciate the sentiment – that there's a lot of work left to do – but isn't the assertion about syntax size false? I thought Go's syntax was quite small compared to other popular languages? It only has 25 reserved words (Java has 50+) and was designed in part to be uniform and easy to read.
I made this point poorly, because I agree with you that Go is quite easy to read and very predictable.
With more words: I have written a parser and type checker for an ML-style language, with parametric types and several other neat tricks in it, and I've now written a parser and type checker for a good subset of Go. The latter has been far more work. I am not entirely sure how to explain the work. Go has lots of syntactic and type conveniences that are easy to use and read, but quite difficult to implement.
As there are few implementers and many users, I think the original designers of Go picked well when they put the work on us implementers.
Can you elaborate on what syntactic conveniences are difficult to implement and why? Language design is one of my hobbies, so I really would like to know.
One good example is untyped constants. They form a large extension to the type system only present at compile time. It is a good example because there is an excellent blog post describing them in detail: https://blog.golang.org/constants
In particular note the difference between
const hello = "Hello, 世界"
and
const typedHello string = "Hello, 世界"
One of these can be assigned to named string types; the other cannot.
As a user of Go I find untyped constants to be extremely useful. They almost always do what I want without surprise. Implementing them however, is not trivial.
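The distinction above can be seen in a few lines of Go. Here `Name` is a named string type I've introduced for illustration (it is not from the linked blog post):

```go
package main

import "fmt"

// Name is a named string type, defined here for illustration.
type Name string

const hello = "Hello, 世界"              // untyped constant
const typedHello string = "Hello, 世界" // typed constant, of type string

func main() {
	// OK: an untyped constant is implicitly converted to Name.
	var n Name = hello

	// Compile error if uncommented: typedHello has type string,
	// which is not assignable to the named type Name.
	// var m Name = typedHello

	fmt.Println(n)
}
```

The type checker has to carry these "constants with a default type but no actual type" all the way through constant expression evaluation, which is where the implementation effort goes.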
A trickier example is embedding. It adds a lot of complexity to the type system. As a user, it can be a bit surprising, but I admit it is very useful.
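For readers unfamiliar with it, a minimal sketch of embedding (the `Logger` and `Server` names here are my own illustration): the embedded type's methods and fields are promoted onto the outer type, and the outer type satisfies any interface the embedded type does. Implementing that promotion, including shadowing rules at different embedding depths, is where the type-checker complexity lives.

```go
package main

import "fmt"

// Logger provides a method that embedders pick up.
type Logger struct{ prefix string }

func (l Logger) Log(msg string) { fmt.Println(l.prefix + msg) }

// Server embeds Logger: Logger's methods and fields are
// promoted, so Server gets Log (and prefix) for free.
type Server struct {
	Logger
	addr string
}

func main() {
	s := Server{Logger{"server: "}, ":8080"}
	s.Log("listening on " + s.addr) // calls the promoted Logger.Log
}
```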
At a guess, I would describe it as a trade off between syntax and semantics.
Languages that have simpler syntax tend to have far more complex semantics (to infer what's missing, etc.).
E.g. Python's semantics are horribly complex: "hello".upper() is something like str.__dict__['upper']("hello") -- which is all resolved at run time. Whereas, say, a C++ version amounts to quite a simple run-time function call on some bytes.
Because Neugram is a different language. It is so only in small ways, but they add up when you are in the parser, and especially in the type checker.
First is the fact that the top level is statements, not declarations. That would mean heavy modification to the go/parser package to make the inner-function statement parser the top level.
Second is the fact I need to parse incrementally, to implement a REPL. That would not require serious modification to go/parser, but it would to go/types.
Third is I wanted to experiment with new syntax. It was quicker to get to that experimentation by working from scratch rather than adopting go/parser and go/types.
Fourth is there are some fundamental things I would like to do differently, particularly around how comments are handled. I haven't got there yet.
In retrospect, if the new Go parser inside cmd/compile existed when I started this, I probably would have started from there. (It does many of the things I wanted to do to go/parser.) I suspect I still would have ended up with my own type checker though, if for no other reason than incremental type checking.
I've toyed with making some kind of interpreter for Go. Something that integrates well with it, giving access to all the libraries, channels, goroutines and such. Discussing this with an acquaintance, we decided the best scripting language for Go was... Go. It is interesting to see that you've headed in the same general direction, though you've thought things through much further than we did.
... yeah, Go's small number of keywords is often touted by Gophers, but really it's because the Go team made this incredibly stupid decision.
If you counted the same sorts of things as keywords that Java did, Go would have almost exactly the same number... and more problems would be caught at compile time (like `len := 1`).
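The `len := 1` point refers to the fact that builtins like `len`, `cap`, and `new` are predeclared identifiers rather than keywords, so they can be silently shadowed. A minimal demonstration:

```go
package main

import "fmt"

func main() {
	// len is a predeclared identifier, not a keyword,
	// so shadowing it compiles without complaint:
	len := 1
	fmt.Println(len)

	// Within this scope the builtin is gone; this line
	// would be a compile error if uncommented:
	// fmt.Println(len("abc"))
}
```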
I think the true argument is that you can read the Go specification in a slightly extended sitting, whereas (for example), the current working draft for C++ is 1416 pages.
The Go specification is not the same sort of document as the C++ or Java specifications. Go is doubtlessly less extensive than C++ or Java, but it would be dishonest to attempt to leverage their respective specification documents as yardsticks for complexity. If Go were submitted to a standards body such as ISO, the drafting committee would produce a more detailed and more verbose document than currently exists.
Unless the C++ draft is 90+% stdlib, it's still quite a difference in size (printed, which would not be a good way to use it, the Go spec is 90 pages). And I'd consider it normative, though not to the point of "spec wins if the spec and the toolchain disagree".
I'm not sure Go has a spec. The page says spec, but the content says it's the reference manual. So I'm guessing it's more focused on learning how to use the language than on writing an alternative implementation of it.
Whenever a difference is found between the two Go compilers, gccgo and gc, the spec is consulted to see which is right. If the spec is not clear on which compiler is right, then the spec is changed so it is.
When the third frontend was written (go/parser and go/types in the standard library), the spec played a similar role. Though far fewer changes were made to the spec, suggesting it is getting pretty good.
If the spec and an implementation disagree, there is a bug in one or the other. (There are multiple implementations of the language.) The Go spec is not entirely normative in that a disagreement between it and the implementation may be resolved in favor of the latter. It is normative in that you should be able to implement the entire language from the spec alone.
Guess it's relative. Different lisps are probably syntactically the smallest group of languages, but compared to Java (which most people would say is a bloated language) it's a tiny one.
How does this compare with https://github.com/cosmos72/gomacro which does something very similar? It seems fairly complete, or at least much further along than Neugram, plus it does Lisp-style macros! If you ignore the macros, its language is pretty much Go, except that main() { ... } is elided, just like in ng.
May be worth checking it out, at least for ideas/inspiration.
100% behind you on this, crawshaw. Although I've never missed having a Python style REPL for golang ("go run" usually fits the bill). You make a convincing argument for why it would indeed be a very powerful tool for any Linux admin's toolbox ;)
It's premature for feature requests I know. But I'd love to be able to view the Stack Trace of a live running goroutine. Typically something I log to disk using the "runtime" calls. To be able to probe that info from a command shell would be neat, don't you think?
Yes, that sounds like the kind of mistake I would make.
I put a tiny link to the home page at the bottom: https://neugram.io. I'll do something more elaborate when I'm done with this other bug I'm working on. Thanks.
Hi, I'm the author. I don't think Neugram is really ready for the full HN treatment, but I'm happy to answer questions.
I kept trying to write a "Why Neugram" post and kept getting stuck, which is why I went with this incomplete enumeration of differences from Go. But I get a bit more into my motivation in section 4 of this post today: https://neugram.io/blog/design-principles
Basically, I write Go all the time. I occasionally write bash, perl, or python. Occasionally enough, that those languages fall out of my head. What I really want is a scripting language that reuses all my day-to-day knowledge from Go, only with some more scripting language friendliness.
I suspect if I programmed in Nim every day, I would use it as a scripting language. Nim is neat.
Nim has become so good that no one else should develop languages as pet projects? If that's what you're saying (which is how I read it), it's a very obnoxious sentiment.
I like this, especially the error handling. Since the focus of a scripting language can be brevity (à la Perl), I don't want to write all that error-handling boilerplate. But I don't want to ignore issues either. So this is a compromise: I can check errors, or I can capture errors (in _) to ignore them. But if I don't do anything, they turn into panics.
It really is an excellent package. Does exactly what it says on the box.
I fear I may hit a wall with it eventually. For example, several interesting approaches to https://github.com/neugram/ng/issues/49 would step outside what liner can do. But I'm going to use it as long as I can.
Among other things, I actually modified liner to stop using/swallowing all control codes, and to instead surface some of them to the client program to decide how to respond. e.g. liner stops doing anything with Ctrl-X (0x18 ?) and instead ng could respond to Ctrl-X by writing a docstring to stderr for an identifier under the caret in the input buffer.
It seems like something like this would be useful for a Jupyter kernel. I don't think I'd use this as an embedded scripting language; it doesn't offer enough in terms of abstraction and conciseness to justify leaving Go. I'd probably try for a lisp.
The relation to the go toolchain right now is your GOPATH is used to find Go packages when you import them. (It uses the go tool to build the package as a plugin for loading.)
You certainly could put your Neugram files in the same git repository as your Go code, and then "go get" would get them.
I think there is something to be said for something like:
import "github.com/crawshaw/foo/mypkg.ng"
looking for the .ng file in your GOPATH. I need to think about that a bit more.
(Note that a Neugram package is limited to a single file, unlike a Go package. This is for a couple of practical reasons, notably init order, and one philosophical reason: Neugram packages shouldn't get as big as Go packages.)
Sometimes switching between Go and C is weird (because of the type order or the semicolons) but I've never had any trouble switching between Go and Python.
Go has an awesome ecosystem for writing servers and web applications but it's obvious that the libraries weren't built for scripting purposes and the language is more explicit (for good reasons).
Python, on the other hand has an ecosystem that's great for scripting and language features that are more compact and terse.
If you don't have trouble switching between Go and Python, then the concept of Neugram has much less to offer you than it does me.
Possibly you don't because you use both languages more than me. My scripting needs come up approximately once every other week, sometimes less often. That's enough time for me to forget which Python package has the date/time functions I need.
I also wouldn't even begin to compare something in its infancy like Neugram with something as well-established as Python's huge collection of libraries. There are many years of work between here and there.
> Possibly you don't because you use both languages more than me.
It depends what the client or project demands. I've worked in Python, Javascript, PHP, VBA, Java, Go, and C (plus a few dialects of SQL). Spending months writing mostly one or two of those at a time. The only ones I enjoy enough to use in my hobby projects are Python, Javascript, and Go.
But in my experience the hardest part of toggling between languages like that is the ecosystem (libraries and tools).
In this case it seems like you're adding a new ecosystem which means I'd still have to stop and figure out "what's the right package to use for this?"
For the most part, language is easy. It's the idioms, libraries, and tools that I find myself googling the most.
My goal is very much to reuse Go packages. It is somewhat working today. The underlying mechanism is when you type something like:
import "github.com/pkg/errors"
A .go wrapper file is generated, a Go plugin is built, and then loaded into the ng executable. You can then use the errors package in the Neugram evaluator. Under the hood the lifting is done by the reflect package.
Very much my goal is when scripting, to use the os.Open and ioutil.ReadFile I know, along with filepath.Join, time.Now, and friends.
You can already "go run program.go" from anywhere, even outside of a project structure, which is the same result as running an interpreted script.
It's a novel idea but I don't really see it taking off. Is there really that much benefit to having a shebang at the top of the file versus executing "go run", and is it really worth having a REPL for a fairly verbose language? I certainly don't see myself wishing for a Java REPL, and I don't particularly see myself reaching for a Go REPL either.