Go generics draft design: building a hashtable (mdlayher.com)
196 points by mdlayher on June 17, 2020 | 130 comments


Since this post is illustrating the use of generics, why not go all the way with the get implementation and return an Option[V] type? It's a natural thing to do here. The return type is already a kind of sum type: it's either the value you want (non-zero value, true) or it's not (zero value, false). If the implementation uses the option type, it becomes impossible to write code that incorrectly uses the zeroed-out returned value. Calling code must always explicitly check the returned Option[V], either to access the value of V and continue, or to handle the not-present case. As it stands, it's very possible to ignore the second returned boolean value and write code that'll easily break.

Now, I can see why the author would _not_ want to do this, since this "explosion" of sum-typed things is present in all Go code (e.g. the `err := ...; if err != nil { ... }` pattern). So, it might be easier for Go programmers to see how they could use generics in their own code by re-using this pattern. However, I think this does a disservice to why generics are an incredibly useful construct in programming languages: they can be used to align code more closely with the semantics that the programmer wants to convey.
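
For concreteness, a minimal sketch (mine, not the author's) of what such an Option type could look like under the June 2020 draft syntax; it only compiles in the go2go playground, and every name here is illustrative:

    // Option holds either a value of type V or nothing.
    type Option(type V) struct {
        value   V
        present bool
    }

    func Some(type V)(v V) Option(V) { return Option(V){value: v, present: true} }
    func None(type V)() Option(V)    { return Option(V){} }

    // Get returns the value plus a flag, preserving the invariant that value
    // is only meaningful when present is true.
    func (o Option(V)) Get() (V, bool) { return o.value, o.present }

    // OrElse returns the contained value, or fallback when the Option is empty.
    func (o Option(V)) OrElse(fallback V) V {
        if o.present {
            return o.value
        }
        return fallback
    }

A hashtable Get returning Option(V) would then force callers through Get or OrElse instead of letting them use a zero value by accident.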


> If the implementation uses the option type, it becomes impossible to write code that incorrectly uses the zeroed-out returned value.

Since Go doesn't have sum types, it would most likely still be possible: the option type would just be a reification of the multiple return values. At best it could panic if you try to get the value of an "empty" optional, but now you've got a panic.


It's always possible; even with a Rust Option you'll panic when you extract the value with unwrap() without checking that it's valid.

What it does is prevent the accidental use of a missing value. You can't pass the Option<T> on to a function taking a T without explicitly doing it.


It's not even just Rust; any language with functional features has an option/maybe type, and it's incredibly useful and a great tool to help ensure your code is correct.


This doesn't add anything. You get the same protection in Go with pointers. For example, a `*T` can't be passed to a function taking a `T` without explicitly dereferencing it. The problem of course is that the type system doesn't guarantee that the pointer isn't nil when you go to dereference it, similarly your `Option<T>` doesn't guarantee that the option.Value is set correctly. You need sum types to provide this substantial guarantee.
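
To make the gap concrete, a bare pointer "optional" compiles even when the nil case is never checked (toy example, hypothetical names):

    package main

    // length treats *string as an optional value: the compiler accepts the
    // dereference unconditionally, and the nil case only fails at runtime.
    func length(s *string) int {
        return len(*s) // panics if s == nil
    }

    func main() {
        var s *string
        _ = length(s) // compiles fine, panics here at runtime
    }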


Pointers in Go aren't a dedicated signal for optionality; they're a signal for stack-or-heap allocation, or optionality. They also don't play nice with interfaces.

Let's say I have a hashmap that returns a `*T` and I have a `func foo(s Stringer)`. Because `Stringer` is an interface, it's possible for it to take both `T` and `*T` without a compile-time complaint.
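
Concretely (a runnable sketch with a hypothetical T; String has a value receiver, so both forms satisfy the interface):

    package main

    import "fmt"

    type T struct{}

    func (T) String() string { return "T" }

    func foo(s fmt.Stringer) { fmt.Println(s) }

    func main() {
        foo(T{})  // a value satisfies fmt.Stringer
        foo(&T{}) // a pointer satisfies it too; no compile-time complaint
    }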

In addition, I may wish to have my function `bar(t *T)` always take a pointer because I want the object to exist on the heap, or to be able to mutate its value for the caller.

Since pointers mean so many other things, they're not a good way to have a compile-error indicate optionality.

On the other hand, if my caller does `hm.Get` and gets an `Option<Stringer>`, it's clear what to do to pass it to foo, and if it's an `Option<*T>`, it's clear both that I want a pointer, and that it should be checked / unwrapped.

I agree that `T` / `*T` would be just as powerful as option types in Go (without sum types) if pointers didn't already have other substantial meaning, and if they could sensibly interact with interfaces.

As it stands, I think you're off the mark though.


> Pointers in Go aren't a dedicated signal for optionality; they're a signal for stack-or-heap allocation

No, a pointer in Go doesn't mean that it's on the heap. The compiler keeps it on the stack if it's safe to do so, regardless of whether you're using pointers.

You can even write code like `t := new(T); t.Foo()` that very much looks like you're allocating on the heap, but it can stay on the stack, yet t is then a pointer to the stack.

Unlike C, you don't need to worry about the heap-vs-stack in Go. It's never even mentioned in the language spec as a concept people need to be concerned with. It's an implementation detail.
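
You can watch the compiler make these decisions with escape analysis output via `go build -gcflags=-m` (a small sketch; the exact messages vary by Go version):

    package main

    type T struct{ x int }

    var sink *T

    func stackOnly() int {
        t := new(T) // does not escape, so it can live on the stack
        t.x = 42
        return t.x
    }

    func escapes() {
        sink = new(T) // escapes: reachable via a global, so heap-allocated
    }

    func main() {
        _ = stackOnly()
        escapes()
    }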


> Unlike C, you don't need to worry about the heap-vs-stack in Go. […] It's an implementation detail.

I'm really surprised to read that: yes, a beginner can get their code working without wondering about stack vs heap (and that's one of the big reasons why Go is easier to learn than Rust for people coming from non-systems languages), but as soon as you care about performance (which many Go users do!), you need to write code that reduces allocations to the minimum, because Go's allocator is really slow (compared to Java's, for instance). Interestingly enough, doing so forces you to think about the ownership and lifetime of your objects, like you'd do in C (or Rust).
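
As a small illustration of what "reducing allocations" looks like in practice, append-style APIs with a caller-owned buffer are a common zero-alloc pattern (measured here with testing.AllocsPerRun):

    package main

    import (
        "fmt"
        "strconv"
        "testing"
    )

    func main() {
        buf := make([]byte, 0, 64) // the caller owns and reuses the buffer
        allocs := testing.AllocsPerRun(1000, func() {
            buf = buf[:0]
            buf = strconv.AppendInt(buf, 12345, 10) // appends in place
        })
        fmt.Println("allocs per run:", allocs) // 0: no per-call allocation
    }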


The great thing about Go is that you don't need to think about allocations or lifetimes until you care about performance. Memory management is an optimization in Go, while in C, C++, Rust, etc. it's required just to get a program to compile (of course you can "opt in" to easy memory management in those languages by using some kind of GC library, but this is vanishingly rare; presumably developers would rather just use a language that was designed for GC). If you're at the point where your Go code is so performance sensitive that you're optimizing memory more often than not, it probably makes sense to be using Rust, but these cases are rare for the kinds of applications I tend to write (which is sad, because I actually really enjoy using Rust).


Absolutely (well, at least for Rust, because in C it's not required for the code to compile: the code will compile to some broken garbage that will either crash at runtime, be vulnerable to exploits, or burn your hard drive, depending on its mood ;).

I just found it surprising to see a core Go person declaring heap vs stack allocation to be an "implementation detail". Because if it was, that would mean my carefully crafted zero-alloc code could become full of allocations one day because the underlying implementation changed! Obviously they don't want their users to be afraid of that.


This is not entirely true. You can exhaust stack headroom by calling functions with very large pass-by-value structs. In these situations, you are then forced to use a pointer, which results in heap allocation.


It guarantees that the value is set if the flag is saying it is set (that's the invariant of the type). It screams "check flag before accessing the value".

To compare with references, which are also effectively 0-or-1 things: a reference where you as a developer know it's never null but always a ref to exactly one thing is denoted `*T`, and a reference to "one thing or null" is also denoted `*T`! There is no difference in the types! So you can accidentally send one that is "0-or-1 things" to a method accepting a `*T` that MUST be a thing. The type system didn't help you document which case it was.

Apart from the annotation benefit, Options also help make the syntax nicer in many cases, with e.g. "or()" fallbacks etc.

    let data = get_cached().or(load_from_disk()).unwrap();


I agree that there are semantic issues with pointers, but my point was to illustrate that you need sum types in Go to get any real benefit out of an Option type. If you need an option type and you aren't content with a pointer, you can use `(T, bool)`, but this is still a far cry from a real Option type.


I use Option<T> types very happily in C# without sum types. Most of what would be pattern matching can be done with just methods.

    Car c = maybeCar.GetValueOr(CreateCar()); // inline fallback

    maybeDog.Do(d => d.Bark()); // only performs the call if present

    Sailboat s = maybeBoat.As<Sailboat>(); // none unless of the correct subtype

And so on. With nullable reference types C# now has a built-in alternative to this, but it has worked well for many years.


I like the idea, I just haven't given it a try with the new draft yet. Sounds like it's worth exploring at the very least.


Cool! I'm glad you read my comment. I appreciate that you went through the effort to make a blog post (my comment is very low effort in comparison). I hope it came off as constructive.


I really wish Go would pursue sum types instead of generics


Sum types and Generics solve different but related problems. So they are not really comparable. I would like to see both get implemented.


I'd say it's really hard to abuse sum types, I wouldn't say the same of generics.


That is true. However, sum types don't allow us to abstract over common code. For example, a sum type wouldn't allow us to create a type-safe channel "fan in" function that works for all channel types.
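
For instance, a type-safe fan-in might look like this under the June 2020 draft syntax (a sketch, go2go playground only):

    import "sync"

    // FanIn merges any number of channels with the same element type T into
    // one output channel, closing it once every input is drained.
    func FanIn(type T)(inputs ...<-chan T) <-chan T {
        out := make(chan T)
        var wg sync.WaitGroup
        wg.Add(len(inputs))
        for _, ch := range inputs {
            go func(ch <-chan T) {
                defer wg.Done()
                for v := range ch {
                    out <- v
                }
            }(ch)
        }
        go func() {
            wg.Wait()
            close(out)
        }()
        return out
    }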


Would sum types without generics still be good enough? I think for things like Option/Result we need both sum types and generics.


I don't think you would need user generics for this to work.


Without generics, sum types would have to be a built-in like maps and arrays.

With generics, discriminated unions can take on all sorts of user-defined shapes.


Without generics, structs can take on all sorts of user-defined shapes. It's just that the types of its fields are fixed at declaration.

Without generics but with sum types, the sum types can still take on all sorts of user-defined shapes. It's just that the types of its various alternatives are fixed at declaration.


Non-generic sum types are useful but you can’t build an option or result type out of them, so the usefulness is somewhat hampered.


But you don't need to, generic Option and Result types could be built as part of the standard library.


Yes, but then you need neither sum types nor user-defined generics; you can just build in whatever sugar is useful or necessary (possibly repurposing existing sugar, e.g. the `select` statement).


sum types are still super useful imo without generics.


Aren't the new interfaces from the draft, where you can list a bunch of possible types, sum types?

edit: oh. no.

https://go.googlesource.com/proposal/+/refs/heads/master/des...


I don't use Go myself, but knowing its philosophy I wonder if they'll end up replacing the idiomatic "if err != nil" with a generic optional type even if they end up implementing generics in Go.

For one thing it would generate a massive amount of churn to upgrade existing code, and if you don't update you'll quickly end up with very ugly mixed error handling patterns. On top of that Go seems to really value compilation speed, so I suspect that they won't want generics "contaminating" interfaces all over the place only to do error handling.

I'm really curious to see (from the outside) how all of this is going to coalesce in the end.


Generics aren't the gap (multiple return values are already generic), but rather sum types. Sum types are what allow you to express that this is either None/Some(T) (Option) or Ok(T)/Err(E) (Result) or Nil/Cons (List) or etc.


Sum types are what give you compile-time safety over those options. Run-time safety and developer-intent-signaling is entirely feasible with just generics.


Right, but as previously mentioned, Go's multiple return values are already "generic" and already signal developer intent. If you have a library method that returns (int, error), every single Go developer will check the err first before using the int. User-defined generics don't improve this use case.


Result types can, at runtime, by making the err case panic if the value is accessed, instead of just returning a zero value. They let you move past intent and into enforcement. Multiple returns are nothing but intent, and cannot be made stronger.

You can do that without generics, of course. But the developer overhead is large enough that it effectively does not happen, as you have to redo that by hand for every type. That's what generics bring - ergonomics good enough to stop using less safe workarounds (e.g. `interface{}`, multiple returns).
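
For example, the non-generic version of that enforcement has to be restated for every concrete type, which is exactly the overhead described above (hypothetical sketch):

    // IntResult is a hand-rolled result for one concrete type; without
    // generics, every other type needs its own copy of this boilerplate.
    type IntResult struct {
        val int
        err error
    }

    // Unwrap enforces the invariant at runtime: touching the value of a
    // failed result panics instead of quietly yielding a zero value.
    func (r IntResult) Unwrap() int {
        if r.err != nil {
            panic(r.err)
        }
        return r.val
    }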


Take a look at the comment tree starting here in the Generics announcement thread from the other day for some discussion of Option[V] under the current proposal:

https://news.ycombinator.com/item?id=23545361

For what it's worth, returning `(value, err)` is conceptually the same thing as returning a Result[V]. You can ignore the error case on a result / the None case on an option as easily as you ignore the `err` today.


> You can ignore the error case on a result / the None case on an option as easily as you ignore the `err` today.

The point of a result type is that you _cannot_ ignore the None case. Any method that would provide you with the value will also check that the error is not present.

In comparison, you can happily ignore `err` in Golang and continue with an invalid `value`.


I see what you're saying, and while I think you're right, I think a lot more of the value is in being able to use `Map()`/`FlatMap()` to avoid thinking about error checking at all, as I laid out in the second part of this comment: https://news.ycombinator.com/item?id=23549396 . The convention of returning `(value, err)` goes a long way towards enforcing the checking of errors; I have golangci-lint's `errcheck` linter enabled and basically cannot accidentally ignore `err` values. In any case, Option/Result types are extremely useful and I'm happy that with generics the language will be able to include them!


For an ergonomic implementation of Result and Option you need to be able to represent tagged unions/enums/sum types, and having pattern matching makes dealing with them much more natural. These features are not planned for Go (yet?), right?


> In comparison, you can happily ignore `err` in Golang and continue with an invalid `value`.

I'm struggling to envision a Result type that requires you to be more explicit than `foo, _ := fallible()`. Seems like `fallible().Ok()` and similar are strictly less explicit.


To do anything with a Result/Optional, you have to explicitly get the value inside the box at some point. But the container abstraction also gives you nice composition abstractions instead of the uncomposable top-level `val, err := do()` destructure.

Though perhaps not quite as compelling without real sum types. I'd have to play with Go's generic-typing sandbox more to form a stronger opinion.


> returning `(value, err)` is conceptually the same thing as returning a Result[V]. You can ignore the error case on a result / the None case on an option as easily as you ignore the `err` today.

No, you can't. If you want to pull the value out of a result and ignore errors, you have to explicitly do so. With `(value, err)` style error handling, you can write bugs by simply forgetting to check `err`.


I'm not understanding this. Are you saying an Option[V] would reduce the number of lines of code (the explosion) that Go code uses for "err := ...; if err != nil"?


I don't think it would in Go. You'd still end up with

    result := ThingThatReturnsOption(...)
    if result.Error() != nil {
        // ...
    }
happening in general, or some other equivalent construct.

The main point of having Option in this case would be to make it so that where Go programmers normally write

    result, err := ThingThatMayError()
you can get precisely one of a result or an error. At the moment with the current calling convention it is possible to both return a result and an error.

However, I will say that while in theory this is advantageous (and I mean that seriously), in practice this is nearly a non-issue. I don't think I've ever had a bug because I had both things and misused them. I expect a dozen or more "Option" implementations to pop up nearly overnight once this is released, and for Go programmers to settle pretty quickly on not using it.

In non-Go languages, Options can have additional features that make them yet more powerful, such as chaining together optional computations in a way that makes it easy to shortcircuit whole computations, e.g., in Haskell:

     do
         x <- optionalFail
         y <- somethingElseFail x
         z <- moreMightFail x y
         return (extractFromZ z)
In Haskell, assuming the right definitions of the various functions, while that may look like it's not handling errors, it actually is, because the machinery behind the Option type (called Maybe in Haskell, or Either when you want to carry an error value) is handling all the short-circuiting. Go comprehensively lacks the features necessary to make that sufficiently pleasant to use that anyone will, though.

It is not unique in lacking those features; most languages are missing at least one thing needed to make this easy enough to use that people will, but Go does quite comprehensively lack them. It's not just a matter of adding this one little thing or that other thing; it'd be a whole suite of necessary changes. E.g., you might be able to write an .AndThen(...) function to operate on an Option type, but it's going to be too inconvenient to use, even post-generics, and even if you force it because it's the Right Thing to Do in languages that aren't the one you are currently programming in, it's still going to be a lot of disadvantages for not much advantage. Personally I don't value "Doing the Right Thing in language X while working in language Y" very highly, but some people seem to.
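
For reference, an AndThen over an Option like the one sketched earlier might look like this under the draft (go2go only). Note it has to be a top-level function: the draft does not allow methods to introduce new type parameters, which is part of why the ergonomics stay awkward:

    // AndThen runs f only when o holds a value, short-circuiting otherwise.
    func AndThen(type T, U)(o Option(T), f func(T) Option(U)) Option(U) {
        v, ok := o.Get()
        if !ok {
            return None(U)()
        }
        return f(v)
    }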


When you chain multiple things that can go wrong with Result[T,E] or Option[V], it should be possible, assuming there are methods for chaining/fallback. E.g. opening a file, reading the contents, parsing the contents, etc. If all those 3 return Result[T,E] and you want the overall (parsed) result T or the first error to occur, then you should be able to chain that.


I think they are saying using sum types will remove a source of errors (forgetting to check for error conditions).

I also think part of their remark is on ‘either’, not ‘option’, but that’s not important for the point being made.


The author asks about using the same hash function as the builtin Go map. This was recently exposed in the standard lib at https://golang.org/pkg/hash/maphash/

Specifically https://golang.org/src/hash/maphash/maphash.go?s=1316:1346#L... links to the internal hash function.


Right, but you can only write strings or bytes to the hash, not integers, booleans, structs, etc. So the problem remains.
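
For reference, hashing anything else with hash/maphash today means encoding it to bytes yourself first, e.g.:

    package main

    import (
        "encoding/binary"
        "fmt"
        "hash/maphash"
    )

    func main() {
        var h maphash.Hash // zero value is usable; it picks a random seed
        var buf [8]byte
        binary.LittleEndian.PutUint64(buf[:], uint64(42)) // int -> bytes by hand
        h.Write(buf[:])
        fmt.Println(h.Sum64())
    }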


For what it's worth from the Peanut Gallery™, I find the proposed syntax quite readable and am happy there are no angle brackets <> stabbing my eyes.

It seems powerful enough to cover most cases of generics. Architecture astronauts will never be happy, but tough.


I'd definitely prefer angle brackets. With everything being parentheses I just get lost keeping track of what is what - they're now used for type parameters, parameters and return values, one after the other with no separation, and two of them are optional. Quite confusing if you ask me.


And yet, if you think about parsing it, they're all ways to group things together. Avoiding adding reserved characters (see also the discussion about adding ternaries) is a worthy goal to pursue. Keeps the parser(s) simpler too.

An IDE / coloring scheme can help make things look more distinct if need be.


Comparing it with the builtin hashtable:

* the custom one: hashtable.Table(string, int)

* the builtin one: map[string]int

The syntax forms are quite different. Wouldn't it be better to make them consistent? Is it so hard to achieve this?


It would not surprise me to learn that the parser has special rules for the token "map".


Yes, the fact that "map" is a keyword and "Table" is not really makes trouble for the parser. But I think we should think about the road ahead; there should always be a solution.


lisp


For a long time, Go's designers have been saying that it doesn't have generics because they don't think the way they're done in other languages is "good enough".

Looking at this, I don't see any fundamental differences from, say, C#. They're even using interfaces for generic constraints. Did they decide that they are "good enough", after all?


Don't think the type system itself was up for debate - the Go team isn't necessarily going to invent new kinds of types or advance research in type theory - that's already been done enough by Haskell, Scala, and others. Whatever system comes into Go will have already been done somewhere.

The "good enough" is more about drawing the tradeoff line in a spot that's useful and clean to maintain, and the place where the core team and the community draw that line is finally converging.


But that's the thing - they ended up drawing that trade-off line in more or less the same spot as most other mainstream PLs with generics. I don't understand why it took so long to basically acknowledge that prevailing wisdom is correct.


A refusal to acknowledge that prevailing wisdom was correct is what got us Go in the first place. I think of the value of the project as being a re-examination of every aspect of programming from first principles. Some decisions will change; some will be considered to already be optimal.


> For my design, I decided to enforce that both key and value types are comparable, so I could build a simple demo using two hashtables as an index and reverse index with flipped key/value types.

Surely the demo works equally well without this extra constraint? If the demo had a generic function for creating a reversible mapping it would have been necessary, but as it stands, this extra constraint comes across as avoiding having to write the less aesthetically pleasing

  type Table(type K comparable, V interface{}) struct {
    ...
  }


Yep that is true. I wanted both values to be comparable for an easy demo but would write it as you've suggested if I intended to make this a general purpose package.


What I mean is that as written, the demo doesn't need V to be comparable.


Oh wow, for some reason I totally zoned out and finally understand what you mean. Yes, you're right! I should fix that.


The limitation around implementing methods on built-in types seems unfortunate. If I understand how Go interfaces work correctly, that would mean that you also can't implement your own interfaces on built-in types. Which would seem like quite a severe restriction in being generic over them. Perhaps I'm missing something?


You can only add methods to types that were defined in the current package. As a consequence, you can't add methods to built-in types. But that's rarely something you want to do anyway (I can't think of a real-world situation where it would be useful); what would make sense is creating a new named type and adding methods to it:

    type myInt int

    func (mi myInt) f() myInt { ... }


The alternative is to do something like:

    type Int int

    func (i Int) Hash() uintptr { /* do the hash */ }

But I didn't want to deal with it in this code. I agree that it isn't optimal and would be curious to see if the situation can be improved.


If you assume that the std lib adds a collections package, there could be helper types for primitives. No Java-style autoboxing, but more in the Go style. These could also be used in a refactored math package.


For example, the caller could convert `int` to `myInt`, which implements the appropriate interface. This is probably more idiomatic than using a bare `int` type anyway.


What would be a real-world example of what you're referring to ("implement your own interfaces on built-in types")?


Perhaps like overloading in C++? Hash functions are sometimes defined this way (https://abseil.io/docs/cpp/guides/hash#making-hashable-types)


You can do cool things like Ruby's `2.days.ago` in languages that allow you to implement traits on any type.


`2.days.ago` is not "cool"; it's an example of some of the worst things about Ruby. Suppose I came across a cutesy expression like that in a program I was reading. How in the world am I supposed to figure out what code is invoked by that expression and where that code lives?

It's emphasizing writeability over understandability, which is totally backwards from the point of view of engineering robust systems.

Except it's barely even writeability, it's some bizarre notion that code should read like English where possible. And of course, since it's not actually English, it's only possible in limited ways and trying to generalize past those limits will break. So you have to learn exactly where the limits are anyway, which is as much effort as learning to use a proper library, except harder because a proper library will have the decency to stay in its own namespace.


Interesting article. I think that using <> for the type specifiers would possibly be better! For example one could quickly end up with something like:

  func (obj *SomeType(Q, Z)) Foo(type K, V comparable)(key K, val V) (*OtherType(Q, V), error) {
      ...
  }
... Lots of Infuriating & Silly Parentheses?


On <> syntax, this has been discussed quite a bit. () were chosen to avoid ambiguous parsing.

Also note that generic methods are not allowed in the current design, only generic functions.

The new draft design ( https://go.googlesource.com/proposal/+/refs/heads/master/des... ) discusses both of these points. I recommend everyone who is interested in this topic read the design spec, ideally before commenting.


On the <> syntax, why not improve the parser? Every language that uses <> for generics has a parser that is able to distinguish between generics and the "less than" operator, and it seems odd that Go developers think it can't be done efficiently.


C++ and Rust require extra disambiguating syntax in some contexts (C++ the `foo.template bar<T>()` syntax, Rust the "superfish operator" `foo.bar::<T>()`). Java works around the problem by awkwardly putting the generic argument list in front of the method name (`foo.<T>bar()`). I don't know about C#.


C# is `foo.bar<T>()`, I’m not sure what compromises that had to make for that to work but from an end-user perspective it works well.


They parse it one way, and if they figure out the other parse option was right, they go back and fix the parse tree.

Basically, unlimited look-ahead.


Is it somehow a problem?


(teeny tiny note: turbofish, not superfish)


Oops :D


Using [] is the optimal choice for languages in general, but they can't even use that because they blew away that syntax for indexing.


Scala uses [] for generics and () for indexing, which sort of makes sense given that in the end arrays are just functions.


So are the uninstantiated generic types ;-)


Go developers like to claim that the reason the Go compiler is fast is because it's "simple to parse". Unfortunately this doesn't make a lot of sense as parsing is typically only 1-5% of the total compile time.


It's not only about the main compiler. Go has tons of third-party tools (linters mainly) that can parse go code, because it's so easy to write one. Most of them wouldn't exist if parsing was a PITA.


Those almost universally use the go/ast package provided by the stdlib.


It is not a question of "Improve" but a question of dealing with tradeoffs. The Go parser is built so that at every stage it is totally unambiguous what the parser has to do.

This reduces the amount of state that the parser has to carry around and makes the error messages for syntax errors easier to generate.

Languages that use <>'s for generics have to look at a larger amount of the code when parsing to work out what to do.
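
The standard example of the ambiguity (it appears in the design draft's discussion of syntax):

    // Without type information this line cannot be parsed unambiguously:
    a, b = w < x, y > (z)
    // either: a = (w < x), b = (y > (z))   // two comparisons
    // or:     a, b = w<x, y>(z)            // call of w instantiated with x, y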


In my opinion, the new design doc clearly characterizes this as a design goal of the parser, rather than a hard constraint that cannot be resolved.

"Resolving that requires effectively unbounded lookahead. In general we strive to keep the Go parser simple."

It's totally fair to question the tradeoff - should having a simple parser outweigh the potential ergonomic benefit of <>? I don't know. The forum for this is probably the golang-nuts group.


The design doc doesn't say "simple", it says "efficient".

I think "simpler" would have been a stronger argument. While using <> would make the parser more complex, I have a hard time seeing that making a meaningful performance difference in a compilation context. Maybe if you're parsing a lot of Go without actually compiling it, but that doesn't seem like a use case to optimize for.


It's all a matter of where you accept the complexity - in the compiler, in the language, or in the downstream applications. In my view it's an obvious choice to accept a more complicated compiler in exchange for simpler downstream applications...


The main reason is probably that omitting <> for generics makes the symbol table unnecessary during parsing. Go and D are two modern languages that avoid the symbol table like the plague for faster compilation times, i.e. the code should be parseable without having to look things up in a symbol table.


The golang approach has been to dumb down the parser, at the expense of making it more complex for users.


there is no complexity tradeoff for users here.


Angle brackets are easier to parse for humans.


Ah, right, I missed this:

https://go.googlesource.com/proposal/+/refs/heads/master/des...

Hmmm ... not sure how I feel about that. Ok so my original example becomes this:

  func (obj *SomeType) Foo(type K, V, Q comparable)(key K, val V) (*OtherType(Q, V), error) {
      ...
  }
Which is only slightly better [without the generic type].

Don't get me wrong, I'm a big fan of Go, but I'm kind of on the fence about generic types. I've made do with interfaces and type assertions for many years and I'm OK with it (honestly, glad to not be a C++ or Java programmer anymore).


> Ok so my original example becomes this

No, you still don't understand. Methods can't have additional type parameters, only functions. This part is not allowed in your "example":

    (type K, V, Q comparable)


The doc says:

> Generic types can have methods. The receiver type of a method must declare the same number of type parameters as are declared in the receiver type's definition. They are declared without the type keyword or any constraint.

So my original example should have been:

  func (obj *SomeType(K, V)) Foo(key K, val V) (*OtherType(K, V), error) {
        ...
  }
(hopefully this is now correct!).


Exactly. Thanks!


I have never been able to read the syntax with <> well, so I am so happy they found a syntax based on parens. It is perfectly readable to me.


After seeing and trying dlang syntax for generics I was hugely disappointed that rust chose angle brackets.


I'm ready to bet that with <>, you'd introduce ambiguity in the grammar, making parsing code (and, as a consequence, writing tools for the language) much more complicated.


I agree, I think from a visual perspective the type declarations blend in with the rest of the type/function signature. I'm curious how quickly I'll get used to it.


There is absolutely no way this gets the vocal "conservative" core of the Go community's approval. For better or worse.


The <> people are the conservatives, because they are afraid to give new things a try. They want every programming language to look the same.


Consistency is important, though. Using C-family syntax makes it easy to get people to dip their toes into something without spending time learning a new syntax. Look at modern languages like Rust, Swift, Dart, and Go: they all use C-style syntax.

I'm not accusing you of advocating Go's adopting a different syntax for generics solely to be contrarian - but using angle brackets for type parameters and template parameters is a proven technique with few downsides - and certainly none that would be fixed by using any other syntax I'm aware of.


Well, Go already picks and chooses elements from different languages, e.g. Pascal-style declarations ("x int" instead of "int x") and C-style blocks ("{...}" instead of "begin...end"). So I'm not surprised that when deciding on the syntax for generics they look more at what fits best for Go than at what other languages are doing...


I do prefer TypeScript's version of Pascal-style declarations now. Though I really don't like how Go doesn't have a colon between the name and the type; it makes it harder to scan through parameter lists when you aren't using an editor with syntax colouring.

(I had to write a lot of Go code in Notepad for a while in 2017 - never again)


Now that Unicode is everywhere, I think it's time to use more brackets than [], (), {} and <>.

Python is the worst, where dicts and sets both use {} and tuples and "grouping" both use (), each causing real problems.

Some of many possibilities:

⦅⦆«» ⟦⟧ ⟨⟩ ⟪⟫ ⟮⟯ ⟬⟭ ⌈⌉ ⌊⌋ ⦇⦈ ⦉⦊ ||

Yes, you need a way to type them on non-specialized keyboards.

An easy way is to type (( or [[ etc. and let the IDE convert it.


To me this is the ultimate "developer chasing shiny" at the expense of everything else.

Especially in the context of Go that hasn't even managed to use anything other than parentheses in its func definition syntax. No language has saturated the bracket options that come with the standard US keyboard so much that it's worth bringing in characters that you need additional tooling to type.

If (), <>, {}, and [] aren't enough for a language, something has gone very wrong.


While you normally don't regard them as bracketing, we also use "", '' and `` to bracket things.

In Perl, / / is used to bracket regexes; in C and many C-like languages, /* */ is used to bracket comments.


Dicts and sets are supposed to both use {}; {a,b,c} is (conceptually) shorthand for {a:dummy,b:dummy,c:dummy}, where sets (in some cases, including python, opaquely) are actually dicts without the space to store values. (This also shows up in set theory and related mathematics, although there it's dicts that are unusual.)

The rest of your post is a fair point, although I think you underestimate the value of plain ASCII that can be typed on an unassisted keyboard.


That's a lot of effort for little benefits

I can already see a lot of edge cases

Most notably font support

I like ligatures and ide/editor support for them, but I'm not sure I like this


The benefit is purely readability.

You read code 1000x more often than you write it, so a little extra effort for even a small readability benefit is worth it.

Most fonts have had quite full Unicode support for many years.

I'm often disappointed in how reluctant programmers are to use modern technology. The public thinks we're on the cutting edge of advanced technology. If they only knew :)


This is just unnecessarily salty. Fonts may have full Unicode support, but do keyboards? I don't want to have to press some weird key combination to type a symbol I might type hundreds of times each day.

Honestly, it's just common sense.


> The benefit is purely readability.

This year marks my 30th year as a professional developer, but my first "program" was copied from a magazine:

    10 PRINT CHR$(205.5+RND(1)); : GOTO 10

It generated random maze-like structures on screen using two PETSCII symbols.

I think everybody here knows this one-liner.

So it's not symbols I don't like, it's the lack of ergonomics

My hands started hurting when I had to type a lot to write simple symbols like }; that's why I switched to a US layout, and things have gone a lot better for my knuckles.

What really surprises and disappoints me is how many people appoint themselves engineers or computer scientists, but take for granted that their ideas are good and that the rebuttals are just "resistance to change", without even reading the most basic studies on the topic or testing their theories in the field.

Implement your idea, measure the results and then we'll discuss of why they didn't take off.

Because they won't, believe me.

There is a reason why I use a font with a slashed zero; otherwise a capital O would look too similar.

What do you think of ‹‹ vs « ?

Do they really look different enough to be useful?

Cutting edge doesn't mean "stupid"


Even Spanish keyboards only have a dedicated Ñ key, so you have to use multi-key sequences to input ÁÉÍÓÚÜ etc.

And even after typing in Spanish for five years, it's still objectively more annoying to write está (6 keypresses) vs esta (4 keypresses). And I'm always having to go back and correct the key sequence because I've written ´a instead of á.

Also, just today I saw someone use "⇸" (crossed out arrow) on HN and it was so tiny with HN's font that I had to zoom in to see what it was which only inhibited their message.

You have to consider this sort of overhead when making decisions about the glyphs you are going to impose upon everyone when designing a language.

I've seen pages and pages of bike-shedding over whether to use kebab-case over snake_case for a DSL because it's one less keypress to type a hyphen vs underscore.

I can appreciate your preference for those characters. It's nice how they can encode meaning in a single glyph where you would otherwise need multiple glyphs (like "!="). One solution that you can find people doing today are IDE plugins that simply rerender something like "->" as "→".

Seems like the best of all worlds. You get to work with higher level characters while the plaintext format remains in the most accessible common denominator.


I do like what Haskell and Scala have done for operator diversity ... maybe Go should learn from it:

https://www.reddit.com/r/programming/comments/a9tb2/secret_h...


I'd rather have

    [* *]

    [= =]

    [/ /]

    (* *)

    (= =)

    (/ /)
Just kidding, requiring specialized tooling to type isn't quite acceptable.


Half of those don't even render for me. French quotes seem fair though, since we throw $'s around in languages like every country needs a dollar sign on their keyboards.


And then you need a specialised IDE to type characters?

Or there's a bug in the IDE and all of a sudden you literally cannot type anymore?


It's a chicken and egg problem. Input systems could be a lot better, and we wouldn't need (as much) a specialized IDE. E.g., in Linux, I can type « and » if I map the compose key (which I do, but it isn't the default). For really wild characters, OS X has a pretty decent character search dialog (^+⌘+Space), though it has a few usability shortcomings that I really wish Apple would address. (I wish I could get a decent implementation of this in Linux, and I wish I could get a compose key in OS X.)

(I'm not sure I'm convinced that using such symbols in a programming language is a good idea, of course, just that our current input tech is barely trying.)

Also, APL happened.


How do I pronounce... any of those?


Looks pretty nice and readable. Looks to me like a strong indication that the updated generics concept is close to what should go into Go.


Small note: the runtime hash function is available in standard library (hash/maphash) since Go 1.14. You don't need that introspection magic to access it.


Offtopic: the blog post title alone is taking up my entire phone screen (iPhone 7)


I am not a Go programmer, but that just looked neither pretty nor readable...

Perhaps, perhaps, hard-core generics are not a great idea after all, and they should live only at the annotation level of the language (especially for container types, which is the only place where generics are truly valuable)...


The problem is that with the current proposal, Go severely lacks the type inference required for generics to look nice and clean.

A lot of the information is already in the definitions; there's no reason to repeat it, but Go chose to do that.

    // Reduce reduces a []T1 to a single value using a reduction function.
    func Reduce(type T1, T2)(s []T1, initializer T2, f func(T2, T1) T2) T2 {

    s := []int{1, 2, 3}
    sum := slices.Reduce(s, 0, func(i, j int) int { return i + j })
and the example from the referenced article is even more outrageous:

    t1 := hashtable.New(string, int)(8, func(key string, m int) int {
We have the function's types literally in the same expression, but Go requires you to repeat them.


> We have the function's types literally in the same expression, but Go requires you to repeat them.

It does not require it, the author chose to do it.

    t1 := hashtable.New(8, func(key string, m int) int {
is perfectly valid.

https://go2goplay.golang.org/p/SbEXpyVl-V3


In this particular case, the compiler can't infer the type of V if you omit the type parameters at the call sites:

  type checking failed for main
  prog.go2:17:3: cannot infer V (prog.go2:46:27)
  prog.go2:22:3: cannot infer V (prog.go2:46:27)
But generally you are right, yes. It is able to infer K at least.


Thanks for noting, but my bigger concern was with a different thing, which I'm also quoting from the doc (https://go2goplay.golang.org/p/LFo23rCKHXZ):

    func Filter(type T)(s []T, f func(T) bool) []T { ... }

    s := []int{1, 2, 3}
    evens := slices.Filter(s, func(i int) bool { return i%2 == 0 })
The type of the predicate is already in the generic definition; we don't need this verbosity. Other languages (I'd even say most languages used in dev work today) let us write simply something like slices.Filter(s, i -> i%2 == 0).



