
I'm a big fan of the assertion that a future Go 2 will never break Go 1 compatibility. I think if you need to make such significant changes to a language, you may as well just fork and rename the language (an opinion I can see many holes in).

I wonder: why not go further and say "there will never be a Go 2" in order to eliminate ambiguity about this? If a theoretical Go 2 will run all Go 1 programs, what would make it different from some Go 1.xx release? Some might interpret this post as saying that, but I don't think it quite does. It says "There will not be a Go 2 that breaks Go 1 programs."



> I wonder: why not go further and say "there will never be a Go 2" in order to eliminate ambiguity about this?

They did, five years ago. Albeit with an “if”.

https://github.com/golang/proposal/blob/d661ed19a203000b7c54...

> If the above process works as planned, then in an important sense there never will be a Go 2. Or, to put it a different way, we will slowly transition to new language and library features. We could at any point during the transition decide that now we are Go 2, which might be good marketing. Or we could just skip it (there has never been a C 2.0, why have a Go 2.0?).

> Popular languages like C, C++, and Java never have a version 2. In effect, they are always at version 1.N, although they use different names for that state. I believe that we should emulate them. In truth, a Go 2 in the full sense of the word, in the sense of an incompatible new version of the language or core libraries, would not be a good option for our users. A real Go 2 would, perhaps unsurprisingly, be harmful.


> Popular languages like C, C++, and Java never have a version 2.

Only someone who has never used those languages would state that; all of them have had breaking changes.


> We could at any point during the transition decide that now we are Go 2, which might be good marketing.

Among the (entirely?) dev-oriented consumers of Golang, would the shininess of "2.0" really outweigh the "ugh, documentation is going to get harder to find" and "ugh, I now need to increase my auditing of dependencies" and other similar fatigue?

Is Google universally good at marketing?


C had a breaking change this year: pre-ANSI C programs need to have all of their function definitions changed for them to be compatible with C23.


But Java absolutely had versions that broke old code


Java went through this. There was a Java 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, and 1.7. Then Java decided, "You know what, we aren't breaking backwards compatibility, so instead of naming things 1.x, let's just say Java 8, 9, 10...21."

I think that ultimately makes sense.


Ironically they then went ahead and made a massive change in Java 9 that, for the first time in Java's history, broke pretty much everything. Still angry about that...


Well, they removed code that users were always discouraged from using, and it was always explicitly stated that there were no guarantees of any kind when using it. The code was not named "Unsafe" and left undocumented (to my knowledge) by coincidence.

So while Java broke some big libraries/frameworks (not "pretty much everything", though), it can't really be blamed on them.

In fact, look what Go has: https://pkg.go.dev/unsafe

> Package unsafe contains operations that step around the type safety of Go programs.
> Packages that import unsafe may be non-portable and are not protected by the Go 1 compatibility guidelines.

Let's wait until Go has reached Java's maturity and see what happens when they change this package ;)


> So while Java broke some big libraries/frameworks (not "pretty much everything", though), it can't really be blamed on them.

I think one of the reasons people use Java is to get access to those big libraries/frameworks.

I've worked at a few companies that used Java during the transition, so maybe I had access to about 10 Git repos that underwent this transition.

I think pretty much all of them required some tweaking, e.g. adding extra dependencies in Maven, when moving from Java 8 to Java 11. I actually became the "go-to" person for these transitions, having worked out what incantations were needed.

All of those repos, to this day, despite the effort that was put into them during the transition, now print out warnings about things being unsafe. The companies just ignore those warnings. I have 15 years of Java experience and I don't know what to do about them. My understanding is this is normal in the Java world now.

They are just normal web applications or REST services using databases like PostgreSQL, using e.g. Spring Boot, Tomcat, etc. Maybe those libraries do things they're not supposed to, I don't know. I have never used sun.misc.Unsafe in my code or anything like that.

Perhaps if I spent days studying the problem I could understand what was going on and what to do about it (although probably not, as the problems might have been in third-party dependencies.) But this wasn't money the companies I worked for wanted to spend. But anyway, my point is that spending days fixing stuff after an upgrade != backwards compatible.


Heh, once Spring Boot is a dependency, it pulls in something like 100 library jars just to show "hello world" on a web page. So I'm pretty sure everything would be broken.


You likely have a dependency of a dependency of a dependency that uses it, and thus get the warning.


"Let's wait until Go has reached Java's maturity and see what happens when they change this package ;)"

They did a while back, actually. Compare https://pkg.go.dev/unsafe@go1.0.1#Pointer with https://pkg.go.dev/unsafe#Pointer , in particular the modern, very precise description of exactly what you can do with an unsafe.Pointer. I'm not sure when the cutoff for that was, but it was a while ago, yes. Still, it didn't do much.
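
For a taste of what those documented rules allow, here is a minimal sketch of pattern (1) from the current unsafe.Pointer docs: reinterpreting a float64's bits as a uint64 (which is what math.Float64bits does under the hood):

    package main

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        f := 1.0
        // Convert *float64 to *uint64 via unsafe.Pointer; allowed because
        // uint64 is no larger than float64 and shares its memory layout.
        bits := *(*uint64)(unsafe.Pointer(&f))
        fmt.Printf("%#016x\n", bits) // 0x3ff0000000000000
    }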


Interesting!

Personally I'm not a big fan of either Java or excessive backwards compatibility. But I can't help noticing that a lot of people praise Go for things like backwards compatibility while despising Java at the same time, even though both languages are extremely similar in lots of regards.


It is not clear whether people despise Java itself or Java's backward compatibility. I haven't seen anyone despising Java's backward compatibility.


> I haven't seen anyone despising Java's backward compatibility.

I'm happy to have beers, just so that you get the chance to see someone like that. :-)


Hopefully there should never be a repeat of that now that they have strongly encapsulated JDK internals. My understanding is that (nearly) all of the migration headaches from 8 to 9 were caused by libraries that were improperly using JDK internals.


Devil's advocate: anything that's possible for a downstream user to access is fair game for them to use. You can certainly mark it as internal and be explicit that you reserve the right to break it later, but if it's actually possible for users to do, it's not "improper", even if it gets broken later.


That's why they sealed those holes shut, and only allow some of them with deliberate end-user command-line flags, so that anyone wanting to go that way only has themselves to blame.


No, there were lots of just outright breaks.

JavaEE being removed, along with a package frequently used for Base64 encoding (with no replacement until several later versions). JavaFX being separated out so it's not bundled anymore, along with removing javafxpackager (since returned as jpackage). Java Web Start being removed.

Then there were all the borderline stuff. The locations of files inside the JDK all changing, like "rt.jar" went away and a lot of tools depended on that. The concept of an installable JRE was removed entirely and along with it the whole way people were used to distributing Java apps was deprecated with no replacement until much later (and the replacement was much worse in some ways). Suddenly spamming warnings to the console if you use widely used packages (which breaks anything parsing the output).

Even just changing the version number broke a lot of stuff because code had been written to assume the convention that Java version numbers started with "1."

Then when they went to 6-month releases soon after, that broke a lot of stuff, because the whole ecosystem made the design assumption that Java releases were rare (stupid stuff like using enums to represent versions; the Java guys bump the version number in .class files on every release even if nothing changes).

Then people tried to use the new module system, but that broke the world too and for little/no ROI, so eventually everyone gave up. Now the ecosystem is full of broken module metadata that's there but doesn't work, and if you try to use it and report bugs they get closed with status: "don't care".

Frankly, a lot of the dust has still never settled; it was a very damaging time for the Java community. Backwards compatibility above all else, please, and that means NOT removing widely used features that were heavily developed and advertised as the right way to do things for decades.


I only see eternal stagnation as the alternative, and surely no one wants that.

Java does have very good backwards compatibility and they make every change with that in mind, but if you are big enough, no matter what you do, someone will surely depend on some stupid thing they should never have done in the first place.


I was pleasantly surprised about 6 months ago when I went to run a game I wrote in Java when I was at university. Nothing huge, but still about 10k lines of code. I originally wrote it in Java 6, and it compiled and ran with no issues on Java 20.

I only used one 3p lib, otherwise just the standard library, which helped, but I was expecting something to be broken given it was over 10 years later.


Was it more than references to internal libs, which one was not supposed to use anyway?


Pretty much. Most of the breaks came from touching the likes of "sun.misc.Unsafe". Java versions 9 through ~17 added new JDK features (such as VarHandles) to allow for the safe interactions that sun.misc.Unsafe exposed. Libs had to update to use these new patterns, with 9 being the worst hurdle.

There was also a change to how packages could be named that messed with stuff. Two jars putting classes into the same package, like `javax.annotations`, was a big no-no that broke with 9.


If that's "pretty much everything", then nothing is backwards compatible unless they have the same hash...


One cannot make an omelette without breaking a couple of eggs.


Sun did the same for Solaris, jumping from version 2.6 to 7:

https://en.wikipedia.org/wiki/Oracle_Solaris#Version_history


I did not know that was the reasoning or logic behind the Java 8, 9, 10 ... numbering; that clears up so many things.


Also relevant is that Sun had pulled the same trick with Solaris a few years earlier - Solaris 2.6 was followed by Solaris 7. Bigger version numbers make for better marketing. I am skeptical that backwards compatibility was strongly involved.


Apple also did a similar thing with OSX/macOS a few years ago - instead of making everything 10.XX they bump the major version (first number) every year now, continuing on from the 10 that the X represented, as if each version is the same increment as the jump from Mac OS 9 to Mac OS X (which was a jump to an entirely new codebase)

Android did that too, much earlier starting with 5.0. Previously the major version was something of an indicator of a major visual/conceptual redesign. 3.0 was the tablet version, 4.0 was the move to the holo design language, 5.0 was material. Then they just kept bumping the major version every year since.

I also assume it's just for marketing reasons.


I’d argue that the “everything is 10.x” for Mac OS was also basically marketing. :)


This happened at Java SE 5 (1.5), after 1.4. That was just as much a marketing decision.


It goes back even further: Java 1.2 was marketed by Sun as "Java 2".

https://web.archive.org/web/19991010063140/http://java.sun.c...


That was somewhat different from the way the internal stuff was numbered. You'd still see "1.5" and "1.6" everywhere when you asked the JVM for its version. 8 was when the JVM started matching the marketing (IIRC, might have been 9).


You might be overestimating the type of change required to break source compatibility. A benign example is adding a keyword. Let's say you want to add a new language feature and the community unarguably wants the feature and the right or only way to add it is with a new keyword. If you're not allowed to break source, then you can never add the feature.

I understand your argument for big things like changing the semantics of the language. But a backwards-incompatible change can also be rather benign.


It's not true that Go can't add new keywords. Now that we have Go modules, all Go code is explicitly annotated with the version of Go it was written against. We can add a new keyword in a later version of Go as long as the compiler can still also compile code written for the older versions of Go. (The go command tells the compiler which version of Go to use for each package it compiles.)
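
Concretely, that annotation is the go directive in go.mod (a minimal sketch; the module path here is hypothetical):

    // The toolchain compiles this module's packages with the language
    // semantics of the declared Go version.
    module example.com/hello

    go 1.21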

What we're not going to do is abandon all the code written for older versions of Go, like Python 3 or Perl 6 did. Python is recovering now but it easily lost a decade to the Python 2 -> Python 3 transition, almost certainly unnecessarily. And Perl lost even more.


This is the TL;DR I wish was the first paragraph of the article!

Could this mechanism be used to patch up unfortunate evolutions in the standard library also? For example, all of the `WithContext` functions that could be folded into the (more common?) non-contextful versions?
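
For reference, the kind of duplication I mean (a minimal sketch using net.Dialer; error handling elided):

    package main

    import (
        "context"
        "net"
        "time"
    )

    func main() {
        var d net.Dialer

        // The original, context-free method...
        conn, _ := d.Dial("tcp", "example.com:80")

        // ...and the context-aware variant added later. Folding the two
        // together would be a breaking signature change today.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        conn2, _ := d.DialContext(ctx, "tcp", "example.com:80")

        _ = conn
        _ = conn2
    }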


Why can't it add keywords? Adding a new keyword doesn't break backward compatibility. It breaks "forward compatibility."


New keywords are like the textbook example of a backwards compatibility problem. It's probably why C overloads "static" so many different ways.


You can sort of add new keywords backwards-compatibly using a trick called "contextual keywords": you require that they be placed in a syntactic position in which no identifier could legally go, and you maintain them as legal identifiers for compatibility. C++ used this trick to introduce "final" and "override" by moving them before the opening "{".


You mean new reserved words? For example, I'm quite sure when C# added "record" it didn't break backward compatibility, as old code that uses "record" as a variable name still compiles.


Go has made changes like that by adding new predeclared identifiers ("any" is an example, I think?), but there's a distinction between predeclared identifiers and keywords.
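
A minimal sketch of that distinction: a predeclared identifier like "any" can be shadowed by existing code, so adding one is backwards compatible, whereas a keyword never can be:

    package main

    import "fmt"

    func main() {
        // Legal: "any" is a predeclared identifier, not a keyword, so old
        // code that already used the name simply shadows it.
        any := "just a variable"
        fmt.Println(any)

        // By contrast, `func := 1` would be a syntax error, because
        // "func" is a reserved keyword.
    }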


Old code becoming a compiler error sounds like a backwards compatibility issue to me.


I guess it's a terminology thing. Coming from a C# background, I'd note that not all keywords are reserved words. Only new reserved words break backward compatibility.

C# has added some keywords (record, and, or) without breaking backwards compatibility.


I don't understand your example; plenty of languages add new keywords without breaking backwards compatibility. It's removing a keyword that would cause such an issue.


I have named a function 'foo' in the current version of the language. A future change makes 'foo' a keyword. My code was broken by adding a keyword.


This is not theoretical: Python broke a lot of async packages when they made “async” a keyword!


I guess some languages get around this by having a distinction between functions and keywords, with keywords not taking the () parentheses in the syntax. But really, if you're defining functions as keywords, you should just put them in the standard library.


I’d argue Rust’s editions are a good counterargument to that. The differences between editions really aren’t huge despite being breaking changes.

In theory I like the idea of backwards compatibility never breaking, but in reality some breaking changes really do make sense, and being permanently on the hook for a language feature that didn’t take X or Y into account when it was created doesn’t feel like a win.


The article does state that at the end:

> The answer is never. Go 2, in the sense of breaking with the past and no longer compiling old programs, is never going to happen. Go 2 in the sense of being the major revision of Go 1 we started toward in 2017 has already happened.


It becomes a semantic difference at that point. If Go is doing semver, and there are going to be no backward-incompatible changes, there's no reason to ever increment the major version. Everything is a minor version (compatible additions) and patch version (bug fixes).


I'm not sure that's a relevant distinction. If you take the stance that a major version has to mean breaking API compatibility with the previous major version, semver style, then their statement is equivalent to saying "there will never be a go2". If you don't take that stance, then their statement leaves open the possibility that, fifty years from now, we'll be at go1.102 and someone will say "hey, these numbers are getting pretty big, maybe we should just call this next release go2"; and that's fine. That's literally and exactly what Linux does: when the minor number gets big, it bumps the major version and starts the smaller numbers over. It's not semver, but semver doesn't have a monopoly on how software must be versioned, and leaving room in the language today to do that is totally cool.


> I wonder: why not go further and say "there will never be a Go 2"

Pretty sure they've said this in the past.



