Russ Cox on Go dependency management (threadreaderapp.com)
214 points by aberoham on July 27, 2018 | 70 comments



Long live reasonable BDFLs. They may annoy the community here and there, but the cohesion and gatekeeping provided by a quality BDFL is hard to match with a community. It's very hard for a community to have collective foresight, but even when it does, a gatekeeper often emerges anyway during review. Of course, it requires thicker skin on both sides of the coin, but in hindsight the tech benefits often outweigh the politics.

Sure, some communities (e.g. the Rust one) show that a sole person need not reside at the top. But that often comes with slow implementation/decision costs.


Rust would have died if it used the BDFL model. We had a huge disadvantage starting out, with a tiny team at a smaller company and little name recognition among the lead developers. The only reason Rust achieved any popularity at all was that we listened to the community when they were pushing us to compete 1:1 with C and C++ on performance. Cargo likewise would have been terrible without community input. The community was starting hostile forks before we course corrected.

Additionally, Rust development frankly moves faster than any other language that I know of. You need only look at the release notes for that. The reason is that people can truly be invested in the project, able to make technical decisions as well as contribute code.


I completely believe that, but I also believe between Graydon originally and yourself, the role kinda exists :-) I meant to give Rust as an example of community-driven, not of the slow decision/implementation cost comment, but I can see how I wrote that wrong; that part was meant for the bikeshedding communities that fork in every direction. Maybe I should revise my statement to say strong leadership is ideal instead of a directionless community.


Community input is different though. Communities need a leader (or small set of leaders) who's responsible for judging whether something fits into the overall vision of the product. If a whole community agrees that something should be this way or that, then sure, that's darn good evidence that it's a good fit and probably inherent to the usefulness of the product, and it would be unwise for them to ignore it. But sometimes you have 50/50 splits and a leader is there to make the ultimate decision based on their expertise and understanding of the problem-domain and how the product fits into it.


That's why we have the small language teams who have to sign off on things. It's not democracy.

I think the ideal situation is when the community develops a solution that works well for them and then leadership steps in to shepherd it through the integration process. Leadership isn't there to force their ideas onto the community. When done properly, everyone is empowered and nobody comes out of the process frustrated. This is something we have had to learn over time, but I truly think it is the right way to develop open projects like languages.


> But that often comes with slow implementation/decision costs.

It also comes with feature bloat.


No, it doesn't. "Feature bloat" comes from wanting to appease everyone by taking the union of all proposals. That is ineffective community governance: it comes from allowing people to say yes but not no to features. Believe me, people love to say no to features in Rust.


the cohesion and gatekeeping provided by a quality BDFL is hard to match with a community

There needs to be some thought leadership and gatekeeping, otherwise you get evaporative cooling, chaos, and devolution to the Least Common Denominator. All "leaderless movements" are subject to this.

https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporativ...

The problem with BDFLs is that when they're wrong, there is sometimes no pressure relief valve.

Sure, some communities (e.g. the Rust one) show that a sole person need not reside at the top. But that often comes with slow implementation/decision costs.

What you're describing is a republic. There is no dictator. There is no absolute ruler. However, there are democratically chosen leaders who need to make decisions that won't make everyone happy. Most importantly, there is no tyranny of the majority.


> Of course, it requires thicker skin on both sides of the coin, but in hindsight the tech benefits often outweigh the politics.

And this is exactly why the BDFL model is not sustainable in the longer term. I do agree that a consistent tie-breaker is good to have (the role of GvR, a stereotypical BDFL, was indeed much like this), but a BDFL eventually becomes a venerable monarch whose every move becomes controversial enough to make them a political entity regardless of their intentions. As GvR's case shows, this also puts a lot of burden on a single person.


And even in Rust, an earlier module system written by someone else was dumped wholesale without enough justification given to the author, who finally left frustrated.

Same with green threads, which at one point they were hell-bent on adding and then one fine day decided to dump. That also led to banning someone in the Rust community who was against green threads.

I think what people do not want to talk about, or do not get, is that unlike Oracle, Google, MS or Apple, Mozilla can't just add a few dozen highly paid people for language development. So this big community-led approach is more of a practical matter than some enlightened way to develop software.


I think you have some of your story slightly wrong. There were two previous build systems before Cargo; both of the authors did leave Rust, for different reasons. I'm guessing you are talking about the second. I don't remember the exact details, and I thought that they left a year before cargo was announced...maybe I have rose-colored glasses.

But with green threads, Rust had them for quite a while, on the order of years. I think the person you're talking about was in favor of the removal of green threads, which we were initially against, but it did end up happening. That person also left Rust but wasn't banned; we've only ever banned one person, and that was fairly recently, and it was for an extended harassment campaign, not over some sort of technical disagreement.

> So this big community led approach is more of practical matter than some enlightened way to develop software.

We believe that it produces better technical outcomes. Rust has more paid developers than many, many languages. Regardless, when making decisions, it's important to be informed, and there's no better way to be informed than to get a broad description of the problem space and other approaches to solving them. One person is never going to be able to be an expert in everything that a language needs.

There's also sustainability concerns with BDFLs, but that's a larger topic.


None of this is true.

> And even in Rust earlier module system written by someone else and was dumped wholesale without enough justification to author who finally left frustrated.

I don't know what module system you're talking about. The first, Cargo, was unmaintained for a long time before being removed; the original author was long gone. The second, rustpkg, was largely Graydon's work, not "someone else's".

> Same with green threads which at one point they were hell bent on adding and then one fine day decided to dump. Also leading to ban someone in Rust community who was against green thread.

We dumped them in no small part because the community didn't want them! And nobody was banned.

> I think what people do not want to talk about or do not get is unlike Oracle, Google, MS or Apple Mozilla can't just add few dozen highly paid people for language development. So this big community led approach is more of practical matter than some enlightened way to develop software.

In other words, you're saying we treat the community as just free labor. That's an incredibly cynical accusation, and one that bears no resemblance to reality.

I'm pretty confident in predicting that if Rust had adopted a BDFL style of governance, it would have been dead long ago. Community input was critical in decisions like removal of green threads and I/O reform that have helped Rust find its niche.


Not looking to continue that argument here, but I think that was about tjc's take on the fate of rustpkg.


I hope this apology and acknowledgment of the problems in the process appease Sam. By and large he has been the one driving the last resistance to the adoption of Go modules, and I can't help but think most of that is just the investor's dilemma.

At this point it would be great to no longer be talking about whether Go modules should be the thing, but to work together on making them work for everyone. (Which I don't think is far off, actually; Russ and co have done a good job post-reveal of addressing use cases that break with modules.)


I absolutely love writing programs in Go and at my current startup we're basing all our core backend infrastructure on it.

One thing I find extremely troubling about Go's dependency handling, though (and which I think at least partly contributes to people's frustration), is the mixing of the concepts of a namespace and a resource identifier in Go's package names. It's absolutely reasonable that package names should be unique; making the package name double as a storage location is not always a good idea, though (IMHO).

For example, we recently switched from using gitlab.com to our own private Gitlab instance, which runs under a custom (non-publicly resolvable) domain. Since all our packages were named like "gitlab.com/.../..." (to make them "go get"-able), we would have to rename all of them to make them "go get"-able from the new location. In the end we solved it by defining a git URL rewriting rule that rewrites the old to the new URL when fetching the repository. This ensured that we could work primarily on our new Gitlab server while maintaining the possibility to build packages on the gitlab.com server.

While that works, I think it would have been a better idea to keep package names and storage locations separate, or at least give users a way to override/modify the resolution of packages into locations using a more standardized approach. Such a mechanism could then allow for different name resolution strategies (like a package file, or a list of standard repositories) while falling back to the standard way of resolving packages into URLs for "go get". Granted, this would add another layer of complexity, but as we already need a separate file for proper vendoring (e.g. to specify versions), it wouldn't hurt too much to also put (optional) information about storage locations into that file.

Just my 2 cents, apart from this minor inconvenience I'm absolutely happy with Go and its ecosystem!


if you want to decouple the namespace from the repository location you can!

https://golang.org/cmd/go/#hdr-Remote_import_paths (read until the end)

or TL;DR: https://blog.bramp.net/post/2017/10/02/vanity-go-import-path...
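Concretely, the mechanism is an HTML meta tag served at the vanity URL; a minimal sketch with made-up names (the real details are in the go cmd docs linked above):

```html
<!-- Served for GET https://mydomain.io/foo?go-get=1 (names here are hypothetical) -->
<meta name="go-import" content="mydomain.io/foo git https://github.com/someuser/foo">
```

`go get mydomain.io/foo` reads this tag and clones the repository from the location it names, so the import path stays stable even if the hosting moves.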


You still can't really.

If I have a canonical import path of "mydomain.io/foo", and I host html there pointing to github or whatever, I now have a different single point of failure: the domain. If I lose control of the domain or the hosting behind it, every consumer's imports break.

Most package management systems have one additional layer of resolution: a centralized pluggable repository location which can be configured to some default, and then lookups are made against that instead of against a url specified in the source code.

For example, with cargo, I can open my ".cargo/config" file and specify "registry.index = whatever-url.com", and then for package "foo" it'll look it up on that registry.

If I forget to renew the domain, I don't have to change a bunch of import comments in my source code to update the registry location; I have to update one cargo config file.
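For reference, in current Cargo this is spelled as "source replacement" in the config file rather than `registry.index`; a minimal sketch, where the mirror name and URL are invented for illustration:

```toml
# .cargo/config.toml: redirect crates-io lookups to a mirror
# (the mirror name and URL below are hypothetical)
[source.crates-io]
replace-with = "my-mirror"

[source.my-mirror]
registry = "https://my-mirror.example.com/crates-index"
```

Source code and Cargo.toml stay untouched; only this one config file names the registry location.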


> If I have a canonical import path of "mydomain.io/foo", and I host html there pointing to github or whatever, I now have a different single point of failure: the domain. If I lose control of the domain or the

> Most package management systems have one additional layer of resolution: a centralized pluggable repository location which can be configured to some default, and then lookups are made against that instead of against a url specified in the source code.

TBH, these two seem to contradict each other. You don't want to use your own domain, because that's a SPOF that you could lose control over. Whereas the centralized repository is a SPOF that you never had any control over to begin with. Seems strictly worse to me. And the nice thing about the chosen approach is that it can coexist with what you seem to be wanting: gopkg.in exists and is exactly that. A centralized, hosted (and free!) registry for go packages with its own registration and authorization process.

What I find so beautiful about the go-get solution is a) that you can build any other mechanism I know of on top of it - i.e. the user can decide on their own tradeoffs of convenience/scalability/security/future-proofedness… and b) that domain names already provide an elaborate, globally agreed upon namespace that can separate ownership from administration and hosting, with its own takedown-semantics… It's decentralized, existing, robust infrastructure. It gives full control to the user. And it solves a whole bunch of technical challenges already, for free. It means the Go team doesn't have to manage user accounts of package maintainers and it doesn't have to have policies of what code to accept or not accept - as it can't enforce them.

It's really quite brilliant, IMO.


The point is that I don't have to change any of my source code in order to change what registry I point at.

I can use "foo.registry.com" with no changes to my rust source code.

If I want to switch go from "old.com/x/package" to "foo.com/x/package" I have to rewrite all the source code first.

That's the weird coupling which makes go not so configurable; code has no place dictating where it's downloaded from, yet go packages do that. I can't use the go package management ecosystem if my canonical import path doesn't resolve (e.g. dep won't work, go get won't work, vgo won't work).

I never have to worry about my rust code having a magic comment in it which makes the package manager incapable of fetching it from some arbitrary differently named mirror. That's simply not possible in a well designed package management system.


In our CI setup we ended up fixing it like this (which I still consider very hacky though):

    git config --global url.https://our-new-domain.com/.insteadOf https://gitlab.com/

This allowed us to keep our code unchanged and still be able to build it on different CI servers (e.g. our public as well as private instances). Specifically, for Gitlab our replacement config looks like this:

    git config --global url.https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_HOSTNAME}/.insteadOf https://gitlab.com/

This assumes that all packages hosted on gitlab.com are available on our local Gitlab instance, which can be problematic as well (for example if we imported an open-source package hosted on gitlab.com alongside our own code). In that case we would need a replacement config that only matches our groups/namespaces on gitlab.com.

I vastly prefer the Rust way of doing things though, as I don't think using domains as a namespace is a good idea. For example, most code hosting sites will shut down or be deprecated eventually (possibly even GitHub at some point), which for Go will lead to a lot of headaches, since there's no way to know where a given package will end up when it's no longer available at its original location.


> The point is that I don't have to change any of my source code in order to change what registry I point at.

Apparently it's a matter of opinion, but I consider that a feature, not a bug. It means the import path, as the identifier of the package, identifies the package. If two source files refer to the same name, they mean the same code.

> code has no place dictating where it's downloaded from, yet go packages do that.

That is still not actually correct. The webserver listening on the domain is dictating where the code is downloaded from. i.e. the owner of the identifier of the code you are downloading.

The basic assumption is, that if you take some code and put it somewhere else, you are assuming ownership, so you should actually own the identifier of that code. If someone wants Library X, they should be sure to actually get Library X, not your fork Y.

> I can't use the go package management ecosystem if my canonical import path doesn't resolve (e.g. dep won't work, go get won't work, vgo won't work).

To me, this seems to be a weird and unfair comparison.

gopkg.in exists. There is nothing in the Go ecosystem that is preventing what you want to have. The equivalent to Rust's cargo+crates.io isn't go-get+custom-domain, it's go-get+registry. And the equivalent to the import path "github.com/user/package" isn't (import of the "package" crate plus a mirror-directive in your cargo config), it's just the string "package". Because that's the identifier of the crate.

You could make the argument that Go still doesn't allow you to have mirrors, which is fair on the one hand, but will be solved with modules on the other.

Really, IMO it's just a fallacy to think of the import path as a download location. It's an identifier and the only reason the domain must resolve, is because go-get must consult the owner of that identifier what code to fetch. And DNS is the canonical tool to do that. Just like you need to resolve a hostname to get a Let's Encrypt certificate or use the Google Webmaster tools - you have to prove ownership of the name.


I believe the solution you're looking for is Athens[0], right?

[0] https://github.com/gomods/athens


> While that works I think it would have been a better idea to keep package names and storage locations separate

Then why didn't you? Go has had that ability since the inception of go-get. You could have used your own domain to name your packages from the beginning and avoided that.


I agree it sounds a bit “painful”.

But I think on the plus side it was very easy for you to reason about the tooling to get your simple solution going.

Thinking more of: it is great that you can kind of apply "dependency inversion of the tooling" without jumping through too many hoops to solve your problem.


Ah, it's a variation on the URL vs. URI vs. URN debate again.


One meta pattern I’ve noticed: If team members can’t talk straight with one another, the team can’t make good collective decisions.

Mealy mouthed walls-of-texts are just as unhelpful as one sentence dismissals. Sooner or later, the silent miscommunication will manifest itself and impact the product directly.

Needless to say, the health of a project relies upon the health of its politics.


A lot of reasonably successful open-source projects (e.g. almost the whole npm namespace) are rife with some breed of toxicity and nevertheless do "fine" (for some value of fine, if you never look at it and ignore the security breaches) in terms of technical merit.


I don’t think it’s just open-source projects: a great deal of all projects end up rife with some toxicity or another, and normally they do alright in the end. Human beings are a problem: we have emotions, we have feelings, we have needs, we’re not hyper-rational automata. But we mostly muddle through alright.

I don’t know if we can avoid this. What I think we can avoid are blow-ups, or at the least calm them back down again. We have to use our hearts, not just our heads, to say and mean, ‘yes, I acknowledge your feelings, which are valid.’ In the case of Go & dep, I think Sam Boyer feels let down because he’d been so proud of the idea that his tool dep would become the official tool. Rationally he acknowledged that it was the official experiment (maybe he even acknowledged that was an official experiment — I forget), but emotionally I believe that he thought that it’d become the official tool. To see it replaced is a colossal letdown.

We can honour him and his investment of time & energy while at the same time acknowledging that the tool he created isn’t the right tool long-term. This is something the net is terrible for: it’d be far better to break the news, in person, over dinner or drinks. There’s no good way to do it, of course, but sharing physical space helps.


A common theme in this thread is the relatively typical software development approach of 'well, I'll let them sort it out'. It's not lazy, and it sort of resembles delegating problems you can't handle to someone else - but the problem is if you do it enough on large scale projects you just end up with a huge stack of miscommunications, and a talented team that has put together a high quality product that completely fails to do what it needs to do. You can't successfully delegate without ensuring that everyone knows what their commitments are, and you need to pay attention on a regular basis to make sure that everyone still knows what the plan is.

It's easy to end up in a position where you're making that mistake if you're a small team used to shipping smaller products or features. Suddenly you're on a bigger project, maybe a bunch of the developers work at another company, maybe you've got a few million users instead of a smaller set like a hundred thousand, and the same planning and team collaboration techniques you've used are subtly harmful and produce bad outcomes a year or two down the line. I observed the WebAssembly design process suffering from some of the same problems early on because it was a group of engineers from multiple companies who were generally used to just working in tiny silos on experiments or features of their own, not A Collaborative Effort For Millions Of Users, much like what Go has turned into now that it's scaled out and not just a prototype language made for Google internal use.


For me the important point was that the implementer cannot delegate the community discussion to someone else. He needs to be part of it, and waste hours of important development time on bike shedding.

Or just ignore it and apologize afterwards for rendering the bike shedding useless.


Yeah, it is bike shedding to discuss dependency management, because dependency management is an "insignificant or unimportant detail" of a programming language. /s


You say "/s", but for some reason, many people still hold the sincere position that dependency management is something that can just be tacked on to a language as an afterthought.


Of course it can. npm, composer, etc all added it to their respective languages as afterthoughts, and many years (or decades even) after the language was out.


Given the whole mjs-vs-js situation with npm, it's more of a counterexample for adding dependency management late.


That's nothing, really. They have to use two different suffixes to distinguish two kinds of files, scripts vs. modules.

Meanwhile millions of people use it for 10 or so years for all kinds of deployments.

If only all languages had that unimportant problems.


It's bike shedding when you have no actual idea about the problems and solution, but decide on something anyway. The implementor in the meantime figured it out by himself and just did it.


> Dep does not support using multiple major versions of a program in a single build. This alone is a complete showstopper. Go is meant for large-scale work, and in a large-scale program different parts will inevitably need different versions of some isolated dependency.

This seems to be a major sticking point but I'm having trouble understanding why. I'm not familiar with any other package management system which supports simultaneous major versions in a single project. If two things require different versions of a package, those seem like separate projects to me. Is this to support dependencies which rely on the same package at different versions? If so, why not just call it a version conflict and fail until it's resolved?


> If two things require different versions of a package, those seem like separate projects to me.

You may be thinking of packages on the scale of frameworks. For example, if you have a Ruby on Rails app that mixes packages from Rails 3 and Rails 5, well sure, then you're in for some serious trouble.

But a lot of packages are much more narrow in scope. Rust has a lot of small crates for basic data structures that have their APIs change between versions. So you could end up with your logging package internally depending on some_data_structure=1.2, while your RabbitMQ client library internally depends on some_data_structure=2.3. As long as they do not expose some_data_structure in their API, you will never even notice (except maybe through increased binary size).


> why not just call it a version conflict and fail until it's resolved

Because it's unnecessary and it doesn't scale. In Rust, versions become part of the symbol name, and the whole thing just becomes a non-issue. Go did the right thing here.


> If so, why not just call it a version conflict and fail until it's resolved?

Because resolving it, in general, doesn't scale. If the dependency graph is

    A -> B -> D
    A -> C -> D

B and C have to coordinate their upgrade to Dv2, so as to not break A. But B and C might not know of each other and A might not even know about their dependencies on D. Worse, C might not be able or willing to upgrade to Dv2 in the foreseeable future, for lack of bandwidth, so B is kept from upgrading to Dv2 or has to break A.

If you allow both Dv1 and Dv2 to coexist, B can upgrade to Dv2, A stays unbroken and at some point, C can upgrade to Dv2 too. No need to coordinate the timing or undue rush.

Now multiply that graph with a hundred dependencies and a thousand projects A, and it becomes clear that avoiding the need of coordinated upgrades is a good thing.
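For what it's worth, vgo's answer to this diamond is semantic import versioning: the major version becomes part of the import path, so Dv1 and Dv2 are simply different packages that can coexist in one build. A sketch with hypothetical module paths:

```go
// In B, after upgrading to D's v2 line:
import d "example.com/d/v2"

// In C, not yet upgraded, the old import keeps working:
//     import d "example.com/d"
//
// Both can end up in A's final build; the /v2 suffix makes them
// distinct import paths, hence distinct packages, so no coordinated
// upgrade between B and C is needed.
```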


A Java bro told me they have lots of pain with version conflicts, since the Java world is so reliant both on tons of libraries and on strict interfaces. Apparently there's something called "dependency shading" to overcome this issue, though it seems like it has to be done by the dependency vendor, not the user.

NPM notably doesn't have any problems with different versions (afaik), since imported modules are simply variables in the importing ones, and not in a global namespace. I think it's also possible to implement that in Python due to similar semantics, though it's not currently available.


> Is this to support dependencies which rely on the same package at different versions?

Yes, that's right. See for example NPM's docs on how this works: http://npm.github.io/how-npm-works-docs/npm3/how-npm3-works....


> Is this to support dependencies which rely on the same package at different versions?

Yes.

> If two things require different versions of a package, those seem like separate projects to me

Basically we’re talking about libs that have version conflicts, but also very slow release schedules, such that the version conflict isn’t going to be reconciled any time soon; or where one of the deps may even be effectively abandonware, but “works just fine” stand-alone so nobody’s going to update it. It just depends on extremely old versions of common shared deps. These are, effectively, irreconcilable version conflicts.

The only thing to do, in most languages, is to fork one or the other dep to fix the conflict; or to split the project into microservices such that the conflicting components can live on opposite sides of a process memory boundary. Even these are sometimes impossible for office-political reasons.

> I'm not familiar with any other package management system which supports simultaneous major versions in a single project.

Node’s npm. Versions aren’t globally resolved; instead, each package effectively gets its own version resolution. Your root-level package gets a node_modules/ directory with your direct dependencies checked out into it; but your deps' deps are checked out into further node_modules/ directories that exist within their parent dep's checkout directory. It's somewhat as if each parent dep had vendored its deps, and had a vendor/ subdirectory; except that, in this case, the vendoring process is recursive.

When a given package require()s a library by name, then, it’s actually importing its own npm-vendorified copy of the library from its own node_modules/ directory, rather than looking in your package's root node_modules/ directory. (In fact, there is no root level; you have "package isotropy", where a dep NPM checked out has the exact same NPM-populated subdirs as a project you git clone'd yourself.) Since these libs aren’t entered into a global namespace by require(), rather just have the module returned as a runtime object for the parent scope to bind to a local variable, there are no runtime conflicts between the separately-vendored versions, either.

In a language without such runtime support, though, you would need your package-ecosystem tooling to reach into your downloaded lib and do some name-mangling of the exported symbols, and then either create a router symbol that resolves calls based on the lexical scope, or further mangle your deps' parents to call the dep by its mangled name rather than its original name. I think that’s what’s being proposed here, since Go has no such runtime support. (Effectively, in Go terms, you can simply think of it as all the references to git-http URL depspecs in import statements, being amended to qualify the repo name with the full git SHA, such that two different commits of the same project act as if they were two differently-named projects from Go's perspective.)
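To make the lookup rule above concrete, here is a toy sketch (in Go, since that's the topic at hand) of the walk-up-the-tree resolution npm relies on; the directory tree and package names are invented for illustration, and the real Node algorithm has more steps (file extensions, package.json "main", etc.):

```go
package main

import (
	"fmt"
	"path"
)

// resolve mimics the core of Node's require() lookup: starting from dir,
// look for node_modules/<pkg> in dir, then in each parent directory.
// The exists predicate stands in for the filesystem.
func resolve(dir, pkg string, exists func(string) bool) (string, bool) {
	for {
		candidate := path.Join(dir, "node_modules", pkg)
		if exists(candidate) {
			return candidate, true
		}
		parent := path.Dir(dir)
		if parent == dir { // reached the filesystem root
			return "", false
		}
		dir = parent
	}
}

func main() {
	// A fake tree: the root has foo@2; foo@1 is nested under dep "bar",
	// so code inside bar resolves a different copy than the root does.
	tree := map[string]bool{
		"/app/node_modules/foo":                  true, // foo@2
		"/app/node_modules/bar":                  true,
		"/app/node_modules/bar/node_modules/foo": true, // foo@1
	}
	exists := func(p string) bool { return tree[p] }

	fromRoot, _ := resolve("/app", "foo", exists)
	fromBar, _ := resolve("/app/node_modules/bar", "foo", exists)
	fmt.Println(fromRoot) // /app/node_modules/foo
	fmt.Println(fromBar)  // /app/node_modules/bar/node_modules/foo
}
```

Because each package resolves against the nearest node_modules/ first, the two versions never collide at runtime, which is exactly the property the parent comment describes.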


It's a relatively minor nitpick, but since version 3, npm flattens the package tree as much as it can and tries to bring everything possible to the top level node_modules directory. It'll still use nested node_modules where necessary to handle dependency version conflicts. A bit more info here:

http://npm.github.io/how-npm-works-docs/npm3/how-npm3-works....


If you worked with Go quite a bit 3-4 years ago, but have been out of scene since then... what is the best resource(s) for getting back up to speed with its current state?

I presume that the language spec itself hasn't changed DRAMATICALLY over the past few years. But so much is unfamiliar in the project structure / build process area (e.g. "vendor directories", modules, dependency management, etc).

All the tours and tutorials focus on the language syntax and standard library. Looking for a refresher or intro to the "how do you actually get work done in the best manner?" plumbing aspects.


I'd say: assume nothing has changed. Take a look at the active projects in your area of interest. A few ones:

https://echo.labstack.com

https://github.com/go-kit/kit

Web glue frequently recommended around here:

https://github.com/go-chi/chi

Github pages like https://github.com/avelino/awesome-go have become a good starting point for finding interesting projects.


You only need to pick up the whole `vendor/` thing (you can "vendor" deps by copying them manually if that works for you), get familiar with `dep` (as a lot of projects use it by default nowadays), and that's basically it.

Nothing that affects usual daily work has changed, and that's what I insanely love in Go. Recently I had to pick up and work on a project I did back in 2013, and it was so damn cool to see that it just works and compiles, and there were almost zero problems with dependencies (the only one I had was due to the removal of `code.google.com` and projects moving to github, so it was a matter of renaming the import path and introducing vendoring).


Personally, I try to review the release notes for subsequent versions of Go. The notable changes are generally highlighted upfront there (including the vendor directory semantics IIRC), see:

https://golang.org/project/#go1

As to the recommendations to learn dep: personally I didn't (we currently use an in-house tool at my workplace, and I expect we'll transition to the new Go modules when they're released as part of Go 1.11). The "modules" are actually not there yet; they will officially land as a prototype feature in 1.11, feature-gated and with the old GOPATH way still working.


I use go as my daily driver... and I would also like to see this. Especially now that there is a blessed module system. (I suspect that since this is just in pre-release, we won’t get any battle-tested advice for a few months, though.)


One issue I have is that I can't find a clear statement of dep's complaints about Go's new approach. There was supposed to be a talk, but I can't find the video.

Yes - Russ should have been much clearer much earlier and not let dep get as far along as it did. But dep's authors have mentioned issues with Go's new approach, particularly for small teams - I'd love a quick summary of those.


The closest to that is https://sdboyer.io/vgo/intro/ and https://sdboyer.io/vgo/failure-modes/. IMO these posts are hard to digest and ultimately unconvincing. But YMMV.


It was a long series of "I'll tell you my concerns tomorrow" posts that just stirred the pot for months without pointing to any clear, obvious problem.


I think I’m more offended that Russ said he was looking at Cargo as ‘best of breed’ for inspiration.. but ended up ignoring it entirely and doing his own thing.

Just suck it up.

You went off and did your own thing, and ignored everybody else, sticking pretty much to what you wanted to win the race in the first place (gopkg).

Right, now that’s out of the way, can we just get on with it and use modules?

Decisions have been made, they’re here to stay. It doesn’t make everyone happy... but hey, we’re all massively relieved the mess has been sorted out, so thank you.

Let’s stop these deep-soul-searching, he-said-she-said wastes of time and get started using it~


It's possible to study another product and learn things from it while still deciding to do it your own way. Deciding to do things differently is not an insult. It's one way innovation happens.


I think vgo's solution is better than Cargo's though.


This is kinda unrelated. But I am _very_ new to Go.

If one were to start a serious project with Go in 2018, what's the recommended way? Are people using Go modules? How do you deal with 3rd party deps which don't have go.mod files?

When I previously explored Go, keeping all code under GOPATH was recommended, but it seems things have changed with `vgo`.


Modules are still in a beta, pre-release kind of state; I think most people would stick to dep even for new projects right now.


Dep has issues too. It's safer to keep using dep if you already use it, but for new projects it's recommended to switch to Go modules now. Anyway, most of the time it doesn't change anything whether you lock versions with dep or use minimal version selection.
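For the curious, minimal version selection is simple enough to sketch in a few lines. This is a rough toy model (not the real vgo implementation; the module names and single-integer versions are made up): each module states the *minimum* version of each dependency it needs, and the build uses, per module, the highest of those minimums reachable from the main module.

```go
package main

import "fmt"

type mod struct {
	name    string
	version int // simplified single-integer version
}

// mvs walks the requirement graph from main, keeping the highest
// minimum version requested for each module name.
func mvs(requires map[mod][]mod, main mod) map[string]int {
	selected := map[string]int{main.name: main.version}
	var visit func(m mod)
	visit = func(m mod) {
		for _, d := range requires[m] {
			if d.version > selected[d.name] {
				selected[d.name] = d.version
				visit(d) // follow the newly selected version's own requirements
			}
		}
	}
	visit(main)
	return selected
}

func main() {
	// A needs B>=1 and C>=2; B 1 needs C>=1. MVS picks C 2: the highest
	// minimum anyone asked for, not the newest C that exists.
	requires := map[mod][]mod{
		{"A", 1}: {{"B", 1}, {"C", 2}},
		{"B", 1}: {{"C", 1}},
	}
	fmt.Println(mvs(requires, mod{"A", 1})) // map[A:1 B:1 C:2]
}
```

Note there's no solver and no lock-file heuristics here: the result is fully determined by the requirement lists, which is the reproducibility argument for MVS.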


Off-topic rant: I really hate how Twitter has popularized this devolved style of communicating by a flurry of fragments so much that we now have services like Thread reader (which I appreciate!) to undo that mess, and to make communication coherent again.


Ya.. this is an especially funny case, because I can't imagine Russ wrote this ad hoc on Twitter. This looks like something very much prepared and then dribbled out, which makes me a bit sad for the world.


Twitter has that feature where you can post multiple tweets in a chain. Honestly, why have a character limit at all at this point...


So it is accessible over SMS?


How many users actually use it over SMS though?


Oh right, that's a thing (that was never available here in Germany).


I really dislike reading the tweets in this bizarre thread reader. The page is much too wide, the line spacing is inappropriate for the font size, and that's not to mention the lack of contrast.

Even worse, I could not find a single link back to Twitter, the canonical source.

It's just as easy to read on the Twitter website, which provides additional context and allows for interaction.

https://twitter.com/_rsc/status/1022588240501661696


The great rants in Usenet and later Google Groups were awesome and searchable. I miss those.


It's _Russ_ Cox, not Ross. Can the admins change the title?


Fixed. Thanks!


Calling him out like this is pretty bad, why not make this about the Go dependency management instead of one individual?


Because it's a long Twitter thread of messages that he posted discussing Go's dependency management. It's not a general discussion between various people; it's sort of like he wrote a blog post on the matter.



