LeftPad and Go: can tooling help? (divan.github.io)
119 points by divan on April 1, 2016 | 59 comments



> A little copying is better than a little dependency.

I actually disagree with this. Given that we're already working with projects that can have several hundred dependencies, I think the goal ought to be to remove the weight of including a dependency.

sindresorhus says it better than I can, though: https://github.com/sindresorhus/ama/issues/10#issuecomment-1...

I imagine it also depends on whether your place of employment is writing one giant single-repository codebase vs. several modular applications, as the latter clearly affords a package manager while the former eschews the complexity of composing systems.


I disagree

"Copying" might just be: check your needed dependencies into your source code tree.

Managing complexity is important; depending on several hundred things (especially if you don't have control over them) is not managing it.

Depending on simple things like "left pad" is just shooting yourself in the foot at some future point, as this incident has shown.

In today's world it's easy to forget the million ways in which a house of cards might fail.


> "Copying" might just be: check your needed dependencies into your source code tree.

> Managing complexity is important; depending on several hundred things (especially if you don't have control over them) is not managing it.

You're still depending on the thing you checked in. You're just storing it somewhere else.

PS. You can achieve the same thing (a frozen dependency tree) with shrinkwrap.
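
A hedged sketch of what that produces (an npm-shrinkwrap.json fragment; exact fields from memory):

  {
    "name": "my-app",
    "dependencies": {
      "left-pad": {
        "version": "0.0.3",
        "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-0.0.3.tgz"
      }
    }
  }

Every transitive dependency gets pinned to the exact version and tarball that was installed, so later installs reproduce the same tree.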


> You're still depending on the thing you checked in. You're just storing it somewhere else.

Yes, since it's checked in, it won't change "behind your back" or disappear.

You can also add fixes to it.


> Yes, since it's checked in, it won't change "behind your back" or disappear.

It will also not get fixed when important bugs or security issues are discovered and fixed upstream.

> You can also add fixes to it.

Which is called a "fork", which you'll then have to maintain.


> is just shooting yourself in the foot at some future point, as this incident has shown

Only because of a design oversight by npm, which is now being fixed. Like I said: fix the issues in handling your dependencies and you remove the overhead of including them, such that "depending on several hundred things" is managing it.

Checking your needed dependencies into your source tree can be done either by recreating them there or by pointing to them. Your pick.


> Depending on simple things like "left pad" is just shooting yourself in the foot at some future point, as this incident has shown

But it hasn't. Left-pad was republished by a third party shortly after it was unpublished by its original author. It was open source and so it was trivial to fork and republish.

Personally, my only direct dependency among azer's modules was one for shuffling an array in place. A quick look in the registry provided me with a substitute that had the same API, used the same algorithm, and had the same test coverage, by another reputable author.

Now imagine instead of dozens of tiny modules it had been jQuery, or React, or Babel. Or anything else that survives the "is it too large to copy-paste into your codebase" test the article puts forth. Good luck replacing that kind of dependency in less than an hour. (If your argument is "well, I can just download a copy elsewhere and paste it": what if it was unpublished because of an undisclosed vulnerability with no patch available, and you need to replace it?)

Facebook and others "copy" by checking in dependencies into version control. They will never be affected by a decision to unpublish something from the registry. But even so, unpublishing is a red herring: it's a quirk of npm Inc's registry.

The real takeaway from #unpublishgate is: don't depend on external sources for continuous deployment. If you need a registry, proxy it (hint: there are free alternatives to npm Inc's on-premises offering). If your deploys only work when npm (or GitHub) is online, your deployment process is already broken.


Facebook and others "copy" by checking in dependencies into version control.

This is the standard method for 'copying' in Go as well. It's called 'vendoring' and is often applied to Go programs (as opposed to Go libraries). You basically put a checkout of the dependency in the vendor/ folder and the go tool will prefer it over fetching the dependency and/or grabbing it from GOPATH (since Go 1.6).
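
For illustration, a typical vendored layout (paths hypothetical):

  myapp/
    main.go
    vendor/
      github.com/someuser/leftpad/
        leftpad.go

With Go 1.6+, an `import "github.com/someuser/leftpad"` inside myapp resolves to the vendored copy first.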


So this is a risk assessment problem.

If your simple command-line app depends on more than a hundred different packages, then you have to balance the risk of a larger, more fragmented "attack" surface in your dependencies against the cost of maintaining more of your own code.

No one is going to draw their line in the sand in the same place. It's affected by the language's culture, the programmer's experience, and the business's need for speed of development.

It will change over time as the developer has good or bad experiences, and it will change as the business needs change.

I don't disagree or agree with the statement because I don't live in an ideal world where I get infinite time for dev and QA.


I'm sindresorhus. My thoughts on the left-pad situation: https://github.com/sindresorhus/ama/issues/367


But we have to draw the line somewhere, i.e. where do we break DRY?

If the if-statement for negative zero ought to be in its own package, how much smaller can the granularity get before we should start copying the code? What about checking whether two numbers are equal?

To play on your GitHub link where he talks about is-negative-zero:

For example: I have this module, is-equal. Its job is to tell me if two numbers are equal. Normally you wouldn't have to care about this, but it could happen. How do you figure out if two numbers are equal? Well, easy: x == y. Or is it? Do you really want to have to know this might be wrong? I would rather require is-equal and be productive on other things.


Did you read Sindre's post? You're still arguing about numbers of characters and lines of code.

Determining whether a value is negative zero in JS is non-trivial and easy to get wrong. JS tries its best to shield you from the existence of negative zero but it still exists and if you do have to test for it, your intuitive solution (x === -0, String(x).charAt(0) === '-', Math.sign(x) === -1, etc) will likely be wrong.
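
For reference, a minimal sketch of the usual check (written from memory, not quoted from the is-negative-zero source):

  // -0 === 0 is true, so plain equality can't tell them apart,
  // but division can: 1 / -0 is -Infinity while 1 / 0 is Infinity
  function isNegativeZero(x) {
    return x === 0 && 1 / x === -Infinity;
  }

  isNegativeZero(-0); // true
  isNegativeZero(0);  // false
  isNegativeZero(-1); // false

(ES2015 also gives you Object.is(x, -0).)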

It's not equivalent to "comparing numbers for equality". It's a specific problem with a specific but obscure solution. If you had to work this solution out yourself (if, indeed, you managed to, as you're more likely to just get it wrong or arrive at something more complex or less efficient), you would put it in a function, and you'd have to write a couple of tests for that function to make sure it actually works and doesn't produce false positives.

The same goes for figuring out the user's home directory: most Node developers use a flavour of *nix (usually OS X or Linux) so they may know `process.env.HOME`, but what about Windows users? Most likely they will just forgo Windows or assume it's also called `HOME` there (because on many developer Windows machines it might be, for compatibility). Instead, don't guess: just add a small module that does that one thing and can be extended should this change in the future (e.g. if OS11 comes along and starts calling it `APPLE_HOME`, or if Win11 goes full POSIX and also calls it `HOME` but still passes the platform check for Windows).
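
A hedged sketch of what such a module might do (the environment-variable fallback chain here is an assumption, not any specific module's source):

  // resolve the user's home directory across platforms
  function homedir() {
    var env = process.env;
    if (process.platform === 'win32') {
      // USERPROFILE is the usual answer on Windows;
      // HOMEDRIVE + HOMEPATH is an older fallback
      return env.USERPROFILE ||
        (env.HOMEDRIVE && env.HOMEPATH ? env.HOMEDRIVE + env.HOMEPATH : null);
    }
    // on *nix (Linux, OS X, the BSDs) it's HOME
    return env.HOME || null;
  }

(Newer versions of Node also ship os.homedir() in the standard library.)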

Your strawman of number equality, OTOH, is not about a weird edge case or a cross-platform concern but merely about knowing the language. Of course it'd be ridiculous to add a function to compare two numbers for equality in an if statement, because the language already provides an operator for that. The only excuse I can see for that would be if you take functional programming to its logical conclusion and want every operator to be a function, but your religious views prohibit using modules that provide entire collections of these functions, so you need to add every operator as its own dependency.

Then again, I have published a module in the past that today can literally be replaced with four characters of code in ES2015 (plus whitespace), and the API documentation is even wrong about what it does (but at least it has 100% test coverage). Everything can be taken to its absurd extreme; the point at which you should bail depends on the situation.


It is extremely trivial, just not intuitive. It's much harder to actually come up with a case where you know you even need it.


> Given that we're already working with projects that can have several hundred dependencies, I think the goal ought to be to remove the weight of including a dependency.

You can't remove the intrinsic weight of incorporating foreign code into your codebase. That weight includes ceding control over bugs to a third party, the need to keep yet another thing up to date (or else...), and the risk of being dead in the water if the maintainer does anything weird to the library (like removing it from the repository, which is what happened to left-pad, or suddenly changing the interface and/or call semantics while bumping only the patch part of the version number). And happy debugging when the library changes in some very subtle way, so your code doesn't break and blow up but instead silently starts dropping some requests.

Every dependency you include increases the risk of something from the list above happening, and this risk cannot be lessened by tooling (at least in most languages, whose type systems are less powerful than Haskell's).

So when you have several hundred dependencies, you don't need to remove the weight of including a dependency; you need to actually make the dependency list shorter.


> like removing it from the repository, which is what happened to left-pad

This was entirely npm's fault for allowing it. They recognized that and fixed it. Nothing to do with tiny modules. Moving on.

> or suddenly changing the interface and/or call semantics while bumping only the patch part of the version number

This is actually less likely with tiny modules. If it were a large module like underscore, and it's just changing this one tiny function a little bit, it's okay if they bump the minor version number, right? Well, it broke your program because out of those 100 functions you were using exactly that one.
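
To make the failure mode concrete (a sketch; the caret range shown is npm's default save prefix):

  {
    "dependencies": {
      "underscore": "^1.8.0"
    }
  }

A caret range accepts any 1.x release at or above 1.8.0, so a fresh install silently picks up the next minor version; whether that's safe depends entirely on whether it touched the one function out of 100 you actually call.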


Unpublish isn't that bad. He could have just as easily made patch updates to each of his modules with malicious install scripts and/or malicious runtime behavior.

Depending on 200 tiny modules, you have 200 different risks vs just having 1 risk.


That 'just as easily' does not cause problems for us cautious devs who pin dependencies with 'npm shrinkwrap' and review changes to our dependencies.

Unpublish was an actual problem, and that problem has been fixed.


> Depending on 200 tiny modules, you have 200 different risks vs just having 1 risk.

vs. the risk of rewriting common, well-tested code 200 times.


Like checking if the thing at hand is an array? Or a positive integer? Or iterating through a hash's keys? Or another implementation of reduce()/fold()?

Why won't you outsource your condition checking to a well-tested wrapper around if-else?


> Like checking if the thing at hand is an array?

Yes. ES5 has Array.isArray(). Use a well-known polyfill rather than writing your own for older environments.
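
For instance, the classic polyfill (the widely circulated version, quoted from memory):

  // fall back to the Object.prototype.toString check where
  // Array.isArray (ES5) is missing
  if (!Array.isArray) {
    Array.isArray = function (arg) {
      return Object.prototype.toString.call(arg) === '[object Array]';
    };
  }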

> Or a positive integer?

Yes. As discussed, there are edge cases around negative zero that you may not think of.

> Or iterating through a hash's keys?

Personally, yes, but not necessarily. There's 'for of' over Object.keys(), which covers only an Object's own keys - I'm presuming you know that regular 'for in' iteration also walks enumerable keys inherited from the prototype chain, and why that's often unwanted - but 'for of' is relatively new.

I prefer an approach more consistent with how arrays are handled, so I use Object.prototype.[someprefix]forEach() and share the code.
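
To illustrate the own-vs-inherited distinction mentioned above (just a sketch):

  var base = { inherited: 1 };
  var obj = Object.create(base);
  obj.own = 2;

  for (var key in obj) console.log(key);    // logs "own" and "inherited"

  Object.keys(obj).forEach(function (key) { // logs only "own"
    console.log(key);
  });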

> Why won't you outsource...

Using libraries isn't outsourcing. Please read https://news.ycombinator.com/newsguidelines.html


>> like removing it from the repository, which is what happened to left-pad

> This was entirely npm's fault for allowing it.

Not entirely. npm's fault is more along the lines of a fsckup, and those happen from time to time. It's everybody else's fault for including such a trivial module in their code (I mean, those who included it directly), which leads to much bigger exposure to such things (because there are many more modules to depend on).

>> or suddenly changing the interface and/or call semantics while bumping only the patch part of the version number

> This is actually less likely with tiny modules.

Times the number of modules. Or rather: the chance of nothing breaking is (1.0 - likelihood) ^ count(modules). You end up with a much more fragile application.

And even putting that aside, you still have a large surface for all those unlikely events where somebody makes a mistake that has nothing to do with publishing or versioning.

> If it were a large module like underscore, and it's just changing this one tiny function a little bit, it's okay if they bump the minor version number, right? Well, it broke your program because out of those 100 functions you were using exactly that one.

There are two issues here.

First, the new release was probably totally unnecessary. Why not bundle several such changes together? And the semantic versioning scheme calls for an increase of the major number whenever a change is backwards-incompatible, however tiny the change is. So no, it's not "okay if they bump the minor version number".

Second, if you were only using one function out of those 100, you screwed up. You probably didn't need that one tiny function that much, so you added an unnecessary dependency. It's the same micromodule mentality that leads to a dependency fractal, which in turn leads to a fragile codebase.

Adding a dependency has its cost, but the cost comes due in the (somewhat) distant future. So you're taking on debt now, to repay later at a much higher price, except you probably don't see the price at the moment of decision, and you don't recognize the instalment as such when you have to repay part of it.


"Adding a dependency has its cost, but it costs in the (somewhat) distant future. So you're taking some debt now, to repay later at much higher price, except probably you don't see the price at the decision moment and you don't realize it's an instalment when you need to repay part of it."

That is a very nice and succinct summary of dependencies.


> It's everybody else's fault for including such a trivial module in their code (I mean, those who included it directly), which leads to much bigger exposure to such things (because there are many more modules to depend on)

But npm disallowing unpublishing makes that entire point moot. There's no longer any difference.

> Times the number of modules. Or rather: the chance of nothing breaking is (1.0 - likelihood) ^ count(modules). You end up with a much more fragile application.

That's not how it works. A tiny module making an incompatible change to its sole function is not likely to bump a minor version, but quite likely to bump a major one. A tiny module also has fewer updates overall because there's no need to constantly add new stuff to it, as is often the case for large modules.

When I add lodash as a dependency, I'm N times more likely to have to update it during a given period than if I had just added padStart as a dependency (where N is the number of functions lodash has).

If I only use 30% of lodash (not a small percentage), it's quite likely that 70% of those updates are about parts I don't care about and haven't taken the time to understand.

Small modules have very different dynamics and costs compared to large ones. They are only the same in name (and maybe they shouldn't be called modules, actually).

> First, the new release was probably totally unnecessary. Why not bundle several such changes together? And the semantic versioning scheme calls for an increase of the major number whenever a change is backwards-incompatible, however tiny the change is. So no, it's not "okay if they bump the minor version number".

There we go, another cost of large modules. They are both slower to update things you care about and less likely to have an update you do care about :)

And now suppose you're another user who doesn't use that one particular function out of 100. But for some reason the library bumped a major version number! Why? Because they changed a function you don't care about. Great, now you have to review every single change, including those you don't care about.

> Second, if you were only using one function out of those 100, you screwed up. You probably didn't need that one tiny function that much, so you added an unnecessary dependency. It's the same micromodule mentality that leads to a dependency fractal, which in turn leads to a fragile codebase.

What if I need two, or three, or 30%? Where does it stop? Usually when I add a micro-module, I need close to 100% of it. Seems good enough to me.

-

This only leaves us with malicious (or protesting) actors who would push a malicious update. But that risk grows with the number of authors, not the number of modules. Still, because the problem isn't module size itself, there are other ways to solve it :) (keeping your own author whitelist, deep-pinning version numbers for less trusted modules to ensure they are immutable, webs of trust, reputation systems, and so on)

-

It saddens me that everyone took this as a chance to attack tiny modules as the scapegoat. I think that the node community is really onto something here. It's not yet perfectly developed by any means - there is a lot of work left - but that doesn't mean we have to scrap the entire idea. There is a chance here for npm to try hard to come up with nice trust/security/immutability features to accommodate these new module dynamics better.

edit: Another reason it saddens me is that everyone missed the point of azer's protest. I guess people just don't care unless their stuff is on the line. If leftPad had been named `kik`, perhaps we'd be having an entirely different conversation now.


> It saddens me that everyone took this as a chance to attack tiny modules as the scapegoat.

Not really. For instance, I've been claiming for some time already that too many dependencies are a bad thing, the left-pad farce being merely an example of why.

It takes some long-lived code to see the cost of adding a dependency, and several such cases (and possibly deploying the code in different ways in different environments) to realize the cost was paid because of a dependency.

> There is a chance here for npm to try hard to come up with nice trust/security/immutability features to accommodate these new module dynamics better.

These new module dynamics feel like a constant fall, or maybe like running in front of a locomotive that won't ever stop, so you need to keep running or be run over. It doesn't look healthy.


> Not really. For instance, I've been claiming for some time already that too many dependencies are a bad thing, the left-pad farce being merely an example of why.

I claim that left-pad is an example of a malicious actor + a system that doesn't protect against that, not of a failing of small modules. And actually, `kik` is almost the same example. "Luckily" nobody cared about kik.

I also claim that having many small dependencies is very different from having many large dependencies.

> It takes some long-lived code to see the cost of adding a dependency, and several such cases (and possibly deploying the code in different ways in different environments) to realize the cost was paid because of a dependency.

It also takes some experience with small modules to see they are not like large modules at all, and you cannot apply the same thinking to them.

Example: once leftPad is performant and working, it's done. You can just pin it. When is lodash done? Probably never.

> These new module dynamics feel like a constant fall, or maybe like running in front of a locomotive that won't ever stop, so you need to keep running or be run over. It doesn't look healthy.

And dependency hell also looked unsolvable before npm and CommonJS brought a solution to the mainstream :) Let's see if they can solve the rest of the problems too.


> I claim that left-pad is an example of a malicious actor + a system that doesn't protect against that, not of a failing of small modules.

It is mainly that, that's right. But small modules make such a shitstorm more probable, simply because there are many more of them.

> And actually, `kik` is almost the same example. "Luckily" nobody cared about kik.

Do you believe it won't happen in the future? The more modules, the more likely it is. And a project with JavaScript modules has an order of magnitude or two more modules than a project in other languages.

> I also claim that having many small dependencies is very different from having many large dependencies.

Of course, but that's not what everybody is talking about. It's many small modules vs. only a handful of big ones.

> It also takes some experience with small modules to see they are not like large modules at all, and you cannot apply the same thinking to them.

But you can't apply different thinking to small modules, because you don't just use small modules; you use big ones as well, and the two are in no way distinguishable except by the amount of code.

> Example: once leftPad is performant and working, it's done. You can just pin it. When is lodash done? Probably never.

Oh, if only it were that easy. You don't control all the instances of left-pad you include (because of indirect dependencies), so pinning it in your project is far from enough.

> And dependency hell also looked unsolvable before npm and CommonJS brought a solution to the mainstream :)

Oh really? I thought it was solved a dozen years earlier by the various package managers in Linux distributions, then re-solved by pip and gems, and only after that came npm.


> Oh, if only it were that easy. You don't control all the instances of left-pad you include (because of indirect dependencies), so pinning it in your project is far from enough.

Yes, let's scrap the entire thing just because we cannot deep-pin individual dependencies. How about adding support for deep-pinning individual dependencies instead?

> re-solved by pip and gems

AFAIK, those two don't solve the problem. If, within project P, dependency A depends on C v1 and dependency B depends on C v2, they just can't resolve that conflict. That's a non-starter for tiny modules.
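
For contrast, a sketch of how npm's nested node_modules layout lets both versions coexist (layout illustrative, not from any specific project):

  project-p/
    node_modules/
      A/
        node_modules/
          C/        <- v1; require('C') inside A resolves here
      B/
        node_modules/
          C/        <- v2; require('C') inside B resolves here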



Sorry for using lodash as an example :) It's a good example for this debate precisely because it's also available as tiny modules, so it can be used for comparisons without confounding factors.


It's the third time I've seen that link already; he's wrong, so please stop posting it :)

The way to reuse N lines of code is to refactor them into a function. The way to reuse functions is to pack them into libraries. This is something that probably everyone agrees with.

Many have issues with the idea that one should create libraries containing one function only! You've touched upon the why yourself: having hundreds of dependencies adds both technical and cognitive overhead. It's very hard to design things to be extended, so in practice one ends up with a fragile module, not one that can be improved by improving a dependency (I wonder how a function that checks if a number is negative can be improved...).

As far as I know there is no other toolchain on this planet where it's done this way, which, together with the clarification above, should be telling you and sindresorhus something!


JavaScript has a problem unique among other languages' toolchains.

Its primary use is to ship to a browser. However, shipping an entire string-util library because you need a string prefix function is wasteful and bad engineering. But JavaScript doesn't have dead-code elimination anywhere in its toolchain unless you use Closure, and that really only works well if all of your dependencies use Closure as well.

As a result of all this, JavaScript needs a poor man's dead-code elimination at the package-manager level: including a package pulls in only the code you will actually use. This helps keep the JavaScript you send to the browser small. What works for Java or C++ won't work here, for the above reasons.

This isn't npm's fault; it's intrinsic to JavaScript the language and its primary target platform, the browser.


Many have issues with the idea that one should create libraries containing one function only! You've touched upon the why yourself: having hundreds of dependencies adds both technical and cognitive overhead.

Indeed. Moreover, it's a sign of a weak or incomplete standard library.

Early Java had this problem as well. However, since distribution consisted of passing around JAR files, most utility code was bundled in larger libraries (e.g., see commons-.*).


> As far as I know there is no other toolchain on this planet where it's done this way [...]

If by "this way" you mean microdependencies in JavaScript, you should also look at Ruby, which goes in a similar direction. And at Python, which apparently tries to follow the lead.


Generally, neither Ruby nor Python has "dependencies" as small as you see with npm/JavaScript. JS takes it to an extreme.


Somehow I see a common theme there...


Yes, let's dig into it.


From my point of view, dynamic languages are usually used by people without a CS background.


For my first five years after getting my CS degree, all of my paid work was in languages with dynamic typing.


Some people apparently took my comment personally, but it doesn't change the fact that many in those communities aren't from a CS background.

Which is why some decisions, like the whole npm modules thing, or Ruby gems before Bundler (if I have it right), get made without consideration of how they work at large scale.

Of course, people with a CS background also use dynamic languages. I have Python, Smalltalk, Lisp, Perl, and JavaScript on my CV.


There are lots of people from both kinds of backgrounds in all kinds of communities; how much of classic hacker lore is about high school dropouts, let alone college dropouts?

A CS degree doesn't mean you'll be a good programmer in industry.


No, but it helps.

I have a technical school specialization in computing and a CS degree.

Many of the things we learned at technical school only made sense afterwards when I got into CS.

Also, some of the design problems we had, which were solved by the usual "try something until it works", became basic programming exercises with the right CS knowledge.

Yeah, it isn't a magic solution, but it helps when it comes to designing software in the large.

Most other backgrounds aren't exposed to such issues.


One of the weird problems here is performance, though. A naive approach to left-pad might involve (new Array(n+1)).join(padChar), for example, since JavaScript doesn't actually have a consistent language-native way to create an arbitrary-length string. That technique ends up, in some JavaScript runtimes, taking orders of magnitude longer to run compared to a for loop with string concatenation. The array-join method is one line, though.
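
To make the comparison concrete (a rough sketch; actual timings vary wildly by engine and era):

  // the one-line array-join trick; slow in some engines
  function leftPadJoin(str, len, ch) {
    return new Array(Math.max(len - str.length + 1, 0)).join(ch) + str;
  }

  // a for loop with string concatenation; wordier, often much faster
  function leftPadLoop(str, len, ch) {
    var pad = '';
    for (var i = str.length; i < len; i++) pad += ch;
    return pad + str;
  }

  leftPadJoin('42', 5, '0'); // '00042'
  leftPadLoop('42', 5, '0'); // '00042'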

So, imagine the scenario where you've got a commonly used function that gets vendored and from there copypasta'd into a lib/common or some other location in your project. You've basically just cut yourself off from any updates the module author might make when new JavaScript optimization hacks come around (or obscure corner-case bugs need protecting against). It's a small problem, but it's at least worth considering.

There's also this whole thing with licenses: there's a fundamental difference between using a library and copying its source into your build tree. The MIT license doesn't care, but the LGPL certainly does.

I do like the idea in general. You should extend it by doing tree-shaking or dead-code analysis. "This library is big, but you're just using one function call, let's just copypasta that one in here".


The code needs to pass the threshold of originality to be copyrightable; otherwise you don't need to care about the license. For example, `(new Array(n+1)).join(padChar)` is definitely not copyrightable.


In case anyone wonders: yes. Left-pad left-pads a string. That's all it does.

Not to worry, though: it's now going to be implemented as a Linux kernel syscall: https://lkml.org/lkml/2016/3/31/1108


The author seems to make the assumption that the problem with left-pad is how small the library is. But that's just the symptom. The real problem with left-pad is how little functionality it actually provides. A 200-LOC C library that only left-pads strings should be subject to just as much scorn as the original 11-LOC JavaScript one. Unfortunately, it's difficult to assess how much functionality a library provides just from its LOC count.


API surface is often misleading, too. A JSON library that basically just provides a "json_encode" and a "json_decode" could be perfectly cromulent, whereas "trimlib", providing "right_trim" and "left_trim"...


Maybe a complexity metric would be a better measure?


I don't think any algorithmic analysis of the code can determine how much value it provides to the user. Some things simply require human judgment.


But we're trying to infer that things don't have value to the user based on complexity... the lower threshold is an easier target.


I don't know if the Node.js project has the resources for this, but if it did, it would be much better if all these 10-line dependencies everybody loves to use could be included in some sort of standard library. That way they could all benefit from centralized services (e.g. a security-vulnerability handling process) and offer a smoother developer experience.


@divan, any reason why you haven't submitted this for inclusion into gometalinter yet?

I'm sure Alec would be more than happy to include it =)


After some polishing and testing on large projects, it will probably be submitted.


IMO the best way to move to the next generation of dependencies is through tooling.

Instead of ignoring node_modules in source control, why don't we commit it and allow the tooling to handle upgrades?

Disclaimer: I haven't thought about the implications of committing external packages to your own source control at all. I'm guessing I'm missing something.


> I haven't thought about the implications of committing external packages to your own source control at all. I'm guessing I'm missing something.

Committing external dependencies to source control means tons of bloat in the repos themselves. Want to clone huge_framework just to change a tiny thing? Well, too bad; you'll need to clone the hundred megabytes of external dependencies too. Multiple libraries/frameworks/modules containing the same versions of the same modules? Too bad; everyone needs to commit their dependencies. Frameworks shouldn't commit their dependencies then? Or just libraries? Or just modules? Where is the line and what are the rules?

Why not just handle dependencies manually? Because that's a pain in the ass, and the exact reason package managers exist in the first place.


> Committing external dependencies to source control means tons of bloat in the repos themselves. Want to clone huge_framework just to change a tiny thing?

Oh, geez, the first thing that makes code _legacy_ code is people checking in "slightly modified" library/framework code.

Not to mention different lib versions. Things like Java's Spring libs, where it seems like the JAR layout, names, and hosting change every couple of minor versions. Different dependencies that need different versions of things. Different apps in the same ecosystem that rely on newer versions than the ones their legacy in-house libs are compatible with.

I've put up with a _lot_ of dependency nonsense on many different stacks over the years, and am trying to adapt to devs bringing in npm modules for dev use, but for sure I've learned that letting devs check random libs into source control just because it's convenient ends up causing a lot of problems. That includes Java, PHP, Ruby, Perl, and Python.

That it's yet another generation of land-grabbing devs re-inventing a broken wheel, or people just permanently checking in libs for convenience, seems like a devil's choice.

...until you see the dev who tries to write it all from scratch themselves.


"Oh, geez, the first thing that makes code _legacy_ code is people checking in "slightly modified" library/framework code."

For myself, I consider the ability to have a dependency and cleanly put a patch on top of it, such that the tooling can help me when I'm upgrading the dependency, to be a basic requirement.

Few, if any, tools seem to consider this a first-class need. Based on what I've seen, I imagine this is because most people prefer putting a thousand lines into their own code, contorting things to hack the underlying library into mostly doing what they need most of the time (even if it shreds every abstraction their code is nominally creating), rather than making a three-line change in the framework itself, even before we discuss tooling that makes patching harder. Most of the rest tend to, as you say, hack the library in such a way that the changes cannot be usefully tracked; for that I would suggest the tooling should be considered at fault.


This doesn't play well with repos that require native compilation. I'm with you on this approach if you stick to using all pure-JavaScript modules.


Commit the shrinkwrap file.


In Go, tooling does help. Vendor everything, commit it to source control, and filter vendor/ out when looking at diffs.
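
One way to do that filtering (a sketch; the `:(exclude)` pathspec needs git 1.9 or newer):

  git diff -- . ':(exclude)vendor'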



