The same people who shame others' work for not being modular enough for their taste will worship a one-line module[0] that is modular down to every subroutine.
To illustrate the problem: my dom-css module[1] needs a method to convert to camel case.
I could copy-paste from Stack Overflow, but then I would have to maintain and test that function. I'd rather depend on something.
I could depend on a big "string-utils" library, but that would probably carry some useless baggage, and its scope may change or grow over time. Further, maybe it has a competitor module with a vehement following (like lodash vs. underscore), which makes my module less appealing to that crowd. This sucks for my module because I just want that one function.
The better alternative is just to depend on the exact, clearly named "to-camel-case" function which is very unlikely to ever change or grow in scope.
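For illustration, here is a minimal sketch of what that dependency looks like in practice, assuming the to-camel-case package exports a single string-to-string function (which is the usual shape of these micro-modules):

// Hypothetical excerpt from a dom-css-style helper.
// Assumes require('to-camel-case') returns a function that
// maps "background-color" -> "backgroundColor".
var toCamelCase = require('to-camel-case');

module.exports = function setStyle(element, property, value) {
  element.style[toCamelCase(property)] = value;
};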
If the to-camel-case function never changes or grows in scope, then you don't need to maintain or test it.
If it does change, then you need to integrate & test your dependencies, same as if it were part of your own codebase.
I think this point is often overlooked by the "I'll use someone else's code because I don't want to maintain it" folks. Code that is frozen doesn't rot: if you never change a piece of code and don't change its dependencies, it will continue working. Similarly, the usual reason you'd in-source a library is so you can change it at will without getting some other maintainer to accept your patches.

Most of the time, the effort you save by using 3rd-party code is exactly equal to the time it would take to write and debug the library to get it to its current state, minus the time spent hunting for, learning, and integrating the library. Both of these are pretty easy to estimate when it's a one-line function: does it take you longer to find and npm install that one-line function than it would take to write the line yourself?
There's a cohesive social element to "many small modules" that fosters a community.
Module authors work with and respect each other.
> Most of the time, the effort that you save by using 3rd-party code is exactly equal to the time it would take to write & debug the library to get it to its current state, minus the time spent hunting for, learning, and integrating the library.
There are plenty of small functions that might be a little trickier than is first imagined. Time isn't the only issue. There's also much less cognitive overhead in the two minutes it takes to search npm, figure out the interface, install the module, and use it. I'd much rather outsource my yak shaving.
I wouldn't copy-paste some random blog or SO code without first writing a test for it.
My dependencies already have tests, so I have no need to re-write them. When my tests do fail it is usually immediately clear where the problem lies since things are isolated.
As for time: it might take 1-2 minutes to find a module, and then 0 minutes the next time I need the same module. Compare that to 1-2 minutes of writing/copy-pasting, plus a lot of time spent refining/fixing the code as edge cases are found, and then more time the next time that function is needed in another module.
This workflow has evolved through my own experiences with modules over the last couple of years. I used to do a lot of copy-pasting and "kitchen sink" libraries, but now I find it much more efficient and better in the long run to isolate code. Just a few examples that would be a pain to maintain and rewrite in each project that uses them (potentially dozens):
You also get the benefit of code that builds on its dependencies, like "lerp-array", which depends on "lerp", and "line-circle-collision", which depends on "point-circle-collision".
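As a rough sketch of that kind of composition (assuming lerp is published as a single (start, end, t) function, which is how these packages are typically shaped):

// Hypothetical lerp-array built on top of a one-function lerp module.
// Assumes lerp(start, end, t) returns the interpolated value at t.
var lerp = require('lerp');

module.exports = function lerpArray(a, b, t, out) {
  out = out || [];
  for (var i = 0; i < a.length; i++) {
    out[i] = lerp(a[i], b[i], t);
  }
  return out;
};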
Linking functions, but also minimizing scope creep, API changes, opinionated libraries with a questionable lifespan, bit rot, and versioning woes (e.g. a major change in to-dash-case may not represent a major change in to-camel-case).
That's intense and really quite silly. It makes sense to modularize large or complex (or some better modifier?) portions of the code base, but there's a point at which it just becomes too deep of a rabbit hole. I don't believe there is a formal definition of how deep down the rabbit hole that mark is, but I'm sure it would be a function of the module's (and the project's) size, readability and complexity.
Lewis and Fowler provide a nice definition in their Microservices article. [1]
They drive modularity through the pattern of change, so their judgement is based on how the code is developed rather than on the code itself.
"You want to keep things that change at the same time in the same module. Parts of a system that change rarely should be in different services to those that are currently undergoing lots of churn. If you find yourself repeatedly changing two services together, that's a sign that they should be merged."
Although that article is about services, I believe it can be applied to any type of module.
'use strict';
module.exports = function isObject(x) {
  return typeof x === 'object' && x !== null;
};
Frankly, copy/paste is a superior code reuse mechanism at this scale.
The part that really blows my mind? This project has had 57 commits to it! That's basically proof that the project management overhead exceeds a sane cost/benefit tradeoff.
> Frankly, copy/paste is a superior code reuse mechanism at this scale.
The one advantage I see to having borderline-ridiculous packages such as this is that they do One Thing Well [1]. Yes, you may copy/paste an isObject() function or roll your own, but then it becomes your responsibility to make sure it works for your project, and there are arguably far fewer eyes on it when it doesn't. As long as the project's maintainer is dedicated to making this the best way to determine whether a variable is an object, I have no problem using it.
Although, I do find the 57 commits irksome. Most of them are just upgrading dependencies (code style checkers, etc.), "fixing" indentation, and in one case 7 separate tweet-sized diffs to the README on the same day, as if this guy has git commit/push bound to Ctrl+S in his editor.
In my experience, if these kinds of things break (whether due to a very esoteric corner case, a buggy browser that needs to be worked around, or a new version of the language specification), the person who discovered that it didn't work doesn't get it fixed upstream: they instead publish a new module that they try to get people to switch to, talking about how much better theirs is. This happens partly for the glory, but also partly because it seems legitimately nonsensical to submit a patch to someone else's project that amounts to "subtract your file and add mine", given that it is seriously a single line of code. Meanwhile, the original developer was usually part of the same community-bereft modern GitHub-era open source culture (where publishing code is the victory condition, as opposed to building culture), and isn't around to fix their library even if someone did submit a patch. So it becomes your responsibility anyway, only now the responsibility is to realize that the module you are using is "so last month" and to figure out which replacement is the correct one. This is not much better than just copying and pasting code from Stack Overflow, yet frankly more infuriating, as at least when you copy/paste it is clearer what you walked into.
Has any of this actually happened to you with focused modules like to-dash-case?
There is no reason for such a focused module to go out of style. As with many other small modules, the API and major version are locked barring some catastrophic event. Maybe there is a rare edge case that will break, and then you end up with two modules that are each useful in their own right (like point-in-polygon and robust-point-in-polygon).
They don't even do One Thing Well. The is-upper-case example checks uppercaseness by converting the entire string to uppercase and checking whether the result equals the original. What if the string in question is "xxx...x" (lowercase x repeated a million times)?
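For concreteness, a sketch of the two approaches being contrasted (the exact upstream implementation may differ; these function names are made up for illustration):

// Compare-the-whole-string approach: builds a full uppercase copy
// of the input before it can answer, even for a million lowercase x's.
function isUpperCaseByComparison(str) {
  return str.toUpperCase() === str;
}

// Early-exit approach: stops at the first character that changes
// under uppercasing.
function isUpperCaseEarlyExit(str) {
  for (var i = 0; i < str.length; i++) {
    if (str[i] !== str[i].toUpperCase()) return false;
  }
  return true;
}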
Doing "One thing well" can be defined in several things depending on what you consider of utmost importance for wellness.
I think that function does what it's supposed to "well" in the sense that 1. it works, 2. the code is clean and obvious and 3. it doesn't try to be clever and do "complex" things which can introduce bugs.
I'm guessing from your complaint that you think it doesn't do things "well" because you prefer code to be computationally efficient rather than conceptually and declaratively clear?
You can argue, "But for -this- little thing I can afford to be clever and throw the obvious-looking code out the window!", but then everyone else gets to make the same judgement too.
And then suddenly you're all using "clever" code all over the place with new ingenious ways to break everywhere in your code-base. And just like clockwork, your application will start breaking and nobody will know why without spending hours, days or even weeks debugging.
Let's say I generally don't think computationally efficient code is worth that cost. I'll make exceptions for specific code portions which need to be fast, but for my bread-and-butter code? Make it plain and simple!
If I'm importing a module for is-upper-case, it had better be for a reason: that someone actually did something significant on that problem and I don't want to reinvent their work.
The module in question is nothing but a #define macro. JavaScript lacks a preprocessor, and this community's response is apparently to outsource #define macros to a centralized server.
> And just like clockwork, your application will start breaking
The alternative is that just like clockwork, your application grinds to a crawl and practically freezes the browser. Because you outsourced every last thing to someone who doesn't care about performance. Sure, it's fine for the static-html-page-that-you're-doing-in-node-for-some-inexplicable-reason. Just hope you know what you're doing when you do something more complicated.
That's a great point! Maybe you should log an issue or make a PR and fix it for everyone (except those who copied and pasted). This is exactly why smaller modules that do one thing well are a good thing: when a tiny bug pops up, everyone can automatically apply the patch.
I did run some tests around this, but they showed nothing substantial. The comparison solution is much faster overall; the only point where it falls down is with all-lowercase strings, where an early return would obviously be faster. Even a regexp was faster in some cases. It's highly subjective.
> The one advantage I see to having borderline-ridiculous packages such as this is that they do One Thing Well
Sure, but one function per package is silly. Why not just put all the related utility functions in one package? Underscore does One Thing Well too. And if I really, really care about optimizing for small JS file size, I can just go in my Underscore.js file and remove all the functions I don't want to use.
In the case of is-object the API is frozen, and with any module it's assumed that any breaking changes would come in via a major version (i.e. not break your code).
Yeah, sure, if the maintainer becomes evil overnight he can break a lot of apps. If you live in fear then you probably won't enjoy npm very much.
It's not even very useful, is it? Shouldn't there be another check for !Array.isArray(x)? Otherwise, checking isObject(foo) still won't guarantee that foo.bar won't fail.
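A quick sketch of the concern, using the is-object function shown above (isPlainishObject is a made-up name for illustration):

var isObject = require('is-object');

isObject({});   // true
isObject([]);   // true  -- arrays are typeof 'object' too
isObject(null); // false

// If arrays should be excluded, the caller still has to add something like:
function isPlainishObject(x) {
  return isObject(x) && !Array.isArray(x);
}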
It's an alternative approach to software design which may feel alien if all you've ever used is copy-pasting for code re-use, or relying on large standard libraries and utility grab-bags.
First, you've purposefully picked the smallest module that you could find. And can you back up your claim that "the same people who shame other people's work worship and publish one-line modules"? Can't you see how using terms like "shame" and "worship", which poor Blake never used nor accused anyone of, is inflammatory?
Second, so what? So it's alright to copy and paste the function? Is it alright if a coworker wrote the function and presented you with an interface? Would you have a problem with a module that was "designed in isolation, documented in isolation and can be used in isolation" but was published internally?
I'm a fan of ClojureScript and one reason being how easy it makes it to take advantage of the Closure compiler. But it can be a double edged sword. At times the advanced compilation mode breaks your code, and it can be extremely difficult to track down why. The combination of symbol renaming and dead code elimination usually means final JavaScript that looks utterly nothing like the ClojureScript you started out with.
Just today I had an issue where the Closure compiler decided my call to (set!) wasn't necessary, and so it removed it, completely breaking my app. The best solution I could find was a workaround involving a pretty large let block. I was just happy to get it working; debugging optimized Closure-compiled code is not fun at all.
Debugging advanced compiled Closure in ClojureScript is actually pretty straightforward if you know which knobs to turn - :pseudo-names true, :pretty-print true build options are pretty much all you need.
I'm somewhat skeptical about a set! being eliminated unless you were setting a property of an object from a random JavaScript library you haven't provided an extern for. The above compiler settings would have showed this immediately.
In that let expression I'm digging my way down into a native JS object to monkey patch it. The equivalent (set! (.. knex -client -Runner -prototype -debug) ...) was being completely removed as far as I can tell. Same with an equivalent aset. That channel no longer got set up and my app stopped working. If I simply put that method back to set!, then it breaks again, very reproducible. When I added in a (.log js/console "hello") to the method, I could see that console log inlined where the method call was, but no other remnant of the method could be found.
I will play with :pretty-print and look into creating a small repro of what I'm seeing. But if I don't succeed with a smaller repro, then at the very least this app is already quite small.
Max Ogden's last comment irks me. After a long and fairly productive discussion, he generalizes / calls people out, and doesn't elaborate on what he's trying to point out.
It's not exactly out of character for 'high profile' Node community members, is it?
I love the work they're doing, and the community they've fostered might be one of the most progressive in the industry, but they can be insanely pretentious at times.
(irony of responding to a comment about generalization noted)
This is an area where the new module syntax in ECMAScript 6 does really well. Since exports are static, a dead-code elimination tool can figure out which exports from a module are being used and remove the unused ones. As mentioned in the comments about how the Closure Compiler works, you can't really do this with purely dynamic code, but you can do it with static imports/exports.
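A minimal sketch of that idea (hypothetical file and function names; the point is that the import list is statically analyzable):

// string-utils.js -- every export is statically visible to tooling.
export function toCamelCase(s) {
  return s.replace(/-(\w)/g, function (m, c) { return c.toUpperCase(); });
}
export function toDashCase(s) {
  return s.replace(/([A-Z])/g, '-$1').toLowerCase();
}

// app.js -- only toCamelCase is imported, so a dead-code
// elimination pass can drop toDashCase from the bundle.
import { toCamelCase } from './string-utils.js';
console.log(toCamelCase('background-color'));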
This is a strange post, it's about two completely separate things.
I do agree with the "modularity shaming" idea though; the JS community does seem to be overzealous about that. However, I think it's better to err in that direction than the other.
Is the Closure Compiler specifically good at dead code elimination in Closure libraries? Theoretically I would imagine that it could be applied to anything, though I'm sure there are many JavaScript tricks that would make dead code detection ineffective.
Closure Compiler requires authors to write code in a very specific static style. It's somewhat tedious, but for some libs worth the effort. In the case of ClojureScript we can of course automate the tedium for you through the compiler :)
For random JS libs Closure can't really do much better than Uglify.
> For random JS libs Closure can't really do much better than Uglify.
I wish I could use ClojureScript in my day-to-day work, but for now it's out of the question. So, for us, AMD [1] has been a godsend. We can defer loading other major parts of our UI until the user actually uses that functionality. Doing this on our own would be impossible, but using AMD has really worked well for us.
But, here's to hoping I can actually use ClojureScript at some future point :)
Yeah, I tried for a couple of years to get Google Search to adopt jQuery, but jQuery without dead-code elimination would've doubled the SRP latency, and jQuery with dead-code elimination doesn't actually eliminate any dead code. The problem is that it's written in a style that dynamically assigns methods to the jQuery prototype, so it's not possible to analyze which code actually exists in the library without executing the whole of the library.
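Roughly the kind of pattern being described (a simplified sketch, not jQuery's actual source; it assumes a jQuery object is already loaded on the page): the set of methods only exists after this code runs, so a static analyzer can't prove any of them unused.

// Dynamic prototype assignment: method names live in data, and
// methods appear on jQuery.fn only at runtime.
jQuery.each(['show', 'hide', 'toggle', 'fadeIn', 'fadeOut'], function (i, name) {
  jQuery.fn[name] = function () {
    /* ...effect implementation... */
    return this;
  };
});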
You could build a custom jQuery copy that eliminated whatever methods you don't need. Since plugins use unknown parts of jQuery and Closure Compiler can't always detect that, you really need to do some manual labor to pull in what you need.
In the 1.8 time frame the jQuery devs made a call for people who wanted to use Closure Compiler's ADVANCED_OPTIMIZATIONS to participate, but just didn't get any community interest. CCAO style is not typical JavaScript and it doesn't seem that a lot of people use it.
At that point you're better off just writing the functionality you need from scratch - which was basically the coding standard when I joined Google Search in 2009. They've since moved to allow Closure, which was a big win for developer productivity, but OTOH the latency of the SRP has roughly quadrupled since I joined Google.
Anyway, I'm firmly in the "you don't need jQuery" camp now, since most of its functionality is trivially implementable in one-liners since IE9+. I'm unlikely to go back; I've been using vanilla JS for prototypes for the last 2 years and am using it for my startup now, and I have yet to encounter a case where I really miss jQuery. It was a very different world in 2006 when I learned it, or even in 2009 when I joined Google.
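For example (a sketch of the usual correspondences; handler and el are placeholders, the element ids are made up, and the DOM APIs shown are available in IE9 and later):

function handler() { /* ... */ }
var el = document.querySelector('a');

// $('.item')
var items = document.querySelectorAll('.item');

// $('#app').on('click', handler) -- assumes an element with id "app" exists
document.getElementById('app').addEventListener('click', handler);

// $(el).attr('href', '/home')
el.setAttribute('href', '/home');

// $(el).text('hello')
el.textContent = 'hello';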
Also, I suspect that the overlap between people who use Closure and those who use jQuery is near zero, hence why nobody in the jQuery community cared about CCAO. Both of them are highly non-typical JavaScript, as is Angular. JS shows its Scheme heritage in that somehow every major framework results in completely different, mutually incomprehensible code.
From what I understand, it should work with any JavaScript you feed it. Though, for badly written source, it may even produce faulty output when used with ADVANCED_OPTIMIZATIONS. Yikes! (Example from the docs - "Inconsistent property names": https://developers.google.com/closure/compiler/docs/api-tuto...). So it's more about the Closure Library being "certified" to work with it. Someone with more experience, feel free to shed some light on this.
Closure Library is certainly written in a way that takes advantage of the compiler to keep compiled scripts compact. Code that makes heavy use of goog.dom could look like:
goog.require('goog.dom');
var dom = goog.dom;
But that defeats dead-code elimination and makes the compiler include goog.dom in its entirety (the Compiler only goes so far when determining if code is "dead"). So, any use of a goog.dom function will be fully qualified:
var el = goog.dom.getElement(id);
Practically, you might use goog.dom.DomHelper instead to keep code compact, and because it stores a reference to the document object you're using.
This surprisingly makes ClojureScript really nice for developing with the Closure Library (not that using any library that depends on mutable objects is particularly nice in Clojure), because you can write code like the following, and still benefit from dead code elimination:
The Closure Library is impractical without dead code elimination because it includes so much functionality, and is not meant to be used as a single script dependency.
Closure also has a goog.scope call that lets you locally alias various imports, but the compiler is aware of it and can optimize across scope boundaries. Pretty handy - besides dead-code elimination, variable renaming also works with goog.scope and so you don't have to treat each import as an extern and spell out the fully-qualified name in the generated code.
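A small sketch of that pattern: the alias exists only inside the scope callback, and the compiler understands goog.scope well enough to resolve the alias back to the fully qualified name at compile time, so dead-code elimination and renaming still apply.

goog.require('goog.dom');

goog.scope(function() {
  // Local alias; the compiler rewrites dom.getElement back to
  // goog.dom.getElement when optimizing.
  var dom = goog.dom;

  var el = dom.getElement('app');
  dom.setTextContent(el, 'hello');
});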
This is a dead end in a sufficiently large and complicated app (think beyond Facebook/Gmail, etc.), especially if the app supports multiple versions of 3rd-party code such as charting libs.
Besides writing it yourself (cue contractors insisting this is the only viable route), reuse and dead-code elimination are necessary foundations for the next level of web apps, both for developer productivity and for responsive runtimes.
The browser still has to parse and run all of that JS, though. JS in browsers is anti-modular in that all previously loaded JS on a given page influences all subsequently loaded JS, even when using plain script tags with a URL. In a way this is very similar to the pains the C++ community has experienced for many years.
C++ doesn't have a "load this module" statement. It does have an "include all the contents of file X here", and file X has to deal with possible duplication and cyclic dependencies, usually with something like:
#ifndef _SOME_FILE_H
#define _SOME_FILE_H
// code goes here
#endif
Also, any "module" can define any global variable, so name clashes (esp. with C libraries) are possible. In most languages with proper modules, "global" variables are module-wide.
An equally nasty issue happens with JS, where many libraries must be loaded in the proper order and not duplicated. There are a bunch of module systems but none is part of the spec. The good thing is that module systems are easy to make thanks to closures.
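For instance, the classic pattern needs nothing more than a closure over an immediately invoked function (a sketch; the names are made up):

// A hand-rolled "module": the IIFE's closure keeps `count` private
// and exposes only the public API on a single global name.
var counter = (function () {
  var count = 0; // not visible outside the closure

  return {
    increment: function () { count += 1; return count; },
    current: function () { return count; }
  };
})();

counter.increment(); // 1
counter.current();   // 1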
The short answer is that "it's complicated". The most accessible long answer I'm aware of for non-C++'ers is at [1] (video). If you prefer text, the standardese version is at [2]. Unfortunately I cannot find a concise text version for laymen except perhaps [3].
Of course GWT and Dart also do dead code elimination (also known as tree-shaking). There's a reason Google's JavaScript-targeted compilers implement this.
On the other hand, the consistent dedication to writing tiny libraries in the JavaScript ecosystem is not a bug. Tree-shaking compilers make it somewhat harder to tell where the bloat is coming from, and that encourages people writing libraries to get sloppy.
My experience so far is that writing code that defeats tree shaking is somewhat more difficult when adopting the simple functional style of the Closure Library base libs. Fortunately for ClojureScript, this style is already idiomatic.
I responded a while back in another thread about "small modules" in npm. File size is only one (rather small) facet of why some people prefer this approach over putting everything under the same hood.
[0]: https://github.com/blakeembrey/is-upper-case/blob/master/is-...