Grunt and RequireJS are out, it's all about Gulp and Browserify (100percentjs.com)
208 points by codecurve on Feb 3, 2014 | 157 comments



I like to set up my node apps such that no matter what the build setup is, no matter what the dependencies are, you can execute `npm run dev` to get started.

Inside package.json, this will look something like:

    "scripts": {
        "dev": "npm install && make && node lib/server.js"
    }
The cool part about putting this in package.json is that it will install first so you get all the dependencies and devDependencies, and then npm sets up the PATH so that you can use all the binaries from all the dependencies you have installed. So you could replace `make` with `grunt` or `gulp` or whatever. And the `npm install` step only makes external requests if you're missing dependencies; when you already have everything you need, it quickly exits.

Then getting new developers up and running, even if the build setup changes, is the same single step. In addition, if you pull new code and the dependencies change, you don't have to remember to run `npm install`, because you're doing it every time the server starts.


Clever setup.


Author of gulp here. I've rewritten the example from the blog post to follow gulp best practices. https://gist.github.com/Contra/8780398

The code from the post is misleading/outdated.
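
For anyone who doesn't want to click through, the shape gulp encourages is tasks that return streams (a minimal sketch in the gulp 3 style; gulp-uglify and the globs are stand-ins, not necessarily what the gist uses):

    var gulp = require('gulp');
    var uglify = require('gulp-uglify');

    // Returning the stream lets gulp know when the task has finished.
    gulp.task('scripts', function () {
        return gulp.src('src/**/*.js')
            .pipe(uglify())
            .pipe(gulp.dest('dist'));
    });

    gulp.task('watch', function () {
        gulp.watch('src/**/*.js', ['scripts']);
    });

    gulp.task('default', ['scripts', 'watch']);

Everything between gulp.src and gulp.dest happens in memory; no temp files hit the disk.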


Many thanks from one who fought through the outdated source in the OP.


On a project that I'm working on, I recently converted a ~260-line Rakefile into ~150 lines of gulp ... and our build is now almost instantaneous (it took a few seconds with Ruby, mainly because we had to shell out to things like the less compiler). Oh, and we were also able to write our gulpfile.js in just a few hours after hearing about gulp for the first time.

I never went the Grunt route because I just can't get used to programming with massive, multi-tiered object literals. If you've ever seen a Gruntfile, you know what I mean. It's all configuration, very little real code.

With gulp, code trumps configuration. And the result is incredibly concise and fast. I would definitely encourage anyone who has been looking for a decent build system for JavaScript to give gulp some serious consideration. We were pleasantly surprised with the community support as well. There is already a wide array of community plugins for gulp that support many common use cases.
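
As an illustration of "code trumps configuration" (a sketch assuming the gulp-less plugin; the paths are invented), the kind of less step we used to shell out for becomes:

    var gulp = require('gulp');
    var less = require('gulp-less');

    // Compile .less to .css entirely in-process -- no shelling out.
    gulp.task('styles', function () {
        return gulp.src('app/styles/*.less')
            .pipe(less())
            .pipe(gulp.dest('public/css'));
    });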


And again I'll remind people what happened in Java land:

ANT -> Ivy/Maven -> Gradle

I'll state it clearly: Grunt. is. Ant. It's a mess to follow a build script and a ritual to make it work in a real-life project.

I expected a tool like gulp to come sweeping in and it did, I'm extremely happy about that and started migrating away from the horrible tooling that is Grunt.

I never understood how people can tolerate Grunt. Long live gulp and common sense!


I agree with you that "Grunt is ant". But when I look at that code example it seems like "Gulp is ant", too. It's just less verbose. I think that Lineman is the Maven equivalent in the JS world.


+1. I have spent more time reading about grunt than I have spent investigating + converting to + using gulp in production.


I'd say Grunt is Maven because IDEs have no way to figure out beforehand what the build process will do.


That's the opposite of Maven. IDEs know exactly what a maven project will do (unless you're using plugins the IDE doesn't know about), because a maven build only does a very limited set of things (it mostly just compiles the source in src/main/java into target/). Which is what makes it great.


The one thing that keeps me from using browserify or webpack or the likes, is dev time debugging. It's not very useful to find that line 13729 of script.js threw an exception.

Source maps may help, but many browsers don't support them, and I want to be able to debug everywhere. Plus, when the browserified js actually came from Coffeescript or TypeScript or the likes, I already have source maps in place. Can browserify source map my source maps?

Is there a solution to this? How do browserify fans do this?

I guess what I'd like is for browserify to have a mode that removes the require()s from my .js files and generates a bunch of script tags in the right order.


Well, source maps are usually the solution for me. And yes, browserify supports source-mapping all the way back to coffeescript files using browserify-middleware. In the rare case that I need to debug something in a browser that doesn't support source maps, you can turn minifying off, and usually it is pretty easy to recognise the code you're inspecting, even if it comes from coffeescript. I've never had a case where I've been completely out of luck (usually this case only happens in IE).
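
For the simple case it's one option on the bundle (a sketch using browserify's node API; the coffeeify transform and file names are assumptions, not the only way to wire it up):

    var fs = require('fs');
    var browserify = require('browserify');

    // debug: true appends an inline source map to the bundle, so stack
    // traces and breakpoints resolve back to the original files.
    browserify({ debug: true })
        .add('./src/main.coffee')
        .transform('coffeeify')   // maps back through the coffee compilation
        .bundle()
        .pipe(fs.createWriteStream('bundle.js'));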


I can't find the link/name at the moment, but there is a method (it isn't vanilla source maps) which basically wraps the modules into self-executing functions so that they show up in Firefox and Chrome as distinct files. It was related to source maps, but not quite the same thing.

However, I have found Chrome and Firefox's support to be buggy.


Maybe you're thinking of webmake. The sourceURL lines that it uses in the eval() sections resolve correctly on Chrome, but Firefox+Firebug don't seem to work for me.

The webmake site references a bug in Firebug that was resolved ~1 year ago, but I still have issues with it. I end up swapping back and forth between Firefox+Firebug and Chrome to get all of the debugging features that I want.


sourceURL? You can even do it with functions in the console:

   function hello() { 
      console.log("hello world");
      //# sourceURL=hello.js
   }
This will then appear in the sources panel under the name "hello.js", which is great for debugging little scripts you're putting together in the console, as you can set breakpoints, watches, etc.


This post reminds me of the joke about the Native Americans getting ready for winter[1]

[1] http://voices.washingtonpost.com/capitalweathergang/2008/11/...


That's great - I can use that whenever the topic of 'the echo-chamber' comes up.


And just like that I have one more reason to ignore them both and use Make.

Speed? Check. Simpler syntax? Check. No need to "fix" code that isn't broken? Check. No need to waste my attention on every new fad? Check.


Why even use Make? I just use shell scripts for automation.

The only case where you really need the incremental behavior of Make is C/C++ builds (and arguably it's increasingly inappropriate for this domain as well). For all other kinds of automation I just use shell scripts, since Make is mostly a horribly reinvented shell script dialect.


Your post just punched me in the gut.

I've spent the last couple of weeks playing the "front end build tool dance" (primarily a back end dev) to find a system I like and it simply never occurred to me I could write my own as a quick bash script or python if I think it'll grow.

That's an embarrassing oversight.


Good :) Shell scripts will be around for a lot longer than any of those tools.

Shell is also significantly easier and faster if you just want to, say, run jslint, then minify, etc.

I program in lots of languages, so shell is the first tool I reach for. Learning the language is enough; I don't have time for all these quirky build and deploy tools.


The embarrassing part is I automate everything with the shell and still never thought of it :).


Build utilities like Grunt or Gulp will hopefully be more compatible across systems than shell scripts. I'm not too familiar with Node's support on various platforms, but I'd wager that it's probably decent.

Gruntfiles and Gulpfiles are in JavaScript too, which lowers the barrier to entry for developers who aren't as versed in Linux.

Is there a shell script equivalent to npm? Sidenote, that could be really useful as a service.


> Grunt or Gulp will be hopefully more compatible across systems than shell scripts will be.

On your own stack and dev environment, why would you care about compatibility of shell scripts? Do you change your stack much?


For open-source JavaScript projects, using Grunt or another Node-based tool makes it easier for Windows users to contribute. Node has excellent Windows support, while shell scripts are mostly the domain of *nix.


Yeah. In the past three years I've gone from Windows, to a CentOS VM, to an Ubuntu VM, now on OS X. Node has worked on all of those platforms. I can't say the same for my .BAT files.


You can pack several shell scripts into one Makefile as .PHONY targets. You can share configuration values and subtasks between the tasks, and it is somewhat easier to supply options to tasks, e.g.:

    make deploy SERVER=example.com
Shell syntax is horrible itself. Make doesn't add too much on top of that.


Make is the easiest way to map all .foo input files to .bar output files and have the outputs only rebuild when the inputs change. This has a ton of applications outside of C and C++, really anything where the build takes time.

A shell script cannot adequately express task dependencies; one that did would effectively become a build tool like make. As it stands, make has a very simple and light syntax for expressing dependencies and has remained useful with little modification for quite some time.

Make isn't necessarily the best automation tool, but it's a great build tool.
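
For example, the .foo-to-.bar mapping is a single pattern rule (a generic sketch assuming the coffee CLI is on PATH; recipe lines must start with a tab):

    SRC := $(wildcard src/*.coffee)
    OUT := $(SRC:src/%.coffee=build/%.js)

    all: $(OUT)

    # Each build/X.js depends on src/X.coffee and is rebuilt
    # only when the source is newer.
    build/%.js: src/%.coffee
    	coffee -p $< > $@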


Not writing task dependencies is kind of a feature though, in a KISS way. One of the reasons I have been tending towards using stupid shell scripts is that it's much easier to guarantee that everything is built correctly if you rebuild from scratch every time. With make I often find myself dealing with bugs in the makefile due to forgetting to specify some dependency or things like that.


Saying that Make is a horribly implemented Shell dialect clearly seems like a misrepresentation of Make to me.

Make is a non-procedural way to describe the dependencies of your build process. You describe what you want as a result rather than the steps that the interpreter should follow.

It's possible to like or dislike this approach but it's clearly qualitatively different from Shell scripting.


I understand that distinction, but my point is that for automation like running jslint or minify, the dependencies aren't that necessary. Those commands don't take very long, and often don't have long dependency chains.

Make basically adds a very small "declarative" dependency layer, and then reimplements the shell poorly:

   - variables (from the environment)
   - command tokenization
   - escaping rules
   - functions
   - if statements
   - include
   - globbing (in the target line)
   - various magic $ variables
   - process substitution
   - eval
   ... etc.
That is, it's a full-fledged programming language with different syntax than shell, but mostly the same features. I guess what might work in some cases is to have a very simple "actions.sh" shell script, and then a Makefile which ONLY has simple command lines calling into actions.sh. Any functions would be in bash rather than make.

This is actually nice because there is an "upgrade path". For most simple things, you can just stick with shell. But when you need the power of make, you can upgrade without rewriting anything -- just adding a short/readable makefile.


Autocompletion of make targets in the shell.


I wrote my own completion for that :) Here is the pattern I use:

   run.sh

   build() {
     ...
   }

   test() {
     ...
   }

   deploy() {
     ...
   }

   # This lets you do ./run.sh build foo bar at the command 
   # line.  The autocomplete scans for functions in .sh files
   # and fills them in as the first arg.

   "$@"


I've been using make myself. The main barrier I keep running into is that it's actually quite challenging to:

1. Use make/bash to work with things like JSON, mustache, images, markdown, less, sass, uglifyjs, etc. etc..

2. Do so in a way that is portable to even other unixish machines.

3. Why doesn't make provide an easy way to input a BIG LIST of files into a command? The choices (I'm aware of) are to put them all on one line, work out some wildcard (which doesn't work on arbitrary lists of files you need in a particular order), or have backslash-escaped line endings! Yuck!

nodejs isn't available in the Debian stable package repo. The available mustache command-line tools are pathetically bad at this task (I had to write my own). I can make it work beautifully on my machine, but as soon as it hits my co-workers' machines, the build breaks because they haven't installed pandoc, or ripmime, or whatever other utility I had to use to get things done.

So, I don't know, maybe I'm doing things wrong. But I haven't got this to work particularly well yet.

And uh.. windows. yep.


Well, to be fair, make and browserify do completely different jobs. You can't substitute browserify for make, nor vice versa.

But to be honest, if you see learning new things as wasting your attention on new fads, I don't think this stuff is for you. I really like trying new things that people have made and seeing what they can do. If that feels like a chore/hardship to you, you absolutely should just keep using make.


Believe it or not there are people in the world who aren't technical enough to use stuff like Make, and are perfectly fine with using tools like Grunt or Gulp to get them through their days. Mindblowing, I know.


Believe it or not there are people in the world who aren't technical enough to use Grunt or Gulp, because they are much more complicated than Make... Mindblowing, I know.


Really no.

The makefile syntax is obscure and hard to maintain.

We've been there. We don't need to go there again.

The stories of 'recursive makefile dependency hell' deserve to stay back in the dark ages of the 90s where they belong.

There's a reason lots of people are inventing new systems for doing these sorts of tasks (for example, spawning a local webserver to work with after compiling less and coffee files and parsing the output for errors and displaying them nicely, then watching the filesystem for changes and recompiling on demand).

It's not because make is evil, it's because make does a poor job of some of these sorts of tasks.

...if all you're doing is turning .coffee files into .js, it's fine, but it's not a solution for complex website builds.


Yes, Makefile syntax gets somewhat obscure once you move beyond the simplicity and power of target definitions and variables (e.g. http://mrbook.org/tutorials/make/).

Yes, Make does a poorer job at some of the tasks (although spawning a webserver process isn't one of them).

But I don't think having to write a plugin for EVERYTHING (http://gratimax.github.io/search-gulp-plugins/) is a sustainable answer.

P.S. I am sorry, I searched for "recursive makefile dependency hell", but I didn't find an explanation of what that is. It would be useful if you could provide some stories. I have been using Make only for two years, so maybe I haven't run into the exact problems yet.


He's probably referring to this old article - http://aegis.sourceforge.net/auug97.pdf which suggests that recursive make is bad because it can cause a lot of unnecessary processing even if you only changed one or two files. I think this just comes down to how you're using make - for me recursive make works just fine because I tend to run the specific target I need while I'm working (as it's usually in my cwd anyway) and only run the root `make all` on deployment.


Oh, how I long for the days when condescension was a thing reserved for Lisp programmers.


This is a tangential question, but how do front-end people feel about the constant change in the field?

I worked in the front-end and followed the trends for years and have found the changes difficult to follow. In 1997, the rage was VB, and lots of cottage companies set up shop advertising custom ActiveX widgets; on the web one had to learn ColdFusion and HTML/CSS. In the early 2000s, VB6 was retired in favor of .NET and a painful migration/learning curve followed. Meanwhile, PHP was gaining traction, so as a front-end person one had to also start learning the LAMP stack in addition to ASP, and also CSS hacks to get different browsers to render mocks.

Then around 2005ish is when AJAX/Web 2.0 started gaining traction, and one suddenly had to learn the burgeoning frameworks of the time: jQuery/MooTools/Prototype/Dojo/YUI/Sencha (at the time, no one knew which framework was going to win; I spent a lot of time on Dojo before moving to jQuery, which started to gain the most traction). At the same time, web sockets still weren't secure enough, so there was also a lot of demand for Flex/Flash/Silverlight. Then around 2008-2009, when HTML5 started becoming more popular, Flex/Silverlight became obsolete; JS mobile frameworks such as PhoneGap and jQuery Mobile grew in favor, but later in 2010-2011 they fell out of favor due to "responsive design" frameworks such as Bootstrap. Not to mention the native mobile tech stacks such as iOS and Android.

In addition, around the same time, next-gen JS MVC frameworks built on top of jQuery popped up, such as Backbone.js, AngularJS and Ember.js, and it's not certain who is going to win out this time in the year of 2014. On top of those, there are now JS AMD loaders (Require.js) and build/integration tools such as Grunt that one needs to set up for a project, which it seems may also be falling out of favor. Finally, new video/sound/web-socket standards revolving around HTML5 are demanding new learning bandwidth.

I'm frankly overwhelmed of learning and being exposed to new technologies. The physical draining feeling of learning new keywords to fulfill the same urges is as if I have watched 15 years of porn following from the grainy days of Jenna Jameson on VHS to the heady-days of Internet dial-up gonzo porn of the early 2000's that really explored anal (Gauge, Taylor Rain) to the streaming flash videos of Web 2.0 (Sasha Grey) to the now completely splintered and social-mediafied porno-world with all the mind-numbing categories under the sun (reality, high-art, webcam etc). I'm simply drained and spent.

There certainly have been changes in the back-end field, from Java applets to Spring and Struts to now Scala and Clojure on the JVM, or transitioning the scripting language from Perl to Python, and the adoption of Boost in C++. But I didn't have to re-learn old concepts, and the changes were incremental instead of revolutionary; the whole shift toward functional languages is not new when you've learned Haskell/Lisp in undergrad anyway. Whereas what I had learned as a 9-year-old on Turbo C doing DOS programming would still apply today, what I learned then for VB4 and HTML/FrontPage is now completely useless.

I'm scared for my brain as I get older as I may not have the time nor the energy to devote myself every year to relearn all of these new tech. I'm wondering for people who are above the age of 30, how do you deal with it?


Repost, don't know the original author:

I agree, I can't keep up, I just finished learning backbone.js and now I've found out on HN that it's old news, and I should use ember.js, cross that, it has opinions, I should use Meteor, no, AngularJS, no, Tower.js (on node.js), and for html templates I need handlebars, no mustache, wait, DoT.js is better, hang on, why do I need an HTML parser inside the browser? isn't that what the browser is for? so no HTML templates? ok, DOM snippets, fine, Web Components you say? W3C are in the game too? you mean write REGULAR JavaScript like the Google guys? yuck, oh, I just should write it with CoffeeScript and it will look ok, not Coffee? Coco? LiveScript? DART? GWT? ok, let me just go back to Ruby on Rails, oh it doesn't scale? Grails? Groovy? Roo? too "Springy?" ok, what about node.js? doesn't scale either?? but I can write client side, server side and mongodb side code in the same language? (but does it have to be JavaScript?) ok, what about PHP, you say it's not really thread safe? they lie?? ok, let me go back to server coding, it's still Java right? no? Lisp? oh it's called Clojure? well, it has a Bridge / protocol buffers / thrift implementation so we can be language agnostic, so we can support our Haskell developers. Or just go with Scala/Lift/Play it's the BEST framework (Foursquare use it, so it has to be good). of course we won't do SOAP and will use only JSON RESTful services cause it's only for banks and Walmart, and god forbid to use a SQL database it will never scale

I've had it, I'm going to outsource this project... they will probably use a wordpress template and copy paste jQuery to get me the same exact result without the headache and in half, no, quarter the price


The problem here seems to be that you're using HN, a site about what is new as a way of finding out what you should be working with. That's like asking an ADHD kid what game you should be playing. You'll get a different answer every 5 minutes. :-)



    I'm frankly overwhelmed of learning and being exposed to new technologies.
    The physical draining feeling of learning new keywords to fulfill the same
    urges is as if I have watched 15 years of porn following from the grainy
    days of Jenna Jameson on VHS to the heady-days of Internet dial-up gonzo
    porn of the early 2000's that really explored anal (Gauge, Taylor Rain) to
    the streaming flash videos of Web 2.0 (Sasha Grey) to the now completely
    splintered and social-mediafied porno-world with all the mind-numbing
    categories under the sun (reality, high-art, webcam etc). I'm simply
    drained and spent.

Uh...


Ah, so that's why it's called analogy!


The man really knows his porn.


Moral of the story is, if you'd spent the time you spent watching porn on programming instead, you would be a master of all these technologies.


you could also say he would be the "master of his domain".


Spending his time mastur-ing web technology fads is unlikely to be more productive than just watching the porn.


Yeah, I was not expecting that. Kind of offended. (a) I'm at work and (b) was not aware of the "splintered and social-mediafied porno-world" nor all its categories.

But props to a pun-intended use of the word "spent".


Isatmour, I apologize for offending you; I certainly didn't intend to impose pornography onto you nor condone porn addiction (being a porn addict, I know that it is no laughing matter, although I try to use humor to be honest and confront my issue). I guess I wanted to frame learning front-end frameworks as a kind of OCD/obsessive behavior similar to any positive or negative addiction you or someone you know might have experienced, and chose the "lowest common denominator" metaphor I could find: porn addiction. Again, I apologize to those who may find it offensive or an emotional trigger. And for anyone who's reading this and has this issue, I def. recommend this book: "The Addictive Personality" (http://www.amazon.com/The-Addictive-Personality-Understandin...), which has helped me understand and reform my behavior.


Thanks, I accept your apology, though for improvement I would focus on your writing. Habits are hard to change, but what you write is public and read by others. A careful eye (try not to post before going to bed) and use of the Edit button on HN can go a long way. Remember that not everyone has the same life and interests as you -- all we can assume is shared here is technology and business, so anything outside those realms spoken here is ... uncomfortable. I'd feel the same way if you'd talked about changes in filmmaking or fabric. Admittedly this was offensive, but it was more the bait-and-switch, unexpected nature of it. I'm agreeing with what you wrote, everything's great, then the paragraphs get longer and suddenly--Bam! It's not about technology but how the world is changing too fast. That's where you lost me. ;-)


It drives me nuts. I spend probably 1/5 of my time doing front-end (but have been around through all of the generations you mention) and after re-org after re-org I've finally settled on a build system with a Makefile and browserify to package things up that isn't an enormous monstrosity of 1000 different node packages and a bajillion-step build process.

Frankly, I think that a large part of the problems in front-end have to do with how hard it is to write maintainable JS with a proper separation of responsibilities, due to the way JS files are loaded and have no coherent module system (on the front-end, that is). Because there is no standard module system (and yes, I know about CommonJS and AMD loaders, but both have issues), to use a given component you oftentimes have to adopt an entire philosophy of package management that can lock you out of other packaging philosophies. In the end, we have millions of front-end programmers saying, "eh, it seems like too much work to integrate this packaging philosophy, I'll just write my own duplicate copy/library." So basically projects silo themselves off and share little code until someone decides on yet another package philosophy (see: http://xkcd.com/927/).

People love to write build system after build system, in every field, a fetish I've never quite understood. Makefiles build some of the most widely used/complicated packages out there.

And as a final note, I'm really excited about emscripten in allowing front-end developers to move away from designing abstractions around JS/DOM, in such a way that eventually we can stop relying on JS and rely more on the same primitives we use everywhere else in programming.


>People love to write build system after build system, in every field, a fetish I've never quite understood.

I have a firm suspicion it's what writing a framework, CMS, or blogging platform used to be.

In time the leader will show itself and people will get the message that this kind of project is no longer fun and trendy.

I have to agree makefiles are pretty much the be-all and end-all when it comes to building projects. Make offers pretty much all the power you could need.

The only plausible reason for makefiles to not be suitable for a culture brought up on shiny macbooks and "GUI's" would be that not many people want to deal with the complexity which comes from having that much power.

http://www.chemie.fu-berlin.de/chemnet/use/info/make/make_16...

Take this for example. It's pretty hard to look at, let alone understand. That scare factor is probably reason enough for a bunch of wheel 2.0s to be created. The 183-page plain-text manual doesn't help much either. It's too daunting for today's culture.

It's just not about being correct and feature-complete anymore; it's about having the path of least resistance. Simple makefiles are simple, but nobody upvotes a Hacker News article about how amazingly easy makefiles are.


It's all just marginal convergence towards "best", and in real practice, most of this "progress" can and should be ignored. For every tool, wait it out until it's been around and still in active use/development/maintenance for at least 5 full years.

But always stay playing. Always try out the new things, because some of them may just scratch a burning itch.

Fear not age, because if you've been around long enough, and are still actively learning, all this new stuff starts looking very much like mere variations of old things.


Perhaps I'm just rusty but age has only made me cautious. I'm responsible for a rather large and complicated UI framework that we use internally. The thing I've discovered over the last 5-6 years is that APIs are volatile as are JavaScript libraries, browsers and security concerns. It's terribly hard picking something that you can rely on. Even jQuery has enough breaking changes for me to have a week long task getting past 1.4.2 on our product. We have over 400 complicated web app pages that need testing to make sure we haven't broken any edge cases.

The learning curve is tough, the maintenance is tougher and the churn is terrible. It's a living hell on a large project. It makes me long for WPF or even Win32 where 20 years without significant reengineering is a good bet.

On the web, the basic DOM API is the most stable. Sacrificing convenience for maintenance these days is what I'm for. Screw the frameworks.


The problem with the frameworks is the very definition of the word. Not every problem/process fits in the workable frame, and then you end up with a pentagon-shaped framework with dingles off the end, and nobody knows what the hell you did except you. Better to just write straight prototype OOP and use a preprocessor that'll add KVO to accessors/modifiers if you really need events (which most apps do).

Emscripten is good for that. And there are many languages that have KVO that will compile to JS using emscripten. Really, JS isn't ideal for large projects, and these frameworks are silly, because they're targeting this lowly base for UI when a better language with better standard libs could be used to do lots of the logic, ie. Clojure.


>We have over 400 complicated web app pages that need testing to make sure we haven't broken any edge cases.

Reading your comment I get the impression that testing these pages is time-consuming and difficult.

What kind of QA automation do you have in place?


A lot, but it's not perfect by any means, and it's very time-consuming, as the test suites take 8h+ to run.


If the best practice of a wise developer is to wait 5 years before touching a tool, then that tool will be tested only by unwise people.

Devote 2% of your time and try out some of these things once in a while; don't bet on them, but at least be a small part of the community which will make them either better or fail.


This.

One mod: 5 years is a very long time in this space. Node itself is but 5 years old.


(front-end dev)

Couple years ago when Google+ was the new kid on the block, I made a tiny userscript that hooked into their DOM and cleaned the UI up a bit. You'd think this was an easy task and you'd be right except it was quite a pain in the rear to maintain the extension. Google kept changing the classes and IDs, and moved the DOM around so frequently (sometimes within hours of the previous change) that my extension was constantly broken, and all my time was spent tweaking my code to keep pace with the changes propagating from an entire team of Googlers and their automated commit bots. It wasn't long before I gave up on the effort.

Following front-end trends today feels exactly like that experience; there's a whole host of prolific authors, even teams, coming up with new approaches for almost every nut and bolt in the stack. I think for the time-constrained it's best to wait for the wheat to rise above the chaff, even if it means falling behind the curve a bit.


There's a few things going on here that combine to cause this mess.

First, task runners, like templating systems and module bundlers, are easy to write so there are lots of them. Grunt in particular doesn't bring anything to the table that bash scripts don't.

Second, most open-source projects don't make their value prop clear (I learned this the hard way first-hand and I'm still dealing with it) and most people don't have a good rubric to evaluate technologies so they fall back to crappy ones like gzipped size, number of dependencies, or the twitter account of who wrote it. Increasing the level of understanding of performance and how system complexity evolves over time is an important next step for the community to take.

For example, I think the excitement around Gulp is legit because the tasks are composed as in-memory streams, which is a scalable way to build a performant system. Browserify not so much, since it doesn't bring anything new to the table except maybe that it's so damned easy to use ("philosophy" does not count as bringing something to the table). Webpack, on the other hand, is a whole different story, since it accepts that static resource packaging (not just single-file JS modularization) is a problem that needs to be tackled holistically and necessitates a certain level of complexity.

I named specific projects not because I have any vested interest in them (I don't really use Gulp for anything) but because I wanted to show concrete examples of how to evaluate technologies on real merit.

Finally, the web frontend community has a huge problem with NIH (not-invented-here) syndrome which is encouraged by npm. For example, there are lots of copycat data binding systems that claim to be "lightweight" or "simple". They're usually written by people that don't know about all of the important edge cases which necessitate certain design decisions or library size. It goes the other way too -- a lot of people are building monolithic app frameworks without doing due diligence on existing systems to see if they can be reused.

If we can slow down and try to respect what others have done and acknowledge what we may not know, I think we can fix this problem.


> Grunt in particular doesn't bring anything to the table that bash scripts don't.

Other than a simple declarative API for all of your build tasks and a huge ecosystem of tasks.


Yup, exactly. A generally clear, new-developer-need-not-know style of `grunt dev` and you're out the door. And say the new dev needs to change something once he gets running? Hardly an issue. Just look at what is, for most cases, essentially JSON.

Grunt's clarity has a ton to offer. With Gulp, while there are some speed increases, that clarity and DRY mentality disappear.
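
i.e. a Gruntfile in roughly this shape (a minimal sketch with grunt-contrib-uglify standing in for whatever tasks a real project runs):

    module.exports = function (grunt) {
        // All of the configuration is plain data; the plugins interpret it.
        grunt.initConfig({
            uglify: {
                build: {
                    src: 'src/app.js',
                    dest: 'dist/app.min.js'
                }
            }
        });

        grunt.loadNpmTasks('grunt-contrib-uglify');
        grunt.registerTask('dev', ['uglify']);
    };

`grunt dev` then just runs the listed tasks in order.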


Interesting point regarding npm and NIH syndrome. I think part of the problem is lack of discoverability, which has led to balkanization of the ecosystem. The next big wave is probably sitting in someone's undiscovered GitHub repo, written by someone who simply never found the current best-in-class tools.

I think JS is an interesting case study as well because it has historically shipped with such an impoverished standard library/distribution.


> Grunt in particular doesn't bring anything to the table that bash scripts don't

One thing that comes to mind is cross-platform builds, which in some scenarios is very useful.


Also, how do I call a JS function in a Bash script? There are lots of JS libraries that existed before Grunt that are useful to incorporate into a build system. If I was relying on Bash scripts, I'd have to write all sorts of wrappers in JS anyway.


#!/usr/bin/env node

What a heavy wrapper. ;-)


I'm 33 and a frontender. To be honest, it doesn't bother me too much. I think there are a core set of skills that see you through all of the change. Things like: knowing how to work well in teams, working well with graphic designers, experience with how sites work in terms of UX, web service integration, good understanding of backend structures and HTTP, estimating on projects, dealing with clients, dealing with management ... these are the tricky things that make good developers great to have on projects, I think. None of the new tooling, workflow and languages that come around are rocket science, and you can get up to speed on something in a few hours, especially if you have knowledge and experience of what came before and the problems the new tools are trying to solve. I still enjoy learning new things, and I don't think we can expect the rate of change to slow down - if anything it may speed up. It's a young industry; nobody knows the right way to do things yet, let alone what the "end game" state of interactive information delivery to humans will look like!


As I've gotten older I've noticed I've become more pessimistic about changes in the field. Occasionally a really good idea comes along and sticks. A lot of the time, though, it feels like a new hit framework is cooked up every week, and experience has taught me that this week's hip framework can quickly turn into last year's boring support nightmare.

In general I think we're heading towards better things... you just have to watch out for the warts along the way.


> This is a tangential question, but how do front-end people feel about the constant change in the field?

I deliberately hang back on investing time in something unless it's immediately, drastically simpler than what's there now.

- My 47-line Gruntfile became a 23-line gulpfile, and I understood it better, so I learnt gulp.

- I don't see any huge advantage in using browserify, just syntactic difference, so I'm sticking to RequireJS right now.

- After reading about ractive and how simple it was (have an object, have a mustache template, you have bindings) I started using it in place of Angular.


Speed is the reason. If you have gruntfiles that rely on a lot of compilation, you know the pain of 5 or 6 second build times; with Gulp it's like half a second.


Full-stack dev that came up through the front-end ranks here - I'm in my early 30s and have been doing this in some form or another for 17 years. I started with Perl and C CGI scripts, worked with Java Swing (on the desktop) for a while, had a brief foray into MFC, did a whole bunch of PHP in college, switched to Django/JQuery while working on my startup, and now use a whole bunch of Google-proprietary techniques along with the native browser APIs.

I've found that the best way to stay sane is to ignore the hype. After having worked with a dozen or so technologies, I can say that knowing your preferred toolkit well matters a whole lot more than which toolkit you know. Nowadays I usually tend to prefer whatever is native to the platform (vanilla JS for web, Java for Android, Objective C for iPhone), because it will perform better, there are fewer opportunities for bugs to creep in, and you can access more of the platform functionality without waiting for frameworks to catch up.

It was actually uTorrent (remember that?) that caused a major shift in my thinking: before that came out I was all like "Yeah, frameworks! Developer productivity! I want to squeeze every inch of convenience out of my computer!", but then here was this program that was a native Win32 app, no MFC or Qt or Python or anything, and it blew the pants off the competition because it didn't use a framework. I didn't want to deal with Win32 at the time, but Web 2.0 was just getting started, and within a couple years I was doing a startup with Javascript games, and I found that if I wanted acceptable framerates in a game I couldn't use JQuery and had to learn the native browser APIs, and the difference between them was incredibly stark (this was pre-V8, when no browser shipped with a JIT, and I was getting about 2 fps with JQuery and about 15 with native browser APIs). And once I'd used the native APIs for a bit, I found they weren't actually that bad; they were more verbose and less consistent, but not in a way that seriously slowed me down.

I think it does take a certain amount of security and confidence in one's own abilities to do this, because it definitely makes you uncool. I've been in Hacker News threads where people are like "No. What the hell are you smoking?" when I suggest that you might not need a JS framework. But the folks who matter to me respect my abilities enough to pay me generously for them, and not staying up-to-date with the latest and greatest gives me time to cross-train in other fields like compilers, machine-learning, data analysis, scalability, UX design, management, and other skills that I've frankly found significantly more useful than yet another MVC framework. If I need to use a framework I'll learn it on the fly; I've done that for several projects, and (after I learned my first half-dozen MVC & web frameworks) it's never taken me more than a week or two to become proficient in another one.


When you start using any framework that's supposed to do the things you don't have the time to learn yourself, you take on technical debt. For example, instead of learning how to create mobile web applications, you start using jQuery Mobile, Sencha or PhoneJS. Now you end up spending time learning both the CSS/HTML stuff and the framework to get the end quality you want.


I agree with all of that, but will point out that a benefit of a framework, or any extensible tool, is community contributions, and opting for native means you don't get that benefit. That being said, when JavaScript ES6 modules land it'll be much easier to reuse code and dependence on big libraries will become unnecessary; meanwhile you can use Browserify or Component, which have a lot of community contributions with minimal dependencies on big frameworks or libraries.


We are programmers. Our job is to create abstractions and remove repetition. I understand you're making a general point but what happens when you take your argument further? Machine code? Writing bits to a magnetic platter with a magnetised pin?

No. The goal is to find the right abstractions - the right libraries/frameworks/patterns etc. If your tools end up costing you time then the tools are flawed - not the concept of using tools.


Having opinions is fun but you misunderstood me with passion :)

I'm simply saying that when we have a better way to package up software on the frontend, there will be less reliance on monolithic libraries and frameworks, and the right abstractions will be easier to write, find, and maintain. One of the greatest things about Unix is its modularity and composability, and Node.js adopted that philosophy; that's why I recommended Browserify and Component. They are a way to package and distribute the abstractions you speak of.


I've always seen 'boutique' development as something that eventually requires transcending frameworks in favor of optimally architected code. uTorrent is an example of a high caliber of development.

In a world where people relentlessly defend good enough (generally, engineering advice on HN is often straight-up amateur hour), it's telling that great sometimes requires abandoning the tools that hipsters clamor for.

All you need is a great UI design, good architecture, and a keen eye for details.


I've always enjoyed the change, and the re-start of standardization efforts reminds me of the fun of the bad old days, minus the massive browser incompatibility. For me the transitions weren't about being "forced" to do anything but rather a continuing quest to find something that sucks less. I've been on the w3c/mozilla bandwagon since '98, so I've avoided 90% of the plugin thrashing you mention. The DOM libraries are different interfaces over the same underlying API, so they work the same. For the MVC libraries, I've been exploring the space since 2008 (I was writing docs to release my version of the Backbone library when Jeremy released Backbone, and his code was better), so I don't see it as new and upcoming. Having a build seemed obvious when I started writing 10k+ sloc apps, since I'm not going to put it all in one source file and making 30 requests for js is terrible; I had a rake script I copy/pasted around for years before the node build systems showed up. AMD always seemed like a solution in search of a problem to me, and I just concatenate everything.

For what it's worth, we're about to go through another generational shift in frontend tech. There are a few major developments in the pipeline: ES6 generators and Web Components. Generators allow for different async flow control patterns [1] which greatly changes the feel when writing JS. Component systems (Ember, Angular, Polymer, React) offer significantly improved state control and huge code savings. If you aren't already using one, you will be in the near future but it's still early enough that it's unclear which will come out on top. There's a set of emerging standards work around enabling self-contained Web Components (shadow DOM, template tag, scoped CSS) but these don't dictate how you structure your code so there's still room for conflict.

[1] https://github.com/petkaantonov/bluebird/blob/master/API.md#...


I am a full stack developer and spend a fair bit of time on the front end. Like you I feel a lot of this work will be made obsolete very soon but I am resigned to the fact because it has been going on like this for years.

I find the best strategy is to wait for wider adoption and hedge your bets. Gulp looks great but depending on the amount of time you can spend on it, it's best to wait for the ecosystem to mature (more plugins, more framework support, migration of existing Grunt plugins etc). I would give it another year or so because community migration can take time (backward compatibility, dependencies, endowment effect, etc).

The other thing that works for me is to adopt frameworks with a lower learning curve even it requires more manual plumbing. Plumbing is cheap and you can always refactor. Backbone JS was easy and I am looking forward to Riot JS because it has no learning curve as long as you have a good familiarity with writing robust JS.


HTML/CSS, .net, PHP, LAMP, AJAX, Jquery, HTML5, Backbone, Grunt, PORN, Gonzo, Anal, Sasha Grey, Scala, Python, C++


Brb. Updating my LinkedIn profile, "originally a front-end developer, but later foray into Gonzo/Sasha Grey has really opened up my interest in implementing modern concurrency 'best practices' in the back-end."


What's the point of developing stuff in the first place? To make the world a better place, to learn things, to grow. Will that be accomplished so much better by switching from Perl to ASP to Ruby on Rails to Node.js to Go?

Look, there will always be a new tool|library|framework|language|paradigm that is cooler and better. If you pick the coolest language available today, you will be "so 2014" in a few years.

We need to be more pragmatic and use the tools that make our mission easier. Sometimes that mandates a complete rewrite in a new language|framework, but that is rare.

What the article calls "revolutionizing" is really not that big a deal. Maybe it makes things a bit easier, faster, prettier -- but it won't make or break your company.


The thinking might be different for behind-the-scenes basic tools versus more UI-level libraries.

For this OP, the move from Grunt to gulp, for instance, is a simplification of the build process. It requires some refactoring of the build process, but shouldn't impact the application code; the two libraries can coexist (not cooperate, I guess, but at least you can set up both and use one or the other as you want), and it can be tested with simple use cases first and expanded to the whole app afterwards.

The barrier to entry is low, it doesn't require much commitment and can be done on the side. I'd try to keep up as much as possible with new available tools, as long as they match the above criteria.

For libraries and frameworks closer to the UI and application structure, I have the feeling they generally need more time to pick up: learning the strengths and weaknesses, dealing with the quirks and bugs. Even with reasonable documentation, most of them seem to need at least a few dives into the source code to really get how they work and what they expect to be doing.

Trying Ember or Angular on a somewhat realish project takes enough time to make it a chore to try a few alternatives; I'd guess most devs would want to wait months or years to see which libraries die in infancy. I think for this space, trying more than one or two picks here and there a few months apart is just insanity, unless you really enjoy it or it's part of your job. As time goes by, I feel the timespan I wait to see if something sticks grows longer. I remember a Jeff Atwood post [1] about how Ruby is now mature enough to be taken seriously.

I think the same thought process can be applied to big enough frameworks.

Personally I'd tend to go for the libraries that are simpler or have the least 'magic', to avoid getting into a situation where I've invested weeks doing something and there's a bug I can't trace and need to spend days on because of the amount of abstraction going on. That's a way to mitigate risks when trying out random libraries.

[1] http://www.codinghorror.com/blog/2013/03/why-ruby.html


Sorry I'm late, but I thought I should weigh in. This problem used to frustrate me but I realised that it just isn't worth worrying about. I wrote a big whiney article about it a year and a half ago: http://danielhough.co.uk/blog/i-cant-keep-up/


Interesting question. I feel that if you are good in a particular area (say PHP / CSS / JS / HTML) then you will find each new component/framework actually simplifies your life or makes it easier. The trouble is if you are more of a heavy JS developer and are typically labelled as a web developer; then you feel like you have to know about SASS (CSS), the HTML5 video standard, Scala and everything else related to the web field!

I think that as web development grows and matures, it will finally have multiple experts who need to work together, with each expert having no problem catching up with or using the latest paradigm in their area.

I would recommend you specialize / try and learn whatever you are good at.


> I feel that if you are good in a particular area (say PHP / CSS / JS / HTML) then you will find each new component/framework actually simplifies your life or makes it easier.

I disagree. Frameworks/components are written by human beings; some are a good idea, others a bad idea, and many a complete mix. I would hazard a guess (stats was never my strength) that 50% of new frameworks are a bad idea and/or complicate matters down the road.

> I think that as web development grows and matures, it will finally have multiple experts who need to work together, with each expert having no problem catching up with or using the latest paradigm in their area.

We're there already. But the pace of change means keeping up is still an issue (especially once you have a partner/family and don't want to spend evenings/weekends playing/learning), and actually you need to see across areas for some topics (front-end performance being one).


be realistic, this is akin to your team changing a few lines in a makefile,

sigh, javascript people,


You're like Lance Armstrong after he retired from road cycling and decided to get into mountain biking because it'd be so easy to stomp the competition in a race with merely 20 miles of distance and 5000' of climbing. Needless to say he got his ass handed to him because there was a whole world of technical skills which he did not even know existed. Riding up a rutted, rocky, rooty slope does not allow you to stand up and leverage your superior legs and lungs because guess what your back wheel just slipped out and you wasted a ton of energy falling over.


lol. i only do recreational drugs.

and i'm actually a javascript people myself. And i just moved away from grunt in my makefiles (not to that though) a couple weeks ago. my comment still stands.


> my comment still stands.

My analogy is one of the best in the history of ill-advised programming analogies, the more I think about it the better it gets (of course it may be lost on people who aren't mountain biking full-stack developers). Your comment on the other hand doesn't have a snarky leg to stand on.


I could say I'm a pretty good mountain biker who is laughing at road bikers discussing whether shaving a beard would trim down enough weight to win a race or create more drag.


As a neutral third-party observer, I've enjoyed the exchange and only lament that, were this me at work, I'd know how much time I'd have spent crafting the perfect comeback to someone on the Internet, and how much pride I'd have taken in my work, only to know in the deepest part of my heart that it is a "labor of love" like the sound of one hand clapping that no one will ever hear or appreciate. Kudos to you both, gentlemen.


One tiny benefit of Gulp: Grunt wasn't packaged on Debian because of JSHint, which derives from JSLint, which has the "The Software shall be used for Good, not Evil." licence clause: http://en.wikipedia.org/wiki/JSLint


The people in that Debian thread[1] completely misunderstand how Node packages work - the 'jshint' dependency is in the 'devDependencies' section of the package, which means it is not installed by default when `npm install --production` is used, which is how it should be packaged in the first place.
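
Concretely, the split looks like this (a hypothetical package.json fragment; the version numbers are made up):

    {
        "dependencies": {
            "async": "~0.2.9"
        },
        "devDependencies": {
            "jshint": "~2.4.0"
        }
    }

`npm install --production` pulls in only the "dependencies" block, so jshint would never land on an end user's machine.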

Refusing to package Grunt for Debian because it has a devDependency on JSHint is like saying that Node can't be packaged because one of the contributors used a non-free IDE to develop it. No actual non-free software is distributed as part of the code.

Regardless, it's not a big deal to `apt-get install nodejs; npm install -g grunt`.

[1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=673727

Additionally, I'd like to take this moment to remark that Crockford is being completely pigheaded by not removing the "for good, not evil" part of the JSLint license. It is completely divorced from reality and detrimental to free software in general. I understand that he thinks he is being an idealist, but there are better ways to promote doing good in the world than assuming that somehow, somewhere, evil-doers are writing evil JavaScript (and they would get away with it too!) if only they weren't blocked by your license and could lint their code.


> Regardless, it's not a big deal to `apt-get install nodejs; npm install -g grunt`.

Unless of course you have automated the installation of software on your systems. At that point it's somewhere between confusing, mildly annoying and terribly annoying, depending on what route you go.

You could make a Debian package yourself, or you use your configuration management system to install the npm package (assuming it supports npm or there is an npm plugin)

Both have disadvantages: there are tools to automatically create debs from npm packages, but I've seen bugs with them causing various levels of pain. Or you make the package yourself and get blasted by the full-on bureaucracy that is the Debian package build toolchain.

Going the configuration management route has the disadvantage that now multiple pieces of software are used to install packages which is a bit inconsistent and might be confusing to other users.

Yeah. Not a big deal, but getting stuff directly from your OS vendor is VERY convenient.

I agree with your reasoning about the devDependency, though. Refusing on those grounds doesn't make much sense.


Gulp looks very interesting. Browserify vs. RequireJS seems more complex to me, though - it isn't difficult to use RequireJS in the CommonJS pattern. When I tried Browserify it was incredibly slow to build - but that might have been me setting it up wrong in Grunt...


I think the author oversimplified the choice of Browserify and RequireJS. They have some overlap, but are not 100% replacements.

Both have the concept of modules, and they overlap in this area.

Browserify allows you to use Node style modules in the browser.

RequireJS allows you to asynchronously load parts of your app, while Browserify bundles all JS into a single file.


To further clarify, RequireJS also allows you to bundle all JS into a single file via the r.js build step.

If you're planning on going that route with RequireJS, you should also look into AlmondJS: https://github.com/jrburke/almond
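
For reference, the single-bundle route is driven by a small build file passed to the optimizer, e.g. `node r.js -o build.js` (a sketch; the paths and module names are placeholders):

    ({
        baseUrl: 'src',
        name: 'main',                  // entry module
        mainConfigFile: 'src/main.js', // reuse the runtime paths/shim config
        out: 'dist/main.built.js',
        optimize: 'uglify'
    })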


If you do end up using RequireJS, using r.js is practically a requirement. Declaring asynchronous dependencies in a production build is murder for performance.

Grunt's grunt-contrib-requirejs[1] does a good job of traversing your dependency graph with r.js and building out a single bundle, but it is not especially quick; it takes about a full 3-5s on my current build. But it's a hell of a lot better than the alternative.

Of course, if I were to start over, I'd run browserify and either build the files in a build step with Grunt or Gulp, or just use browserify-middleware[2] and skip the build step entirely (with something like node-sass for styles).

[1] https://github.com/gruntjs/grunt-contrib-requirejs [2] https://github.com/ForbesLindesay/browserify-middleware


Well, you are right for most front-end applications, but what is nice to have is modules, and you don't want almond for that. By modules I mean major pieces of application code that you don't want to load until the user triggers an intent to need that stuff. If you have a single-page application and most of your users are just consuming content, why load all the config and editing-tool JavaScript? Make modules for pieces of your app, and then load the JavaScript for those things only when needed. User clicked edit? OK, now load the JavaScript for that stuff.

Browser caching is nice, but if you're doing the continuous integration/deployment thing, you're probably invalidating your cache often. If you use a single build, you may be invalidating all of your JavaScript on each tiny deploy even though maybe 1% of your JavaScript changed. Another reason to want modules.

I don't know if Browserify can do that.


I don't have experience with Browserify, but if it does not support a modular architecture, then it's more of a regression in comparison with Require.js.

Using the r.js optimiser, we are able to create several modules for our SPA that include logic and templates, and load them at runtime only when required. By default, we load the most used bits of our application, and load the rest when the user accesses the more rarely used views.
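
With RequireJS, that on-demand loading is just a runtime require() call; a rough sketch (the module name is mine):

    // fetch views/editor.js (and its dependencies) only on first use
    document.getElementById('edit').addEventListener('click', function () {
        require(['views/editor'], function (editor) {
            editor.open();
        });
    });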


I just started using browserify after using RequireJS for a while, and also got annoyed at how slow builds were. Then I searched around and found watchify, and it's great! Build time during development isn't painful at all anymore.


I've been using a different strategy lately: no explicit build process at all, just use middleware (like browserify) to automatically compile/compress/concatenate, and an HTTP cache in production to store the results (either middleware, or external such as nginx/Varnish/CloudFlare/whatever).

This helps with the principle of minimizing divergence between development and production environments.

It's worked well for me for small projects, but maybe there are issues scaling up? Has anyone else tried this approach?

One interesting possibility is to make use of the Gulp ecosystem. You could imagine a "gulp-middleware" that lets you use any Gulp-compatible module on the fly.


This worked fine for us for a year or so.

Now, I sometimes need to change stuff when there are a couple thousand online visitors on one of our sites. Their requests gang up on the middlewares and cause duplicate compilations. We already have reverse proxy servers to keep duplicate requests waiting for the first one, but that still leaks due to different browser capabilities and such.

Having everything prebuilt makes more sense at this point.


Just FYI (and anyone else listening): this is called "dog-piling" if you want to look around for solutions: http://en.wikipedia.org/wiki/Cache_stampede


You can totally use gulp plugins outside of gulp. Since they are just streams, you can do some really cool stuff. I would love to see an http wrapper so you can do wrap(req).pipe(uglify()).pipe(res)
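
Since gulp plugins consume vinyl file objects rather than raw byte streams, a working version of that idea would look more like this purely hypothetical sketch (no such "gulp-middleware" exists yet, as far as I know):

    var http = require('http');
    var vfs = require('vinyl-fs');       // the same file source gulp uses
    var uglify = require('gulp-uglify');

    http.createServer(function (req, res) {
        vfs.src('./client/app.js')
            .pipe(uglify())
            .on('data', function (file) {   // a vinyl File in buffer mode
                res.setHeader('Content-Type', 'application/javascript');
                res.end(file.contents);
            });
    }).listen(3000);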


The main issue I have with Grunt and other frameworks/build tools is plugin authoring. Can't we just put this stuff in Make files and be done with it? I know, I know, Windows. But it seems like an awful lot of time is wasted building plugins that are just dumb wrappers for CLI options. Or worse, the tool's options get buried deep within the framework, ramping up the learning curve. As a recent example, I stopped using the Grails' resource plugin, and just compiled my static assets outside of Grails. The workflow became simpler because they didn't know about each other--I could use each tool to its full potential instead of relying on often incomplete plugins.


I agree exactly, and have ignored Grunt and now Gulp in favor of Make. Gulp looks to be basically reinventing standard I/O, when we should just be shipping these npm-based tools with a bin that works via Make or spawn/exec in any other build tool. It makes all the more sense now, too, with the early trend toward virtualizing development environments in Linux VMs.


Skip the require vs browserify debate and use https://github.com/webpack/webpack.

It allows for AMD, CommonJS, and almost any other module format around. With gulp and grunt plugins!
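
A rough sketch of a webpack config, assuming the current 1.x-era options (paths are illustrative):

    // webpack.config.js - webpack resolves AMD and CommonJS requires alike
    module.exports = {
        entry: './src/main.js',
        output: {
            path: __dirname + '/dist',
            filename: 'bundle.js'
        }
    };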


I've been messing around with require.js and browserify all day, but I don't really LOVE either solution for a large angular app. I'm going to give webpack a try. Thanks


Our team is using Brunch on an Angular app, and layering a proper module system on top of Angular's "module" system does indeed feel wrong.


Ahh, finally, a flow-based build system for Javascript. Now if they would just port it to NoFlo and hook it up to a GUI…


I think I'll wait 24 hours before changing over my entire workflow...


One fact that has not been mentioned yet is that gulp plugins (i.e. simple object streams) are very easy to unit test. (With grunt there still is not an easy way to unit test plugins.) Also by way of their nature as streams they tend to be small and composable, again making them easier to maintain.
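
For example, a test can be as small as writing a vinyl File into the plugin's stream and asserting on what comes out. A minimal sketch, where header() stands in for a hypothetical banner-prepending plugin:

    var assert = require('assert');
    var File = require('vinyl');
    var header = require('./gulp-header'); // hypothetical plugin under test

    var stream = header('/* banner */\n');
    stream.on('data', function (file) {
        // the plugin should have prepended the banner to the contents
        assert.equal(file.contents.toString().indexOf('/* banner */'), 0);
    });
    stream.write(new File({ contents: new Buffer('var x = 1;\n') }));
    stream.end();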

Also, there is already an effort going on towards finding a generic API for node task runners at https://github.com/node-task/spec/wiki


There have been numerous posts stating that Gulp is faster than Grunt. However, the "faster" argument needs to be qualified with "under these conditions ____". Even better: "here's my configuration, here's the processing time of x, and here's the processing time of y."

With JavaScript builds in particular, it's important to separate when the task runner is used. At least in my workflow, I have two distinct times:

1. Development time
2. Build time

"Development time" is when the task runner is used during development, such as running livereload to update the app with saved changes to source files. Speed during development time is very important.

"Build time" is the time used when building a production package. With build time, if one task runner is 5 sec. vs. 10 sec. it makes little difference.

Is one task runner always faster?

What are the differences in performance?

And are these development time or build time differences?

I'm all for new tooling, but let's have measurements in place so that people can make intelligent decisions about when one vs the other is right/wrong for a given project.

As for configuration complexity, it would be great to see a large production configuration of Grunt and Gulp.


I've used Grunt for most projects in the last year, and just tried Gulp for my newest.

Gulp seems snappier overall to me for tasks that watch a source directory with coffeescript or sass files in it. Maybe it's grunt-watch vs gulp.watch().
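
For what it's worth, my gulp.watch() setup is just this (task names and globs are mine):

    var gulp = require('gulp');

    gulp.task('watch', function () {
        // re-run only the relevant task when a source file changes
        gulp.watch('src/**/*.coffee', ['coffee']);
        gulp.watch('styles/**/*.scss', ['sass']);
    });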

I think Grunt touches the filesystem more than Gulp, which could potentially be a bottleneck. The creator of Gulp (IIRC) made a slideshow[0].

Here's an example Gruntfile[1] generated with Yeoman and generator-angular. And a Gulpfile[2] which does fewer things (no production build yet).

[0]: http://slid.es/contra/gulp

[1]: https://github.com/jjt/dramsy/blob/master/client/Gruntfile.j...

[2]: https://github.com/jjt/LUXTRUBUK/blob/master/gulpfile.js


If you want to compare build time, it would be best to add plug-ins that trigger only the tasks that deal with changed files:

https://github.com/osteele/grunt-update

https://github.com/goodeggs/grunt-skippy

https://github.com/aioutecism/grunt-diff

https://github.com/sindresorhus/gulp-changed


gulp is always going to be faster than grunt as long as grunt writes temporary files to disk. Using 3 plugins processing 20 files as an example: grunt will do 60 reads and 60 writes, while gulp will do 20 reads and 20 writes.
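
In gulp terms (the plugin names below are placeholders for any three plugins), each file is read once, transformed in memory, and written once:

    var gulp = require('gulp');
    // stand-ins for any three gulp plugins
    var pluginA = require('gulp-plugin-a'),
        pluginB = require('gulp-plugin-b'),
        pluginC = require('gulp-plugin-c');

    gulp.src('src/**/*.js')          // 20 reads
        .pipe(pluginA())             // in-memory
        .pipe(pluginB())             // in-memory
        .pipe(pluginC())             // in-memory
        .pipe(gulp.dest('build/'));  // 20 writes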


Gulp seems more like a feature and less like a build system. In fact: http://blog.evanyou.me/2013/12/29/gulp-piping/ and https://github.com/wearefractal/vinyl-fs

Grunt works just fine. Make is probably ok once you learn it. It is a shame that we as a community don't take time to create amazing tutorials for our older tools like we do with our newer ones.

I think the main issue ends up being that GitHub has created a double-edged sword in open source dev land. You have people creating projects just so they can earn stars and thus resume fodder. This ends up creating packages with no maintainers and multiple projects that do the same thing. I mean really, how many different string-parsing npm libraries do you need?


Wonder why the website hosting the tutorial mentioned in the article is "on vacation"?

Here's a rather ugly Google-cached version: http://robo.ghost.io/getting-started-with-gulp-2/


At least the guy admits he is an early adopter.

All I really get from this blog is that some influential guys approve of these two new JS libraries.

For someone who still builds website HTML, what are Grunt vs Gulp originally supposed to do? And why does Gulp do it better?

And a similar question for RequireJS / Browserify?


I have no intention of throwing out Grunt or RequireJS just because there's new hype around some new tools. It takes time to adjust things, and I'm perfectly satisfied with how they perform now. Also, I somewhat doubt that they are a 1:1 replacement.


Can we get Python in the browser already... please?



To be honest, I wish people would stop mentioning this project until it is actually usable for real projects. I attempted to build a sass task for Fez and failed when I realized that the core project simply cannot be configured to do very simple things. In the case of the sass plugin, Fez does not support globs, making it difficult/hacky to properly ignore files prefixed with "_", a well-known sass convention.

I agree that Grunt syntax is not fantastic. It has improved since 0.4 but the reality is that the average "long" Grunt build has a whole ton of customization thrown into the mix that is simply not possible with newer tools such as Gulp or Fez.

Additionally I really don't think that the comparisons are fair. Yes, it's nice to declare a fileset and define transformations on it, rather than defining transformations and declaring which files you want to use with each one. But I could easily show you an easy-to-read Grunt configuration that is just as simple as the ones you see in Fez or Gulp marketing.

I applaud what these tools are attempting to do, but we need to keep the comparisons fair.


Indeed, Fez is not quite ready yet - it's still in the development/experimental stage.


There is an entire range of build tools. Some devs still use `make`, bash, and/or hand-written node scripts. Once you have them set up, and keep the same folder and misc. patterns, they can all be satisfactory.

The advantage of Grunt is that it has a huge ecosystem. But Gulp has many of Grunt's heavy-hitters building plug-ins for it. So you can look at Gulp as front-end-build-tools v3.0 (Addy & Paul Irish were pushing Ant before Grunt).


Well (taking cues from the article) it has been a couple of weeks.

This is the first I've heard of Gulp though. Looks promising.


Hmm, I don't see much difference in configuration between grunt and gulp. Both do the job.

And performance? Which performance? Both just call tasks that do something, and it's the tasks themselves that need the performance.


Agreed. Honestly, I had little to no experience with configuration files like those used in Grunt before a year or so ago, but I don't find what I've seen so far of Gulp any more intuitive than Grunt.


A typical build may look like this: Task 1 passes many files to Task 2, then to Task 3, then concatenation, and so on. It's all about disk I/O performance. Piping streams reduces file reads and writes.


Gulp is written in a way I've always wished Grunt could be used and I have already started using it in new projects. RequireJS is still my front-end module loader of choice.


Seems like it's only a matter of time before the "de facto" status of our build tools fluctuates as rapidly as Bitcoin's valuation.


It seems many people are comparing Grunt to Gulp without considering the subtitles:

Grunt is the "JavaScript Task Runner" and Gulp is "The streaming build system"

There is not much point in using grunt for building or gulp for running tasks, IMHO.

My team is moving towards using gulp for js/css builds and make for tasks. We used grunt before. We probably should use npm for tasks, though (cross-platform).


How often do you have so many dependencies in a web app that you actually need something like RequireJS or Browserify?


I have an anecdote for that. I work at a bank, and in the past two years we've built a front-end for investment banking using BackboneJS and some other frameworks. This application has close to 200 views, 26 routers, and probably 200-300 models/collections, totaling about 30K lines of front-end code. You really have to use RequireJS or Browserify for a project of that scale, or even for any non-trivial project.

New projects (and the one I'm working on at the moment) are built in AngularJS, which has a built-in dependency system. We're not going to use RequireJS as a script loader anymore; in practice, it just loads all the files on application load, which doesn't really add anything. I like Angular's simple approach of loading a bunch of scripts using script tags. The existing application is going to get rewritten in Angular too; it's expected that it'll need a lot less code.
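
For contrast, a quick sketch of Angular's built-in DI (all names here are mine): modules and services are registered and injected by name, with no script loader involved:

    angular.module('app', [])
        .factory('TradeService', function ($http) {
            return { recent: function () { return $http.get('/api/trades'); } };
        })
        .controller('BookCtrl', function ($scope, TradeService) {
            TradeService.recent().then(function (res) {
                $scope.trades = res.data;
            });
        });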



100,000 lines of JS is best divided into smaller parts.


I'm still not sure what browserify really adds over concatenation. I never noticed any speed increases. I guess the one thing would be explicit dependencies, but I've always found you can get that with module patterns anyway, and it breaks half the libraries you want to use.


For front-end apps it's hard to not recommend Brunch[1]. It gives you the module system of Browserify and some of the build power of Gulp/Grunt with much less configuration.

[1]: http://brunch.io/


Is the world ready yet for a cross-platform build tool? One that can compile Java, lint Ruby, and concatenate JavaScript? Seems like there's a core set of tasks that are pretty common across platforms.


There's not really a lot that's common, and most devs seem to prefer to have their build tool written in the same language they're writing in.


If only someone would Make something like that. Boy, that would really Make my day...


I still prefer bash scripts.

And I think Gulp is much more complicated than necessary for 90% of the builds out there.

My 'bild' module is simpler for most things. Although I will stick with bash.


This story comes off as the emperor's new clothes.

As long as minified compressed HTML, CSS and JS are served up with the least amortized dev and support time, it's all good.


I wouldn't say that. RequireJS vs. Browserify is still a very contentious issue that is up to a developer's preferences.


Is it really, though? Aside from RequireJS's subjectively awful syntax (I have built a few non-trivial applications with it, and find it generally very noisy) - it simply does not do many of the very useful things that Browserify does. Browserify is much more than just a simple dependency management toolkit, it allows you to easily write node-style code and use npm dependencies in the browser. It's practically apples and oranges.


RequireJS allows for asynchronous loading. Browserify doesn't.

That alone is a huge reason for why people use RequireJS.


Seems like it will be an awful lot of work to support older work?


What! I just started using Grunt in a project.

On a side note: I realized some time ago that I really can't always be using the latest thing (at least with JS).


Oooh, look at the shiny keys.


Yay. Another week to waste rebuilding all the builds and fixing all the continuous integration jobs.

Is anyone else thoroughly unexcited about all this playing in the sand? We're building products; why do we spend so much time and money on build tool churn?

I avoid build tools whenever possible (using built-in build commands and things like compass), but when I do need a build, I just use Rake. My toolchain stays the same so I can get shit done.


I'm not sure anyone is seriously suggesting that you take an existing project that's already using Grunt and convert it to Gulp.

But if there's a new build tool that I've heard good things about next time I start a new project, I think it's worth a bit of investigation to see whether or not it might serve my needs better than what I've been using.



