Generation JavaScript (bernhardt.io)
150 points by logician_insa on Dec 30, 2014 | 147 comments



What is happening with the web stack now is a sort of renaissance with an unprecedented pace of activity. It's easy to feel overwhelmed, which is why I started a local meetup group that has now grown to 800+ members, all wondering the same thing: where do we start and what is worth investing our time in? Through conversations and presentations we learn what has worked well for others. Gradually, I expect the "storm, norm, perform" cycle to work its way out in the JavaScript community as consensus starts to build around which tools and frameworks are best. That is already happening to some extent. I don't think bemoaning the state of rapid innovation in the web stack is terribly useful; rather we need experienced developers (unbiased ones, preferably) to curate their collection of JS tools and resources and publish it for the benefit of others. One of the talks I've been wanting to give for a while is simply a "3000 ft survey of the JS landscape" or similar - mapping out the territory in terms of client-side MVC, server side JS, build tools, static code analyzers and tracers, libraries, etc. (If anybody knows of existing efforts to curate such a list, I'd love to know about them)...


Exactly. "Oh no! Too many people are publishing their code! Developers have too many excellent options (along with many awful ones)!"

It's true that our community could use a renewed focus around best practices, and it's true that the stack itself is way more complicated than it needs to be. In that sense, I don't disagree with the point of this article at all, but it's important to remember that when things grow organically they will tend to get messy, and the alternative (somehow structured growth, governing bodies, etc) is far worse.


>Exactly. "Oh no! Too many people are publishing their code! Developers have too many excellent options (along with many awful ones)!"

I fail to see the "many excellent options". Awful ones are, on the other hand, abundant.

And even those that are decent have the other issues the article mentions: they are short-lived, with developers soon moving to the next shiny thing etc.


Organic growth tends to have a lot of weeds, insects and other hazards.

I've been loosely following a lot of projects for several years now... it's hard to even keep a broad awareness at this point. That doesn't even count the platform/systems tooling that I'd honestly rather not think about...

Chef, Puppet, now Ansible... Hadoop, MongoDB, RethinkDB, Cassandra, ElasticSearch, Redis, Couch, etc... Hapi, Express, Koa ... NGINX, HTTP2/SPDY, Varnish, etc... Cloud-VMs, Azure, Amazon, Rackspace, Joyent, Linode, Digital Ocean... OpenStack, Docker ...

It goes on, and on... there's too much to keep up with awareness of... let alone what needs to be dug into at this point. It's honestly a bit overwhelming. I just want to get back to whelmed...

I'd really like to be spending more time coding, but right now I'm spending more time on platform stuff... evaluating packages in npm/github isn't so bad.. much more than that, though, and it gets interesting.


I can't possibly count the number of times people have tried to create such lists. I feel like it's every other day.

Javascript is definitely in danger. I feel like it's just hurtling down a trajectory of becoming the assembly of the web. The library churn is not helping.


> I feel like it's just hurtling down a trajectory of becoming the assembly of the web.

IMHO that's the best possible thing which could happen to it. JavaScript is a hideous, ugly, horrible, broken language. But so is machine code, and it doesn't matter because I can write in real languages and let a Sufficiently Intelligent Compiler™ figure out how to encode my program. If I could write in a real language and have it compiled to JavaScript, I'd be a lot happier about writing browser-based apps.

(I'm still not happy about every single web page demanding executable code in order to, e.g., display a blog entry, but that's a different issue)



I'm thinking of things like TodoMVC. There's an abandoned list on nodejs.org that categorizes many of the earliest npm modules. A lot of those modules have matured and are still in use. MicroJS.com is another one I'd probably reference. Curation is a problem in most languages, especially popular ones. Java definitely has the same problem, and anyone wanting to develop native cross-platform apps is faced with similar bewildering choices.


Some of the storm, norm, perform can be figured out by looking at the stats on different repositories (e.g. https://github.com/search?o=desc&q=language%3AJavaScript+and...).

I think a good attempt at providing a "3000 ft survey of the JS landscape" is http://alpha.ideavis.co/529cc5f

I'd love to see an automated tool that could aggregate stats from a range of JavaScript repositories (e.g. GitHub, NPM, Bower) into something like Dustin's graphic.
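
For a rough sense of what such a tool might do, here's a minimal Node sketch (an illustration, not an existing tool) that pulls the most-starred JavaScript repositories from GitHub's search API; aggregating npm or Bower stats would follow the same pattern:

  // sketch: list the most-starred JavaScript repos via GitHub's search API
  var https = require('https');

  https.get({
    hostname: 'api.github.com',
    path: '/search/repositories?q=language:javascript&sort=stars&order=desc&per_page=10',
    headers: { 'User-Agent': 'js-landscape-survey' } // GitHub's API requires a User-Agent
  }, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      JSON.parse(body).items.forEach(function (repo) {
        console.log(repo.stargazers_count + '\t' + repo.full_name);
      });
    });
  });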


There are A LOT of curated lists on github for JS and everything else, for example:

https://github.com/sindresorhus/awesome

https://github.com/sorrycc/awesome-javascript


I don't think this is about javascript, any more than Kamp's article was about C, or any other specific language. It's about the culture of speed, and the whole notion that getting something working as fast as possible is the primary virtue to be encouraged. I see this viewpoint engrained in lots of the younger developers I work with today, but I don't think it's their fault, and I don't think it is something whose DNA we can trace back to the dot-com bubble. It's been going on longer than that.

Some of you may be old enough to remember the days when managers attempted to measure our output with KLOC/day metrics. That goes back at least to the early 90's and maybe the late 80's. I believe the reasons for this are more fundamental than these articles hint at.

The problem arises, imo, when you have management in a position of relying totally on people whose work they don't understand at all. They are educated to manage risk, and yet they are paying us large amounts of money to essentially function as a black box, out of which comes something that works and makes them heroes, or doesn't and loses them their jobs, and they have few ways of measuring which way the battle is going.

The response is to try to manage ever more closely, treating the process as one of production: X number of people sitting at their computers working hard will produce Y amount of software. Those are the kinds of processes they are trained to deal with in the management schools. They aren't trained to manage inherently risky processes of creation, rather than production. The word "craftsmanship" as used in the linked article is exactly right, but "craftsmen" don't fit into the modern world of value production and they never have.


Yeah. There's a great recent post by Jeffrey Ventrella on "Slow Programming"[1] highlighting the problem that comes from the "fallacy that all engineers are fungible"[2] and throwing a bunch of programmers immediately into an iterative development cycle without proper up-front planning.

I think it's important to note that big reformations, like Agile replacing Waterfall, are reactionary; by necessity they tend to be overly so; to promote change they demonize the old way. Unfortunately there's usually a core of good ideas that get thrown out as well.

I don't think the Waterfall methodology works for most products, but it made up-front planning and careful consideration part of the design process that's missing in a lot of projects today. I think a balanced approach is needed; just enough up front planning to provide a strong, clear vision to the team, consideration for the tools and processes that will be used to get there, and someone with the vision and technical knowledge to lead it. These are some of the points I like to hammer out before launching into a project:

  * Design - what is the product supposed to look like?
  * Testability - how do you intend to test your code?
  * Documentation - how will the code, product be documented?
  * Tooling - what's your preferred dev tool chain?
  * Code architecture - What's your design philosophy?
  * Prototyping - straight to product or prototype first?
  * Delegation - how to structure for future delegation?
  * Security - how to audit the security of your app?
These points often seem to get left as an afterthought, but these broad strokes have worked pretty well for me and still allow for iterative development and functional refinement.

  1. http://ventrellathing.wordpress.com/2013/06/18/the-case-for-slow-programming/
  2. http://www.datamation.com/career/article.php/3757311/The-Myth-of-the-Interchangeable-Programmer-Can146t-We-Just-Offshore-Him.htm
  3. http://pragdave.me/blog/2014/03/04/time-to-kill-agile/


Thank you for this comment! I would also add something like "System architecture" to your points, because that often affects how things are developed.

I'm curious about the details. How, exactly, do you hammer those points out? One big meeting? Multiple meetings? A week or so "workshop"? How detailed are the answers to, eg. the Design, Documentation, and Architecture questions? Wireframes? Mock-ups? Architecture diagrams? Who do you invite to the planning session(s)? Do you re-evaluate things periodically using the same points? Do you run major new features through the same process, or just slot them into ongoing iterative development?

I'd be very interested in any elaboration you're willing to go into.


Sure. First off, I should say that as the lead developer I took ownership of these decisions. Design by committee is certain death. I didn't ask for permission; I sought input from various stakeholders (often one-on-one) and made it clear that while the final implementation might differ from their suggested approach, I would be proposing solutions that took everyone's basic concerns into consideration. After gathering feedback, it would be several weeks before a new iteration was put forward for review. Stakeholders could use and interact with it. More feedback would be solicited and then the next iteration would begin, again taking the feedback into consideration but not making any promises. I had managers who were furious with me for not kowtowing to their demands, but I'd produced a very successful prototype (featured at Blackhat 2013 and Mobile World Congress) and they grudgingly had to admit that the results were better than anything the previous design-by-committee approach had produced.

  * Design - what is the product supposed to look like?
Good design is not part of the DNA of large enterprises. This makes it low hanging fruit for upstart competitors to come in and impress clients with the usability of their "slick" design. Because purchasing decisions are increasingly being influenced by the people who will actually use the product, this gives them a tremendous competitive advantage. In large software teams, it's typical to begin design with the data model, and progress through successive layers of code until the data finally goes splat on the user's screen in a great meaningless pile. Smaller companies, especially ones with a legacy in web application design, have figured out the value of designing the User Experience first, and then developing the lower layers. This is not without tradeoffs; a new use case that you never envisioned can force changes to the data model that break compatibility. I found that working from both ends of the stack (prototyping the UX while simultaneously considering the data model and future use cases) was beneficial. Anyway, it's amazing how much software actually gets built without anyone even thinking about what it should actually look like to the end user, until the very end. When managers come from the era of punch cards and PDP-11, well, this is just how it is. You have to fight that mentality by pointing out the contracts they lost because the product looks like crap.

  * Testability - how do you intend to test your code?
It amazes me in this day and age that there are still product managers who believe automated testing is a waste of time. I set out a goal of having 100% test coverage, which I maintained for quite a while. Gradually the test coverage declined as new use cases needed to be supported. It simply wasn't possible to cover off every error case or alternative configuration of the product, but I still keep coverage between 80 - 90%, with most of the uncovered code being error handling corner cases that are not supposed to happen in production, or code that exists purely for running in a non-production environment. Incidentally I use Mocha and jscoverage for this, but there are lots of options. The important thing is to choose one. Tests are triggered from npm, which is in turn triggered from make, which is in turn triggered from Jenkins every time code is submitted to our repository.
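
Not the actual suite, obviously, but a minimal sketch of the kind of Mocha test that coverage like this is built from (validateUser and lib/users are made-up names for illustration):

  // test/users.test.js -- run via `mocha`, wired into `npm test`, which make and Jenkins then call
  var assert = require('assert');
  var users = require('../lib/users'); // hypothetical module under test

  describe('validateUser', function () {
    it('accepts a well-formed user object', function () {
      assert.strictEqual(users.validateUser({ name: 'Ada', email: 'ada@example.com' }), true);
    });

    it('rejects a user without an email', function () {
      assert.strictEqual(users.validateUser({ name: 'Ada' }), false);
    });
  });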

  * Documentation - how will the code, product be documented?
Initially I set out with a goal of having the project be "self documenting". I settled on having an API that roughly follows HATEOAS principles - the idea being that given the root URL, a user could easily discover and navigate their way through the rest of the API with a web browser. I then used tools like RESTClient and POSTman along with Selenium IDE to automate some REST API developer tests. The output of these could be cut and pasted into an OpenOffice document that I submit to our code repo along with a PDF version. This isn't as fully automated as I would like, but I've got an excellent Install Guide, Developers Guide and User's Guide that can be easily updated and are readily available. They were created early in the project, in tandem with development. Tasks were not considered complete until the documentation had also been updated. QA helped ensure this.
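
To illustrate the HATEOAS idea (a generic sketch, not this project's actual API): a GET on the root URL returns nothing but links, so a client with a web browser or REST tool can discover the rest of the API from that single starting point:

  GET /api

  {
    "_links": {
      "self":    { "href": "/api" },
      "users":   { "href": "/api/users" },
      "reports": { "href": "/api/reports" }
    }
  }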

  * Tooling - what's your preferred dev tool chain?
This is purely a personal preference, but I use Sublime for most of my JS related development. Outside of that, I'm a dedicated vim user. I dislike big IDEs because I feel people and projects become dependent on them and they foster unnecessary bloat and complexity because they don't require a developer to hold the whole design in their head. An ongoing problem is poor quality code getting into the repo and not being flagged until a bug surfaces or a build fails. I wanted to move that feedback much further forward in the development process and make it immediate, to catch problems before they ever got submitted. For that, I wrote a sublime plugin for jslint (Sublime-JSLint) with a lint-on-save capability. Any time a developer saves a file, they get immediate feedback on whether it passes the linter. This gets repeated on the build server, and lint warnings will literally break the build. As a result, all of our JS code is very consistent.
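
The editor plugin aside, the same gate is easy to reproduce in the build itself. A rough sketch, using the jshint module as a stand-in for JSLint (so the options and API here are assumptions, not the author's actual setup):

  // lint-gate.js -- run as `node lint-gate.js path/to/file.js` from make or the CI job
  var fs = require('fs');
  var JSHINT = require('jshint').JSHINT;

  var source = fs.readFileSync(process.argv[2], 'utf8');

  if (!JSHINT(source, { undef: true, eqeqeq: true })) {
    JSHINT.errors.forEach(function (err) {
      if (err) { console.error('line ' + err.line + ': ' + err.reason); }
    });
    process.exit(1); // a non-zero exit is what actually breaks the build
  }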

  * Code architecture - What's your design philosophy?
In addition to web-first, mobile-first, and API-first approaches, you might have strong ideas about MVC architecture, REST API best practices, abstraction oriented architecture, OO, Functional programming, etc. Being trained first as an artist, my preferences are pragmatic. I want strategies that will get me from concept to product as quickly and easily as possible. I've found an API-first approach with an abstraction-oriented architecture to work well. I began blogging about REST API best practices as a way to gather information on the topic. I also wrote a couple of pretty bad "REST" APIs before I started getting better at it. Now there are frameworks like Sails and LoopBack that will guide your API design.
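
To make "API-first" a little more concrete, here is a generic Express sketch (not the author's actual stack or routes): the resource URLs and payloads are settled first, and every client -- web, mobile, tests -- is written against that contract:

  // sketch: the API is the contract; clients are built against these routes
  var express = require('express');
  var app = express();

  app.get('/api/widgets', function (req, res) {       // list
    res.json([{ id: 1, name: 'example widget' }]);
  });

  app.get('/api/widgets/:id', function (req, res) {   // read one
    res.json({ id: Number(req.params.id), name: 'example widget' });
  });

  app.listen(3000);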

  * Prototyping - straight to product or prototype first?
I do believe in prototyping first, but you should be willing to view the prototype as throw-away work. Prototypes are often thrown together quickly and incur tons of "technical debt" almost intentionally. Trying to productize a prototype can be more difficult than simply starting from scratch. A lot of times managers will wonder what the hell is taking so long... they saw a working prototype and they've already got customers lined up to take it, so where's the product? They don't realize the prototype is all smoke and mirrors running off a flat file and your personal Google API key. You may need to explain that the prototype is not suitable for productization, and why. On the other hand, if you do want to go from prototype into production, you need to be careful when you build the prototype not to incur the technical debt of a quick-n-dirty mockup, and figure out the path for getting from prototype to production. Things like replacing the data access layer with something that talks to a real database, and having a proper configuration file, etc.
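
A hedged sketch of what "replacing the data access layer" can look like in practice (module and function names made up): hide storage behind one small interface, so the flat file that powers the prototype can later be swapped for a real database without touching the rest of the code.

  // store.js -- the rest of the app only ever talks to load()/save()
  var fs = require('fs');

  // prototype implementation: a flat JSON file
  exports.flatFileStore = function (path) {
    return {
      load: function ()     { return JSON.parse(fs.readFileSync(path, 'utf8')); },
      save: function (data) { fs.writeFileSync(path, JSON.stringify(data, null, 2)); }
    };
  };

  // a production implementation would expose the same load/save contract,
  // backed by a real database client instead of the filesystem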

  * Delegation - how to structure for future delegation?
API first design is helpful because it provides a sort of contract by which various components can interconnect. Although I initially wrote the API, server, mobile app, and web interface to the product, the API allows any of the client applications to be forked off into a standalone project. Eventually we did this with the mobile app. This is where the documentation started to prove its worth. The point is to think about how the project and the team is likely to grow, and which parts could be handed off to another team, and how to define clear boundaries with a clean interface between them.

  * Security - how to audit the security of your app?
It's often just assumed that applications will be secure, and then of course they aren't. Session hijacking, SQL injection and cross-site scripting vulnerabilities have been present in most of the web-based user interfaces of projects I've reviewed. Enterprise sw developers tend to be unaware of what these attack vectors are and how they work. There's no substitute for being knowledgeable about web security but there are tools like Nessus and Skipfish and various penetration testing tools that can help. I guarantee they will turn up security issues you never knew existed. If you can automate the process, great. If not, making a security audit part of your QA process is important.
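
As one concrete example of the kind of issue those scans turn up (a generic sketch, not taken from any project mentioned here): SQL built by string concatenation versus the parameterized form the mysql driver already provides.

  var mysql = require('mysql');
  var connection = mysql.createConnection({ host: 'localhost', user: 'app', database: 'app' });

  var userName = process.argv[2]; // pretend this arrived in an HTTP request

  function handleResult(err, rows) { console.log(err || rows); }

  // vulnerable: user input is spliced straight into the SQL string
  connection.query("SELECT * FROM users WHERE name = '" + userName + "'", handleResult);

  // safer: the driver escapes the placeholder value for you
  connection.query('SELECT * FROM users WHERE name = ?', [userName], handleResult);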


You're right, but most replies to you have overlooked the customer:

Customers, paying customers, are accepting Javascript apps. Java was never good enough for that to happen except internal to large organizations where aesthetics and user experience didn't matter.

There may be a zillion crappy, similar overlapping Javascript libraries - and to my background it seems like a massive waste of time - but the fact remains that people, everyday people like my mom, sister, uncle and aunt are all spending real money for services accessed through Javascript. It is good enough.

Should I tell them: "You know that website you like so much, e.g. reddit- it would be so much better if it was a native app on your PC written in C++, so, like, don't use it. Demand and wait for someone to re-write it in a proper language with libraries that have a proper pedigree. Then you can run it on a 300 MHz computer instead of 3 GHz."

That'll never work! The javascript/PHP/whatever else stack that implements reddit is absolutely adequate for them to browse cat pictures and interact with their magic tricks communities.

Now, diehards like me cringe because of the waste of CPU, but guess what - CPU power is no longer scarce. There are only philosophical and environmental reasons to wring our hands about using less CPU power. And possibly these concerns are nonsensical. Are we wasting the world's resources making fast CPUs so people can use inefficient sw libraries? Who knows. But either way, the point remains:

Javascript's customers and users, and the services they demand and pay real money for all support Javascript and nothing other than a fanatic religion will convince them to change their ways.

It makes me mental, but I've even been learning Javascript over the last few years. It's about the same speed (and variable scope) as the BASIC I used on the Apple 2 in 1984, but so what, I'm paying the rent with it.


Customers are accepting web apps. They don't care if the apps are written in Flash, js, BrainFuck, Haskell, or morse code converted into asm.js via a transcompiler written in APL.

But development is not a democracy, and the fact that n thousand projects use a certain stack doesn't mean the stack is fit for purpose.

Take the security horror that is Wordpress. It's popular because it's easy to use, but from a professional point of view it's an infosec plague and server killer.

The quibbles about js have nothing to do with its popularity. They're more about the overall lack of professionalism in software development.

The underlying issue is scope. We don't have a cathedral, or a bazaar. What we have is exactly one (1) planetary network, made of a loose affiliation of barely cooperating nodes and applications.

The original Internet infrastructure worked because the RFC process meant that the core features were designed and refined by peers. Hobby coding in the bazaar removes all elements of peer review.

But the alternative isn't a UNIX-style cathedral, it's a revised set of standards leading - eventually, I guess - to a planet-scale operating system and shared library framework that takes away all the cruft by making it unnecessary.

None of the current stacks are the right way to make that happen.

So the issue isn't js - it's more that web stacks have become de facto operating systems for industrial computing projects without any of the rigour, peer review, or oversight of a well-designed industrial gold standard OS.

And it seems that no one - or at least no one that Tim O'Reilly's authors talk to - is even thinking about how to create an OS of that quality.


I don't think it is the culture of 'speed' necessarily, but rather one of 'feedback loops'. With javascript in the browser you can see and experience code-reload after an F5†. There is no compilation time; no real detriment to importing another non-used library if it's not compiled into your code.

As opposed to using a JVM language, where everything is statically compiled into a single container, including another library (or upversioning one) will impact compilation times and oftentimes trigger a complete re-build, often taking minutes before you can see the output of your code.

Minutes versus seconds when dealing with code correctness really helps, and is what is needed most when trying to 'hack it together'.

† Note: This is the same with test suites: karma tests are often instantaneous and spock/junit tests sometimes take 30-40 seconds before they start to run, depending on environment.


>With javascript in the browser you can see and experience code-reload after an F5†. There is no compilation time; no real detriment to importing another non-used library if it's not compiled into your code.

While that may be true, typical JavaScript projects require a compilation step. Plain JavaScript is starting to get somewhat rare, and projects typically use a compile-to-js language like CoffeeScript or TypeScript, or perhaps ES6-to-ES5 compile step. And then you need to minify that code since it's not recommended to run a non-minified "debug" version during development. And if you're project is any complex you're probably also compiling a single .css file perhaps from .less, compiling .html from .jade and packaging it all up in a different public folder.

The point is that typical non-trivial web applications need a build step.
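
For what it's worth, a minimal sketch of such a build step with gulp (the plugin names are the usual community ones; the exact pipeline obviously varies per project):

  // gulpfile.js -- compile CoffeeScript, minify, and build CSS from LESS
  var gulp = require('gulp');
  var coffee = require('gulp-coffee');
  var uglify = require('gulp-uglify');
  var less = require('gulp-less');

  gulp.task('scripts', function () {
    return gulp.src('src/**/*.coffee')
      .pipe(coffee())                  // compile to JavaScript
      .pipe(uglify())                  // minify
      .pipe(gulp.dest('public/js'));
  });

  gulp.task('styles', function () {
    return gulp.src('src/styles/main.less')
      .pipe(less())
      .pipe(gulp.dest('public/css'));
  });

  gulp.task('default', ['scripts', 'styles']);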


Why would you minify your code during development? Isn't minified code for production?


Because minified code functions differently from non-minified code. It helps to detect bugs caused by minification. There's also a general principle which states that the application shouldn't be developed in a "debug" mode, because the debug mode might cause the application to execute differently from production.


As I understand it, minification is basically removing whitespace. Why would that make the code behave differently?


Minification can also involve changing variable names; sometimes there are dependencies on names that are unknown to the minifier. Also, the removal of whitespace can cause lines to combine in interesting ways if there aren't semicolons between them and the minifier doesn't automatically add them.

http://stackoverflow.com/search?q=minification+bug
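
A small contrived illustration of the line-combining hazard: two files that are fine on their own misbehave once they are concatenated (with or without whitespace removal), because the missing semicolon turns an assignment plus an IIFE into a single call expression.

  // a.js -- note: no trailing semicolon
  var logger = console.log.bind(console)

  // b.js
  (function () {
    logger('module b initialised');
  })();

  // concatenated, this parses as:
  //   var logger = console.log.bind(console)(function () { ... })();
  // the bound log function is called with b's function as its argument
  // (printing the function's source), and the trailing () then throws,
  // because that first call returned undefined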


JavaScript minification is much more than that. It renames function parameters, does all kinds of fancy tricks to refactor code and so on. AngularJS is one example where minification breaks the code if you don't use the "special" minification-safe pattern.
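
For anyone who hasn't hit it, the AngularJS case looks roughly like this (standard Angular 1.x pattern, simplified): Angular infers dependencies from parameter names, and the minifier renames those parameters.

  // assumes angular has been loaded via a script tag
  var app = angular.module('demo', []);

  // breaks after minification: $scope/$http get renamed to something like a/b,
  // so Angular can no longer tell which services to inject
  app.controller('MainCtrl', function ($scope, $http) { /* ... */ });

  // minification-safe: the string annotations survive the renaming
  app.controller('MainCtrl', ['$scope', '$http', function ($scope, $http) { /* ... */ }]);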


On the off chance that whatever minification library you're using broke something somehow.


Was about to write the same thing. None of the complaints in the article were about Javascript. It was more about the current habits that have evolved around it. Granted, I think the PHP world has had a good 10 years of dealing with limitless code libraries; it just wasn't as well organized as things are now with bower, npm, grunt...etc.


Shouldn't the response be to look for managers with programming background and (at least some) understanding of the process?

You don't have to be Michael Schumacher to be put in charge of an F1 team. But you have to understand the basic stuff and hire proficient personnel to do the rest.


I think it's just a matter of recognizing what software development is, and what it isn't. Somehow large, well-managed corporations are able to deal with creative teams in marketing, in design. But when it comes to software development, it just seems to me we've been put in the wrong bucket. We sit at keyboards, we type, and make computers do things, therefore we are 'IT workers.'


JavaScript is an awful language, no matter how much some people stress that you can build working things if you are just careful enough and all that. But that is only the beginning; the entire stack is more or less broken. HTML and CSS got repurposed from document markup languages to GUI markup languages. We allowed this awful language to escape the browser and infiltrate our servers in form of Node.js. Microsoft also came up with the clever idea to develop desktop applications and apps with JavaScript, HTML and CSS.

And even the largest selling point of the entire stack - develop once, run everywhere - is far less true than many like to admit. In the past a substantial part of development time has been spent on getting the thing to run consistently across browsers but I fear we are far from the end of the road. Now we have more or less consistent behavior across browsers for all the basic things but you still regularly run into unusable web applications because you happen to have an unexpected screen aspect ratio, an unsupported video codec or missing WebGL support.

Unfortunately I have no good suggestion how to escape from that situation (quickly) but it seems pretty obvious to me that we have a lot of problems to solve.


> And even the largest selling point of the entire stack - develop once, run everywhere - is far less true than many like to admit.

But it's more true than any platform in the history of mankind.

I've noticed a resounding chorus of seasoned developers railing against the web as application platform for over a decade now. But their complaints are always comparing an open standards based platform against some proprietary platform or GUI toolkit. Of course Microsoft or Apple are going to put together a more cohesive app SDK than the W3C. But neither of those behemoths has ever approached the ubiquity of the web, not by a long shot, because every single platform they target has to be developed by them. The web is available everywhere, and if a new corporate behemoth builds the next great platform, they must include web support because that is now considered baseline functionality. I'm surprised more old-school developers don't recognize what an amazing accomplishment this is.

So yeah, JavaScript is terrible, but it's standing on the most ubiquitous platform ever created. And you can't build that platform by thinking to yourself, "what would the ultimate cross-platform application platform look like?" It doesn't work that way. The web won because of its braindead simplicity; that was the thin edge of the wedge. You see being document-based as a weakness, but that's its strength, because documents have a higher utility-to-complexity ratio than apps. Even today when all us software elites are thinking about javascript-based apps, the web is still dominated by a massive substrate of documents, and many apps exist only to manipulate documents, so being document-based and allowing anyone to edit documents was the killer feature of the web. How many network application protocols/platforms/toolkits have been invented since the birth of the internet? Certainly a lot more than anyone will ever be aware of because they didn't have the means to gain traction.

Systems on the scale of the web don't—can't—happen by design. Instead of complaining how terrible it is we need to look at what incremental improvements are possible. Over time this leads to real evolution, and yeah it's slower than we'd like, but there's no other way forward without chucking out all the users with the bathwater and becoming irrelevant.


Agreed about the utility of documents. But that's not the issue. The issue is contorting a document to make it into an application. Mobile app devs don't have to deal with this and the benefits are many. Let documents be documents and apps be apps. I'd like to see a webgl based app toolkit that works like a mobile app toolkit and performs as well as a mobile app. I don't think it will happen though because so many devs have now been raised on html, css and js. They like it.


Unfortunately I have to leave, but I agree with most of your comment. I just believe we should not give up because it seems very hard to come up with and get adoption for a better solution. Continuously patching and bending two decades old technology for another decade or two does not seem a sustainable solution to me. Not that old technology is necessarily bad, SQL is a great counterexample, but unfortunately not everything out there got created with so much thought put into it.


> Unfortunately I have no good suggestion how to escape from that situation (quickly)

Java apps! *ducks* But seriously folks, I'm curious what went wrong there. The JVM these days is very fast, runs everywhere and comes installed by default on most operating systems. I was still a student during the Java heyday, but the only downsides I vaguely recall were poor UI, poor security and poor speed. The last two have been fixed and the first one seems very fixable. Furthermore, you don't have to write apps in Java anymore. It just seems like there's a lot of untapped potential there these days.


One issue I had with Java was the aesthetics of UI elements.

Inheriting the DOM, especially its graphical elements was a boon to JS apps.

Technically Java can/could do all the same stuff, but in the DOM, input, output, graphics and text all flow together. They're too rigidly separated in Java layouts.

While I'm at it, upgrading the Java runtime was always an annoying process, taking time, confusing my sister, etc.


I'd argue that the world has moved on from desktop apps. Maybe that's reversible, but I'm not convinced.


I strongly agree with "the entire stack is more or less broken". Back in 1989 Sir Tim Berners-Lee put a lot of careful thought into the design of a protocol for sharing documents using IP/TCP. However, when Ajax and Web 2.0 got going circa 2004, the emphasis was on offering software over TCP, and for that the HTTP protocol was poorly suited. Rather than carefully rethink the entire stack, and ideally come up with a new stack, the industry invented WebSockets, which was then bolted into the existing system, even relying on HTTP to handle the initial "handshake" before the upgrade. WebSockets undercuts a lot of the original ideas that Sir Tim Berners-Lee put into the design of the Web. In particular, the idea of the URL is undercut when WebSockets are introduced. The old idea was:

1 URL = 1 resource

Right now, in every web browser that exists, there is still a so-called "address bar" into which you can type exactly 1 address. And yet, for a system that uses WebSockets, what would make more sense is a field into which you can type or paste a vector of URLs, since the page will end up binding to potentially many URLs. This is a fundamental change, that takes us to a new system which has not been thought through with nearly the soundness of the original HTTP.

Even worse is the extent to which the whole online industry is still relying on HTML/XML, which are fundamentally about documents. Just to give one example of how awful this is, as soon as you use HTML or XML, you end up with a hierarchical DOM. This makes sense for documents, but not for software. With software you often want either no DOM at all, or you want multiple DOMs. The old model was:

1 URL = 1 resource = 1 DOM

Much of the current madness with Javascript is that developers want to get away from HTTP and HTML and XML and DOMs and the url=page binding, but the stack fights against them every step of the way.

Perhaps the most extreme example of the brokenness is all the many JSON APIs that now exist. If you do an API call against many of these APIs, you get back multiple JSON documents, and yet, if you look at the HTTP headers, the HTTP protocol is under the misguided impression that it just sent you 1 document. At a minimum, it would be useful to have a protocol that was at least aware of how many documents it was sending to you, and had first-class support for counting and sorting and sending and re-sending each of the documents that you are supposed to receive. A protocol designed for software would at least offer as much first-class support for multiple documents as TCP allows for multiple packets. And even that would only be a small step down the road that we need to go.

A new stack, designed for software instead of documents, is needed.


Wow, I have rarely encountered a more ludditic comment on HN. The number of bare assertions alone is bemusing.


You might want to look up the definition of the word ludditic. But I totally agree with the parent comment. JS and friends are too high level to be whipped into shape but too low level to be productive.


adj. of, related to, or characteristic of a Luddite.

Luddite: one who protests against modern labor-saving technology.


Right, so how is criticizing JS ludditic in any sense?


Feel free to point out where I am wrong.


Okay.

> JavaScript is an awful language

This is a bare assertion. The same thing can be (has been) said about every programming language, ever.

> the entire stack is more or less broken.

This is another bare assertion.

> HTML and CSS got repurposed from document markup languages to GUI markup languages.

This is a red herring. There is nothing intrinsically wrong with the evolution of markup to encompass more layout capabilities.

> We allowed this awful language to escape the browser and infiltrate our servers in form of Node.js.

Another red herring. Node.js exists. Nobody is forcing you to use it. It's being used successfully by quite a few folks. Furthermore, it was Netscape (the originators of the language) who originally introduced server-side JavaScript. SSJS has been around in various forms almost ever since there was JS.

> Microsoft also came up with the clever idea to develop desktop applications and apps with JavaScript, HTML and CSS.

They were certainly not the first. The trend started, I believe, with Mozilla's XUL architecture, evolved through things like Laszlo, and is steadily moving towards web components. This trend has been going on for a decade; even in enterprise Java land things like Struts and JSF got on the application markup bandwagon. Quite simply, markup has proven to be a good way to lay out interfaces.

> develop once, run everywhere - is far less true than many like to admit.

There are inconsistencies, true, but there is really no other stack that has achieved as much cross compatibility as the web stack.

> In the past a substantial part of development time has been spent on getting the thing to run consistently across browsers but I fear we are far from the end of the road.

This is somewhat of a "slippery slope" fallacy: 'There were problems in the past, so I fear there will be problems in the future, therefore we shouldn't solve the problems and the whole thing is crap...'. It doesn't hold together.

> Now we have more or less consistent behavior across browsers for all the basic things

Which is incredibly powerful and frankly unprecedented.

> but you still regularly run into unusable web applications because you happen to have an unexpected screen aspect ratio, an unsupported video codec or missing WebGL support.

Very rarely. For experimental new apps yes. But when you look at what projects like Clara.io have accomplished, they have blown away the preconceptions that the web is not suitable for things like high-end 3D graphics development.

> Unfortunately I have no good suggestion how to escape from that situation (quickly) but it seems pretty obvious to me that we have a lot of problems to solve.

Let me make a suggestion then. Contribute to solving the remaining problems instead of berating the technology, or if you manage to find a better alternative, write about that instead.


"There are 2 types of programming languages, those everyone complains about and those nobody uses..."


>> JavaScript is an awful language

> This is a bare assertion. The same thing can be (has been) said about every programming language, ever.

This is just not true. You can always find someone not liking a language, but if you ask people that have used many different languages they can of course rank them from worst to best. And although I am not aware offhand of any study demonstrating this, I also never met any developer seriously suggesting that JavaScript would rank at the top of that list.

>> the entire stack is more or less broken.

> This is another bare assertion.

This is obviously my opinion and I assume everybody is able to mentally add IMO somewhere.

>> HTML and CSS got repurposed from document markup languages to GUI markup languages.

> This is a red herring. There is nothing intrinsically wrong with the evolution of markup to encompass more layout capabilities.

If table based layouts or hundreds of wrapper DIVs are not a sign that the underlying technology is not suitable for the task then I don't know what is.

>> We allowed this awful language to escape the browser and infiltrate our servers in form of Node.js.

> Another red herring. Node.js exists. Nobody is forcing you to use it. It's being used successfully by quite a few folks.

That somebody uses it successfully does not make it a wise decision to use one of the worse languages in a place where you are - unlike in a browser - not forced to do so. And don't get me wrong, everybody is free to use whatever they like; I just want to point out that there are better options. You can use a screwdriver to drive a nail into a board, I won't stop you; I just want to suggest at least considering getting a hammer.

>> Microsoft also came up with the clever idea to develop desktop applications and apps with JavaScript, HTML and CSS.

> They were certainly not the first. The trend started, I believe, with Mozilla's XUL architecture, evolved through things like Laszlo, and is steadily moving towards web components. This trend has been going on for a decade; even in enterprise Java land things like Struts and JSF got on the application markup bandwagon. Quite simply, markup has proven to be a good way to lay out interfaces.

I completely agree that using a markup language is a good way to build GUIs; the bad idea is using HTML and CSS for that, because they were not intended for this and are not really up to the task.

>> develop once, run everywhere - is far less true than many like to admit.

> There are inconsistencies, true, but there is really no other stack that has achieved as much cross compatibility as the web stack.

I completely agree with that, but this should really not imply we should stick with JavaScript, HTML and CSS. We can do way better and we should do so.

>> In the past a substantial part of development time has been spent on getting the thing to run consistently across browsers but I fear we are far from the end of the road.

> This is somewhat of a "slippery slope" fallacy. There were problems in the past, so I fear there will be problems in the future, therefore we shouldn't solve the problems and the whole thing is crap.

No, we should of course fix the problems, but I believe we should at least think about fixing them at the fundamental level instead of continually patching the holes. The obvious problem is of course the adoption of a better replacement and that is why I am not sure if and how this could happen.

>> Now we have more or less consistent behavior across browsers for all the basic things

> Which is incredibly powerful and frankly unprecedented.

>> but you still regularly run into unusable web applications because you happen to have an unexpected screen aspect ratio, an unsupported video codec or missing WebGL support.

> Very rarely. For experimental new apps yes. But when you look at what projects like Clara.io have accomplished, they have blown away the preconceptions that the web is not suitable for things like high-end 3D graphics development.

My point here is that people quite often try to suggest that portability is kind of a unique feature and essentially free with the web stack, which is not true. A simple program is really easy to write in a cross-platform manner in most languages. Then there is a range of applications where the web stack is exceptionally good for building cross-platform applications with very little overhead. At the upper end it again becomes hard and costs effort to get it working across platforms.

>> Unfortunately I have no good suggestion how to escape from that situation (quickly) but it seems pretty obvious to me that we have a lot of problems to solve.

> Let me make a suggestion then. Contribute to solving the remaining problems instead of berating the technology, or if you manage to find a better alternative, write about that instead.

As another commenter pointed out, we know how to build better solutions. What we don't really know is how to get adoption for them.


>This is just not true. You can always find someone not liking a language, but if you ask people that have used many different languages they can of course rank them from worst to best. And although I am not aware offhand of any study demonstrating this, I also never met any developer seriously suggesting that JavaScript would rank at the top of that list.

So because it's not number one it's an "awful language"? That doesn't make any sense and I know plenty of people who enjoy working with javascript. In fact the few people I do know who hate JS are people who haven't done more than copy/paste code from stackoverflow.


If few people hate programming in Javascript then explain this to me:

https://github.com/jashkenas/coffeescript/wiki/list-of-langu...


Firstly, just because someone makes a compile-to-js language doesn't mean they necessarily "hate" JS ... I mean, it may, but it could also very well mean that they prefer their own language instead of JS, but not hate JS per se.

Also, compare this to the number of people who are just using JS normally worldwide; then we'll have some perspective on just how many more people are just fine with JS vs. the people who feel the need to make these.

My guess would be that the number of people who like JS and use just normal JS in their projects is far, far greater than the number of people who use compile-to-js solutions. So in comparison to the former, the latter group could be called 'few'.


You cannot count people who only know JavaScript, because most of them are probably not even aware of how bad JavaScript is; they have never used anything else.


Ok.

> largest selling point of the entire stack - develop once, run everywhere

Rather than being the largest selling point, this is talked about by virtually nobody who actually works with Node.

Easy handling of concurrent I/O is the largest selling point.

See http://nodejs.org/ : "Node.js® is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices."


Develop once, run everywhere was obviously not meant to be applied to backend code but to the client part. But besides that, asynchronous IO is (usually) an OS-level feature and there is really no need to surface these features with JavaScript; many other languages could and can do this as well.


"I have no good suggestion how to escape from that situation"

Everyone knows what the solution is; a sandboxable arch-independent batteries-included bytecode format for the web. The solution is obvious, the problem is in organising the detailed work necessary for such a format to emerge.


This is exactly what I meant - the technical solution is not the hard part, gaining widespread adoption is.


No, the technical solution is the hard part. Designing a bytecode format for the web would not be a trivial undertaking.

Assuming this work was completed, widespread adoption is mostly a matter of browser support and time. I suspect the Chrome team would be open to implementing it after it's been designed (there are parallels with the Native Client work), and other browser teams would be unlikely to resist if there are clear performance and workflow benefits (which there undoubtedly would be). Basically it's a "if you build it, they will come" situation.

To understand the scale of the task, consider the 'batteries-included' aspect. The core libraries for the bytecode format should be standardised (i.e. core library API set in stone for each major version of the bytecode). That's a huge amount of work. Then you'd also have the security testing needed (to ensure the bytecode was securely sandboxed). You'd also need to work on ensuring the bytecode was efficiently implemented for all processor architectures. I don't know about you, but I certainly don't feel qualified to do that.


Is "bytecode format for the web" not basically what Java applets were? Seems that adoption is the problem.


Except it wouldn't really be that hard. If Google and Mozilla were on board that would be enough. But neither has any interest it seems.


Yes, that thing was the JVM, Java... It too escaped the browser and infiltrated the server...


No, it wasn't. It wasn't sandboxed and the libraries were not restricted enough to ensure a reliable target. Java applets were essentially running on the desktop, not in a website.


Write once, debug everywhere.


I know it's a Java joke, but it applies to everything cross-platform, all the more reason to make the core as uncomplicated as possible.


Javascript is definitely a weird language in a lot of ways, particularly given the prototype system, assorted mixed paradigms, and the cultural focus on anonymous function callbacks, but I think it's a bit much to say it's "awful".

Coffeescript in particular makes me feel like Javascript has a Bizarro-world cousin of Ruby hiding under all the C-style syntax.


English is not my first language so awful may not be the word I was looking for. It was intended to mean ugly but not as bad as PHP.


English is my first language, and awful is precisely the word I would use for Javascript. True, not as bad as PHP ('appalling' is appropriate there, I think), but Javascript is not a good language.


Hm, I'm learning JS at a slow pace reading D. Flanagan's 6th edition book. I'm an intermediate level Ruby programmer. The funny thing is that this book is so well written, that makes my understanding of Ruby even better, because I make direct comparisons.

JS and Ruby are similar in a way, the syntax is not all too strange to me so it's easy to run any JS code in my mind, on the fly.

That said I find JS extremely cumbersome compared to Ruby... But if I could easily add JS functionality to my Sinatra/Rails it would be great! :-)


Take a look at Coffeescript (http://coffeescript.org) and you'll see what I mean about there being a Ruby-like language hiding underneath all the cruft.


As engineers, we should recognize very clearly that JavaScript is just another tool that has trade-offs as everything does. We build applications optimized for their use case. Does the bridge over the meandering creek need to be made a suspension bridge of concrete and steel? Of course not!

Time is the true currency of this universe, so we optimize for efficiency as best as we can without knowing the future.


> Does the bridge over the meandering creek need to be made a suspension bridge of concrete and steel

The problem being discussed here is more along the line of building the Golden Gate with wood and plaster.


"Messages" are the currency of the universe: the more and better messages you emit, the more successful you will be. Some messages take no time, some take a great deal of time; message rate and type varies substantially over a human's 30k days of life.


Can I pronounce this "Crockford Syndrome"?


If time is the true currency, then the whole `bower install whatever` approach is basically potential crushing credit card debt.


Yes, exactly. Developing a web application is basically the act of putting square pegs into round holes, over and over...


Maybe one solution would be to use another interpreted language, like python, lua, ruby, dart... and make it use the DOM.

I wonder if rust or c might run in a vm; I guess yes. C compilers can be very small too.


The proliferation of JavaScript libraries is sort of like what happened when Amazon.com came online and you now could choose from +1M books instead of the 10K that were available at your local non-big-box bookstore.

No longer are you dealing with a scarcity situation, where the bookstore was your filter, but an overload situation where you could access everything as easily as a click. Amazon dealt with this by allowing for ratings (one to five stars), reviews and ratings of reviews. It also tracked what books people bought in the end. I think this can be applied to NPM and other non-JavaScript package managers.

We have much of the data in GitHub and NPM to do this, with the exception of proper ratings (stars are not that informative, as they are only positive) and proper written reviews.

I'll be honest, there is a real space to move in this direction towards collaborative filtering of packages based on both implicit and explicit signals.


The need for such a rating system is evident when you see all the closed questions on stackoverflow: "what is the best library to do X"?


Good point. I like to browse github projects by stars and npm packages by downloads which helps to identify the popular software. The search engines are not really optimized for this usage though, and long form reviews would be a nice addition. Right now you need to combine the search ordered by popularity with google or stack overflow searches to get the more qualitative information you would get from user reviews.
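
The raw numbers are at least easy to get at; a small sketch against npm's public downloads endpoint (the endpoint path is as I remember it, so treat it as an assumption):

  // sketch: compare last-month download counts for a few candidate packages
  var https = require('https');

  ['express', 'hapi', 'koa'].forEach(function (name) {
    https.get('https://api.npmjs.org/downloads/point/last-month/' + name, function (res) {
      var body = '';
      res.on('data', function (chunk) { body += chunk; });
      res.on('end', function () {
        var stats = JSON.parse(body);
        console.log(stats.package + ': ' + stats.downloads + ' downloads last month');
      });
    });
  });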


I don't see how rating libraries and frameworks would work. You use a library depending on whether it solves your problem or not. Some libraries solve your problem but might not solve someone else's. And that's fine. That's why there are multiple libraries in the first place. For instance, saying some library is less useful than another on Github because it enforces a functional style of programming is not very useful for others who might like the functional style. Remember, these things are not books, but tools.


Remember that not all books are novels, many are informative books people read to solve their issues or learn specific things -- many non-fiction books are essentially tools.


You can compare ratings of similar libraries.


I disagree with almost every word in this essay. So let's get started.

"Imagine you are a newcomer to Javascript development these days. Which librairies should you pick to get started?"

Who cares. What's stopping anyone from just writing a JS file and adding a script tag to their html, that's how most of us learned it.

"There is, however, a price to pay to the approach of coding through instant gratification."

No there's not. It just depends on what type of coder you are. I like to think a problem through BEFORE coding and then hack; when I come to another problem I pause, think, solve the problem (all away from the computer), then want to code super fast again. Anything that makes the "turn my idea/algorithm into running code" step faster is good; when writing C/C++, anyone will tell you they hate compiling. Why? Because it slows down the "idea-to-code" part. To the author's credit he seems to prefer this approach as well; it just doesn't follow that you have to hack super fast all the time and never think just because you don't have to wait for code to compile.

"The hardest thing in Javascript development today is maybe less the development itself than knowing which libraries to pick."

That's a problem? Seriously, I think many languages would love to be able to "complain" about this.

"It is Torvald’s “Release early and often” without the community smashing the ego of anyone publishing sub-par code."

Here's the way I think about it. If you can't find what you want, build it yourself, maintain it, and use it. Everyone does this in any other language and it works fine; it's working for JS too. See here: https://github.com/sorrycc/awesome-javascript

The only thing I would agree with is that yes, it is ridiculously easy to publish something to npm, so there is a lot of garbage. The days of submitting your email patch are drawing to a close and I for one am happy about it.


I think your comment and perspective hit the heart of the matter. You view your work in isolation. And if you're not working in isolation, you are assuming that your co-workers are in the 99th percentile of coders. If you're making some small webapp or mobile app, then that mentality is fine.

But think about it from a different perspective. Let's imagine you are hired by a company to come in and build a large, web-based, responsive, buzzword, buzzword application for them. It's big enough that it is going to take months to develop and require numerous developers. The strategy of "just writing a JS file and adding a script tag to their html, that's how most of us learned it." is not gonna fly. And if you are spending your time doing "If you can't find what you want, build it yourself, maintain it, and use it" then how are you getting your actual application done? You don't want your devs spending a month of time to build (and then support) some templating library or some such when your real task is to write some kind of CRM application. So how do you go about picking technologies, training teams, building upon existing libraries, etc. in order to build your actual product?


It's just a cycle. Every community goes from overdesigned to overhacked and back, like a pendulum. 10 years ago we were all building large fancy over-architected apps with frameworks like Spring and JSP and Makumba, now we're hacking stuff together on Node with 2-day-old NPM packages.

In a parallel universe, the Ruby people did it the same, but the other way around, started with hacking stuff together with as much automagic stuff as possible, and now they're writing blog posts about Dependency Injection.

Admittedly, it's not a perfect sine, but give it another year or 3 and the JavaScript people will have settled on one of the four package managers, and the most-used frameworks will be well-maintained and strike a healthy balance between hackiness and enterpriseyness. Sure, any popular language will have incompetent people and they won't suddenly delete their GitHub accounts, but in my opinion, we're not that far away from there. Modern JavaScript tools like React or Webpack really have that craftsmanship you write about. In a way, you could argue that Angular v1 had too much of it.

It appears to me that this cycle happens in each language ecosystem, but with a different phase and starting in a different direction. I tend to prefer programming languages that used to be quite hacky and just started to discover that, well, yeah, oh, there's something good to say about all that architecture stuff those old people kept rambling about. IMO, JavaScript is there right now and moving in the right direction. Elixir is taking off in the same direction (fueled by hundreds of disgruntled ex-Rubyists), while Java is currently descending back towards more hackiness (fueled by gazillions of Android programmers who read a tutorial).

I'm still not sure where to put the PHP guys here but hey, at least they have a working package manager these days (and not four).


JavaScript is like working on an old Jeep.

It's not very clean; it's not very efficient; you're dealing with weird nonsensical issues all the time; there's a good chance you will cut off your fingers or mash your knuckles.

But... it's incredibly common, there are a ton of writeups on whatever you're trying to do, and the barrier to entry for getting something done is so damn low that for better or worse, it's not going anywhere soon.

The only thing that's going to rescue us from JavaScript is when every browser ships with a common language-agnostic runtime (a la CLR, JVM). The JavaScript-as-a-Transpiler-Target movement has the right idea and a little bit of momentum, but I think the potential obsolescence, the hassle involved in setup and debugging, and the perceived inefficiency scare most developers away.

Imagine if you could code your web app in whatever language you wanted because they all compiled down to the same VM bytecode...


>the hassle involved in setup and debugging, and the perceived inefficiency scare most developers away.

Hmm, I don't see why there should be any hassle in setup and debugging. You need tooling to do that automatically, and the tooling is almost there (it still requires a bit of manual wiring, but it's only a question of time). Browsers already compile this .js code to run at near-native speeds, and modern browsers support debugging compiled code reasonably well. There are still significant areas to improve, of course.

In fact, you can treat "compile-to-JavaScript" as a form of VM "scriptcode".
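To make that concrete, here's a minimal sketch (file names invented) of the sort of thing a compile-to-JS toolchain emits: ordinary JavaScript plus a trailing sourceMappingURL comment, which is the standard hook browsers use to map the generated code back to the original source while debugging.

    // app.js -- hypothetical output of a compile-to-JS tool
    (function () {
      "use strict";
      var greet = function (name) { return "Hello, " + name + "!"; };
      console.log(greet("world"));
    })();
    // the browser fetches app.js.map and shows the original source in its debugger
    //# sourceMappingURL=app.js.map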


A CLR doesn't solve much, I think. If you look at .NET, there's a single de facto language (C#), and VB.NET died out simply because people gravitate toward the language that has all the help, Stack Overflow answers, etc.

Sure, it makes using other languages possible, but typically that'll just be seen as weird.


Well, there are others, but maybe the JVM is a better example, where Scala seems popular enough.


I think VB.NET was just tainted by Visual Basic. Back when it was released there was a lot of resistance from the VB crowd, whereas all the C++/MFC programmers could see a huge and immediate benefit in C#.


You can't really compare the two situations (.NET and a hypothetical browser bytecode/VM). The reason C# ended up the main .NET language is that it's a brilliantly designed language - far more than you can say about Javascript.


I don't think it's just that. It's impossible to design a "truly general" intermediate language, so what ended up happening is that the CLR was a perfect fit for C# and a not-so-perfect fit for everything else. At least that's what I've been told by my programming languages professor.


+1. My colleagues and I dream that one day browsers will accept some language-agnostic bytecode, to which your favorite language could compile.


This reads a bit like early internet critics complaining about how there are too many voices online. "Why isn't the Times enough for everyone? Why do these bloggers need to be telling everyone what they think? Some of them don't even check their facts!"

> It is Torvald’s “Release early and often” without the community smashing the ego of anyone publishing sub-par code.

The relative lack of Torvalds-esque public shaming is exactly what I love about the Javascript community. There is an acknowledgment that we can all do things our own way, and that it's OK. We don't need a dictator shouting down people "doing it wrong".


And it's such a different set of goals. You're unlikely to break something fundamental with Javascript, which is fantastic. It's a safer thing to play with than a kernel. Put sub-par code in the Linux kernel, and your computer doesn't boot. Mess up your JS and your webpage renders off-center. The severity of failure is so different that it'd be silly to get bent out of shape over it.

Of course, you can do insane things in JS, but that's part of the fun. Even at its worst, it'll crash a program or browser tab that can just be reloaded. Annoying and inconvenient, but the computer's still chugging and no harm's done.


The worst case is not an off-center web page, the worst case is exposing user information or making your backend attackable in other ways. Just because you use JavaScript and it runs on a client machine far away you are not magically protected from screwing up.


Sure. Yes, security is hard. You got me there. Javascript does not decrease your attack surface, and is not, in fact, magic.

That's not a free-pass negation of my point that crashing a browser tab is substantially less severe than a kernel panic, and that kernel code therefore demands correspondingly greater engineering due diligence. There's nothing magical about it.


There's a freedom I had when I didn't know what I was doing: I never knew I was "doing it wrong". Reading Reddit, Hacker News, and various forums over the course of my career has definitely put me in the upper echelons of expertise (and my clients have certainly appreciated that). At the same time, there's a creativity that comes from "I'm going to solve this in the only way I can think of" that people often lose sight of as they gain expertise. When you know it's the road less traveled, you're less likely to walk it yourself, even if there are some great solutions down there that nobody else can see because they're blinded by groupthink.


The solution the article is proposing isn't realistic. You can't reach anything resembling "craftsmanship" without a lot of failed projects preceding it. Craftsmanship or quality is not a binary state, it evolves organically. A lot of projects are shit, and then there are a few golden nuggets that outcompete the shit. This applies to life in general and also to local fields, like programming.

JavaScript has turned out to be one of the most versatile programming environments for this "evolution" to happen. Yes, there are a thousand ways of doing things, and maybe only 10 of them are good, and maybe those are difficult to find. But the solution isn't to shut down the shit so that only 10 things are created instead of 1000. That's just stunting growth, and instead of 10 great things you might end up with just 1, because there was no creative competition (or at least inspiration) in the environment.

So to hell with this thinking, you can't expect to be able to control and lock down a vibrant creative environment and expect the same amount of golden nuggets to come out of it.

I've been writing JavaScript code for 15 years now and the things people are making with it are greater than ever, and many of those things have been reborn out of great ideas implemented poorly. With some kind of "craftsmanship" standard, those poor implementations would never have come to exist, and there would be fewer great ideas in the world.

For the sake of being a bit more constructive in my own rant, I think our efforts should be put into guiding people and helping them find the golden nuggets. And this is happening. There's a lot of tutorials out there. Granted, even the tutorials suffer from the same shit:golden ratio, but finding the golden nuggets is what communities are for.


>* The solution the article is proposing isn't realistic. You can't reach anything resembling "craftsmanship" without a lot of failed projects preceding it.*

And yet older generations of programming seem to have managed it.

>Craftsmanship or quality is not a binary state, it evolves organically

I don't think code grows into craftsmanship, much less organically -- either you have craftsmanship as an attitude from the beginning of writing something, or you don't.


I believe the only reason you think that previous generations managed it is because previous generations of programming were tremendously less open in terms of visibility. I would even go as far as to say that if you could collect and objectively grade all code in the world 10 years ago and today, the collective code today might actually be of higher quality, simply because code is shared much more today. This leads to more knowledge, which leads to higher quality. In the past, that code was simply left behind on someone's hard drive.

Code itself doesn't grow into craftsmanship, because craftsmanship is what a person does. A person gets better with experience (both from doing and from observing) and inspiration. Greater visibility and sharing of code means better experience (through observation) and better inspiration (from people sharing their great ideas).

If you think I'm wrong, the alternative solution isn't difficult: you can avoid shared code repositories altogether and instead write more code from scratch. If you still want code sharing, you can start your own online repository where only thoroughly vetted code is allowed in. The important value to me is that people are not encouraged to censor themselves, but that we push ourselves harder to increase quality, by sharing both our knowledge and our work even more. If navigation is an issue, we need to build better tools.

Multiple choices, competition, and organic growth is healthy for the system – enforcing artificial limits believing it'll increase quality is not.


>We are now in another period with great opportunity for people in the IT industry, and we want to have as many people as possible join the party! Everyone should code. And we have made it so easy! Create a GitHub account, watch some of the many, many tutorials out there, and get going! Yay!

>The sad truth is that something essential gets lost in all of this everyone-can-do-it-yay-lets-all-code euphory. I’m not exactly sure how to call it, so I’m going to pick something that is close enough to describe what I mean. I’m talking about craftsmanship.

Just because many newer developers' code is public on GitHub, it doesn't necessarily mean code quality is going down. Five years ago, when many developers' code was stored on their own machines, there was no way of knowing if their code was any good.

Now, developers are encouraged to show off what they have, because Git shows how you've progressed as a developer. It also allows for developers to help each other and make each other's code better. I know collaborating on GitHub has helped my code quality go up.


Exactly. Is there value in learning to code? Of course. Does learning to code mean you will be able to write quality software? Unfortunately no. Especially if you learned to code in a sort of crash course on web application development. To be honest I've forgotten 90% of what I learned in engineering school, but I feel the triple-variable calculus, statistics, circuit design and assembly language driver hacking somehow prepared me for solving rather tough and unfamiliar problems in the real world.

It's like music lessons - very beneficial in all kinds of tangential ways although very few students will become concert pianists.


Or to put it another way: you can't become a good programmer without first being a bad programmer.


I don't see what the problem is here. Angular isn't going away - I daresay the 1.x will be maintained. EmberJS looks rock solid. Backbone hasn't suddenly become useless just because React is around.

A proliferation of projects is an indication of an exciting and experimental community. If you run off half-cocked and take on an unproven and unmaintained library and build an application on sand then the fault lies with you, not the ecosystem.


> If you run off half-cocked and take on an unproven and unmaintained library and build an application on sand then the fault lies with you, not the ecosystem.

The entire contention is that it's difficult to tell which ones are going to become unmaintained because of the amount of churn.


For which the writer makes an entirely unconvincing argument.


I concur; however, it is burning out a lot of web developers. JavaScript is moving way too fast for many people, with constant new libraries and frameworks coming out. Having to learn and write code in Angular 1.x and then having to rewrite it for 2.x is a pain no one should have to go through. But it is the only way developers can gain the performance advantages of the newer iteration.


It's quite a waste as well.

I am a back end developer who has been venturing to the front end world. Having to change frameworks all the time means never getting to know any in depth. Whereas on the back end I can stick with a solid framework - Django, and the stuff I learned 3 and a half years ago is still relevant. And the SQL I learned ten years ago is also relevant. And I can do a lot more complex querying now than I did ten years ago, when I struggled to learn LEFT JOIN syntax. I don't see any of my JavaScript experience being as long term.

(Actually, the NoSQL movement is causing similar issues on the back end, with so many devs insisting on using the latest new key-value store when a relational database would be the most suitable tool, and time that could go into performance tuning being spent learning a new query language instead.)


> we need to do something

Do something about what?

Lots of discarded projects on Github? Too many people working with Javascript? Fast iteration? Lowering barriers to entry?

Those all sound like good (or at least neutral) things to me. I've clearly missed the point of this post.


In my simplified view, I think we need to do something about the fact that you cannot build an application today that will not be obsolete in 6 months (at best). And by obsolete, we're not talking about obsolete like a COBOL application. We're talking about relying entirely on frameworks/libraries that may literally no longer exist. Think about an angular 1.x application 5 years from now.

If you're building an application that will take a non-trivial amount of time to build and will have a lifetime of years, what technologies would you pick? How do you train your team members? What is your strategy for maintaining it into the future?

If someone built a webapp 10 years ago, it would be PHP, Rails, ASP.NET, Java, etc. And you can find someone who would (possibly reluctantly) get in there and do something with it. But what about the app built using 20 npm libs. What do they do in 10 years?


> What do they do in 10 years?

Where is this mystical app that gets sudden developer attention after 10 years of maintenance-free use? I've never come across a 10 year old application that hasn't been actively maintained that didn't need at least parts rewritten.


> But what about the app built using 20 npm libs. What do they do in 10 years?

Read the code, just like they'd read the PHP/Ruby/whatever? Not sure I understand what problem you're suggesting.


Maybe the point is maintainability?


I'm not sure I really buy it, if you want to use the libraries/frameworks others are using that's easy, there's plenty of content out there that pushes you in the right direction. You can also take a punt on a framework/library that hasn't been maintained, or doesn't seem to have much interest, but that's your choice.


Before picking a library, one should do some research: take a look at the issue tracker, the mailing list, open up the source code, etc. That will give you a good idea of whether it's a good bet or not. I don't think this issue is specific to JavaScript.

Note that the author comes from Java. Developers coming from dynamic languages will be much less annoyed by JavaScript's vibrant ecosystem.


Did you read it?


I did, and I somewhat agree with `richmarr. The author of the blog does have a point, and it's not really wrong, but I've come to the fairly strong opinion that it's an unhelpful thing to complain about publicly. (Which is a slightly ironic stance, I admit.)

As a plant engineer, you're going to have to part out and understand a system in its entirety at some point. And you'll be rating bolts and nuts and greases and such to meet the specification you need. There are a lot of bolts to choose from. It's the same for chips or components when designing a circuit: there are lots of transistors to choose from. For many tasks, many will effectively be interchangeable. Sometimes you'll need to be careful, and other times you'll be working on a system where you'll need to research the manufacturer to make sure they're likely to still be in business in ten years.

But you'd never reasonably complain about the choice. Nor about all the regular folk just building a porch out of some wood and screws from Home Depot. You wouldn't complain about a kid learning circuitry by plopping some random components from Radio Shack into a breadboard (which they'll never move to a proper PCB). And I don't see why you'd complain about millions of people interacting with software the same way. Learning to deal with the glut is as simple as going to a curated aggregator (Home Depot/Radio Shack) or becoming an engineer.

I've seen the sort of stuff non-programmers inflict on systems. I'm not even talking about JS on the web, but like simple relay-equivalent ladder logic on heavy machinery. It's just as bad there. But. Work needs to get done, production needs to move forward, and craftsmanship is expensive and time consuming. There's a place for the cowboy programmer, just as there is for the craftsman.

So it seems a bit silly to complain about the situation. To me, it just means software is becoming an industry just like any other.


To me, it seems more like what's happening is a lot of people think it's less interesting to work on an existing, established product than to rewrite it altogether or make a new one and so we're seeing a steady procession of neat but immature frameworks instead of a mature one gaining steam. This, to me, is a problem because it means programmers waste a ton of time learning the incantations for a new framework instead of focusing on the actual problems they're trying to solve. It's like if you switched between similar programming languages for every new project you started without becoming very familiar with any of them.


The article boils down to: "I don't like Javascript".

Although it's not a great language by design, I think there's a reason why it has reached wide adoption. And don't tell me it's because there are no alternatives.


> Although it's not a great language by design, I think there's a reason why it has reached wide adoption. And don't tell me it's because there are no alternatives.

I'm pretty sure that this is exactly the reason: JavaScript is found in every browser, and only JavaScript is found in any (modern) browser. That's the only reason it's widely-adopted.


I don't think so. He's not saying JS sucks; he's saying there are problems with an ever-shifting ecosystem (which he says is not totally unique to JavaScript).


"And don't tell me it's because there are no alternatives." Then I don't know what to tell you. Why do you dismiss this as a reason?


There were a few alternatives on the client side. Not many, but they do exist: Java was supposed to be the alternative for building web-based applications, but applets didn't gain wide adoption. Flash did better but eventually became unnecessary as the web stack itself got good enough to build things like Google Docs, Processing.js and such. Then there are the "transpilers" like GWT and the various *script dialects that compile to HTML/JS/CSS. On the server side there are tons of alternatives, of course. Even things like Vert.x, which has bindings to a number of languages including JS.

Those are some of the alternatives - at the end of the day most people seem to prefer going back to straight HTML/CSS/JS to build things. They're not perfect but they're pretty easy to learn, they separate the concerns of content, presentation and logic, and they work everywhere (sometimes with a little help from jquery or modernizr).


From a user standpoint, anything that relies on plugins to work is suboptimal, so that rules out Java applets and Flash.

Transpilers are reasonably popular, but they're band-aids over the real issues: they tend to inherit the problems that the shaky foundation lays out, and the mechanisms for library interoperability are not always stable. These problems also persist in the ever-evolving landscape of JS frameworks, which are symptoms of the limitations of JS.

The other side of it is about what works in the commercial landscape. Building around an uncommon tech stack is a risk when it comes to hiring. The widespread nature of JS is part of what keeps it popular, but that's not necessarily based on merit.


Exactly. From a purely technical standpoint, it's better than any existing alternative for scripting in the browser.


So what are the other alternatives for scripting in the browser? Other than VBScript on IE? Hah. (Languages that compile to javascript don't count. At the end of the day, it's still javascript in the browser.)


(Sorry, that was my point. I was being sarcastic)


Sorry, it was late at night. I was dreaming of a world where there were actual alternatives... By that, I mean, a real language that wasn't invented simply for doing form validation, pre-loading image rollovers, and popups, then embraced as the next big thing by small children and hipsters almost 20 years later.


This says something important that I've been thinking for a while, though I couldn't conjure the words as well myself.

> The sad truth is that something essential gets lost in all of this everyone-can-do-it-yay-lets-all-code euphory.

This attitude has been wearing on me as well. Although we shouldn't self-aggrandize, it's hard to do this work well, and so many don't do it well. And I realize that yay-everyone-can-do-it is a correction to the insularity and shitty forms of exclusivity that developed in some areas of software development, but it's sadly not the whole truth.

And you can't really say this without being a "hater" and so forth, especially in the Javascript community. They're very self-congratulatory about their community's ethic of acceptance, which is of course a wonderful thing. But there is no space to offer a viewpoint counter to this or to say "yeah but...". I suppose this will change in time.


> and fueling and attitude wherein the well-being of the developers of a tool, not the one of the users, is most important.

This x 100000

This, in a nutshell, is why I hate package managers. They reduce the friction for package developers to issue updates to the point where those developers turn around and blame package consumers for not using the latest versions, while the consumers spend their lives battling the ever-changing bugs and APIs of the packages.

This isn't just an open-source problem; I first ran into it at a company that, a decade ago, made many of the mistakes the open-source community is making today. Lots of really nice-sounding ideas that increase the productivity of a few individuals working on inner-level packages at the expense of the whole organisation.
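To make the friction concrete, here's a small sketch using the semver package that npm's own range matching is built on (the version numbers are invented): the loose ranges that typically end up in package.json happily admit whatever the maintainer shipped last night, so opting out of the churn means pinning an exact version.

    // npm install semver
    var semver = require("semver");

    // a "^" range -- the kind npm usually writes into package.json --
    // accepts any later release below the next major version...
    console.log(semver.satisfies("1.2.3", "^1.2.0")); // true
    console.log(semver.satisfies("1.9.0", "^1.2.0")); // true -- pulled in automatically
    console.log(semver.satisfies("2.0.0", "^1.2.0")); // false

    // ...so a consumer who wants stability has to pin exactly:
    console.log(semver.satisfies("1.9.0", "1.2.3")); // false -- only 1.2.3 is accepted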


I think this is a very specious connection. I've been using Rails since 2005 and so I got a good dose of vendor/plugins versus bare gem installation versus Bundler (ie. steadily moving up the continuum of package management), and my observation is that library quality and maintenance is utterly orthogonal to package management capabilities. The package manager itself has only helped my life as a developer, the tricky part is picking trustworthy dependencies.


It's not like things were better in the '90s, when everything was about Visual Basic, COM, OLE, CORBA and 're-using' application components. I'm not a big fan of JS, but the whole JS+browser+node.js infrastructure is a million times better than that whole VB+COM bullshit. Even the Darwinian framework situation in the JavaScript world is much better than getting an incredibly complicated, over-engineered, designed-by-committee system like COM rammed down your throat. Don't like a particular JS framework? Pick a different one.


But why should we judge the web stack by comparing it to the worst examples of the past? We should really compare it with the best options available today.


Sure, but the good thing today is that the 'best option' isn't dictated by an elite that 'knows better', and 'best' isn't a question of money anymore, but strictly of quality (with 'quality' not meaning particularly well designed, but solving a specific problem well). A side effect of this 'freedom of choice' is that 99.9% of the available options don't solve a problem for me, and it's harder to choose the right option, but I'm completely fine with that. People just need to stop treating programming languages and frameworks as silver bullets.


The writer commented on how some of his best code was written away from the computer and how all of this fast reloading causes poor code to be written. I'm not sure if he would disagree with me, but I think a lot of it depends on the kind of person you are and the kind of problem you are solving. Some problems can't really be solved well iteratively (voice-recognition and DSP for example), while others can. Similarly some people work better iteratively while others do not.

I think it's similar to writing an essay in some ways. Some people sit down and think for a while, then write the essay in one sitting. They then make some minor grammar corrections and otherwise polish the essay before submitting it. Others (like me), sit down and write whatever comes to mind. Then they rewrite everything, and then they rewrite everything again, etc. Finally they have an essay that they are happy with, make some small polishing changes and submit it.

In the end, both approaches work fine for writing an essay, just like both approaches work fine for writing code. The key, when talking about speed and iterative development, is that the code isn't finished when you have something that appears to work. You have to go back through and rewrite things, refactor, add additional tests and clean the code up.


Where is the issue? When I inspect a library:

- I can see the number of downloads on the npm website and so on (see the sketch after this list).

- I can see the issues and pull requests on GitHub.

- I can do a git clone + npm test to see if the tests still run, or even better, check the Travis badge in the readme.
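For the download numbers you don't even need the website; a rough sketch against npm's public download-counts endpoint (assuming the api.npmjs.org URL format; the package name is just an example) looks something like this:

    // check-downloads.js -- run with: node check-downloads.js <package>
    var https = require("https");

    var pkg = process.argv[2] || "express";
    var url = "https://api.npmjs.org/downloads/point/last-month/" + pkg;

    https.get(url, function (res) {
      var body = "";
      res.on("data", function (chunk) { body += chunk; });
      res.on("end", function () {
        var info = JSON.parse(body);
        console.log(info.package + ": " + info.downloads + " downloads last month");
      });
    }).on("error", console.error);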

I prefer a vibrant community and lots of libraries over a slow and "solid" library development community.

In the end, I need to write code anyway to fix any of the library's issues... With lots of libraries, at least I have a chance that somebody has already forked the library and fixed my problem.

Otherwise, I'll fork the lib, fix the issue, use the fork myself, and send a pull request to the original lib. Once the pull request is accepted, I can just point my dependencies back at the original lib.

Perfect, isn't it? I fix the problem for myself and the community gets a fix back into the original lib almost for free.

So yes, there is garbage out there, but I feel I have a lot of tools to "smell" it and stay away from it before investing too much time.


I think this is better than the "Generation PHP" we had before.

Neither language is sexy at all.

But JS is more versatile. Runtimes are available (and preinstalled) on many servers AND clients. And its OO and FP seem more flexible to me. I get more of a Smalltalk feeling from using JS than I ever got from PHP back in the day.


I can see the problems of the current situation, too. I guess everyone involved does. But what's a viable alternative?

I have seen a few projects that were built completely on their own, without dependencies on external libraries. Not only do those projects turn out to be even less modular (because modularity isn't enforced by a package manager / module system), but the chances that the code ends up unmaintained are also rather high, since developer time is often scarce.

An interesting point is releasing the software. There's no problem with putting it on GitHub, but if projects are committed to official registries like npm or Bower, the author should really put some commitment into it. Currently it gets ever more complicated to find good-quality libraries, becoming a needle-in-a-haystack problem. So a "release ethic" wouldn't be too bad.


Here is my dilemma as I consider whether I should double down on JavaScript and adopt it on the server side. Is there an alternative for the web? And for publishing as a whole?

We are a small shop. Two people, hoping to grow. We publish a web database. Our market is academics, libraries, and scholars. Our data must be kept in XML; it just makes the most sense.

But we are spread over half a dozen different languages. We're halfway proficient at all of them, but as we move to a new project and set one language aside to pick up another, we regress. What we had known must be relearned; the way we were thinking must be retrained.

If we created our web services, our internal APIs, and our production system largely in the same language, we should, I think, become more proficient and quicker in that language. At the end of the day we need a well-functioning, modern web application. So it feels like I have no alternative but to be very current in JavaScript.

Might as well set aside our old Java 5 app. Might as well not create a new Python project, as fun as that is for me, when I am the only one who can understand the Python. Might as well stop pouring all my efforts into a silo. Why not double down on JavaScript?

A key source of inspiration for this line of thinking is a conference presentation I saw given by someone at O'Reilly. They were experimenting with making HTML5 the upstream source for their products. The HTMLBook[1] project seems logical enough. Their output is almost exclusively HTML, be it on the web, the e-reader (EPUB/MOBI) or the PDF. They were using Antenna House to convert HTML to PDF using CSS.

Why mess around with several different languages? Keep it simple, and produce nice things quickly. If they aren't as nice as you hoped, refactor. Without having to retrain parts of your brain for JavaScript because you were working exclusively on a Python project for the last month and a half, we should be that much quicker.

[1]https://github.com/oreillymedia/HTMLBook


I don't think this problem involves programmers at all except in their initial decision to use JavaScript. And those decisions would need to be examined on a case-by-case basis, not as a generational critique.

Companies need to team up to put resources behind projects they like and organize their efforts in a sustainable way. One effort that's proven effective has been Apache: http://www.apache.org/foundation/how-it-works.html

Their foundation ensures projects have a good mix of contributors and aren't driven by the whim of a sole corporation or a few hobbyists who would never be replaced when they leave a project.


"Oh but if we can only stop the dirty unwashed masses from sullying our pristine discipline..."

Said any group that has implemented certifications to make entry into their discipline more difficult for newcomers. "Craftsmanship certification" is something I wouldn't be shocked to see becoming a business in the next 10 years.


Speed of innovation seems to be bothering the author... really not the right industry for him to be in.


I had the chance to debug a NodeJS application that was based on a framework with a lot of low-quality dependencies, which in turn had a lot of low-quality dependencies of their own... Long story short, it was anything but fun. I think the NodeJS ecosystem needs a quality bar.


Well, I was just about to publish my first library to npm, but I just changed my mind. I have enough performance anxiety as it is, and when I know people think I'm destroying the scene with my "sub-par code", I definitely won't publish.


Publish it anyway!

There's a worst case of no one noticing, which will hopefully be a great relief for the anxiety (and yet you can say you published your code), and a next-worst case that it's never used and simply serves as inspiration for a better library! Of course, the best case is that people use your code, modify it, expand it, and build upon it. You'll be credited in their libraries as part of their inspiration, and the state of the art moves forward.

There are many that want deeper curation on quality; it's hard to deal with all the noise out there. But it's critical to remember that we can always layer curation on top of the bazaar at no additional cost. As the scene settles into its grooves, the better curators will stand out.


Aw, come on, you really should. You know what happens when you publish something to npm? Approximately nothing. Your module gets in there with a bazillion others. Maybe someone will use it. Maybe not. Man if you could see the lame-ass library I first published to NPM you would laugh. The only way you get better is by putting yourself out there and just keep doing it.
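For what it's worth, a first module can be as small as this (the function and package name are made up) - there's no minimum size or cleverness requirement:

    // index.js -- the entire "library"
    // exporting a single function is a perfectly legitimate npm module
    module.exports = function shout(text) {
      return String(text).toUpperCase() + "!";
    };

    // a consumer would then just:
    //   var shout = require("your-package-name");
    //   console.log(shout("hello")); // "HELLO!"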


I don't really follow the line of thinking this article seems to be part of. It just sounds like the cool CS kids lamenting how the web used to be cool. The masses have discovered it and are contributing to it now. Wasn't that kinda the goal?


Wow, this thread is a tough read for someone who ranks JavaScript as the best high-level language in common use today.

Haters gonna hate!


Solution: Think for yourself


Andy Bernhardt (not related) gives the best arguments against JavaScript. The fact that there's no integer type is really crazy to me.
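For anyone who hasn't hit it yet: JavaScript has a single number type, the IEEE 754 double, so a quick console session makes the point:

    // every number is a 64-bit float; there is no separate integer type
    console.log(0.1 + 0.2 === 0.3);                         // false
    console.log(0.1 + 0.2);                                 // 0.30000000000000004

    // whole numbers are only exact up to 2^53
    console.log(9007199254740992 + 1 === 9007199254740992); // true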


Jeff Atwood stated in 2007 that any application that can be written in JavaScript will eventually be written in JavaScript. That's what we have today.


That doesn't really address the point of this blog post at all.



