As a meta observation, I noticed this was submitted by user luu, and his website also has a blog post with a similar theme ("boring languages"[1]). I therefore wonder if the "boring vs exciting" advice partly depends on personality: if a person already tends to prefer conservative technology, external advice advocating "boring tech" will naturally resonate with them.
I think choosing boring technology makes sense, but I'll give a contrarian view anyway.
A lot of people like new and shiny things because they're interesting and spur the imagination for future work.
For example, when the new Altair 8800 made the cover of Popular Electronics[2] in late 1974 (the January 1975 issue), an excited Paul Allen showed the magazine to a 19-year-old Bill Gates. The "boring tech" advocates would have told them they were wasting their time on the "new fad of microcomputers" and should look at boring IBM System/360 mainframes instead. Those IBM machines had been around since the 1960s and were more proven. The problem is that Bill and Paul weren't interested in those mainframes. The choice wasn't really proven vs. unproven; it was interested vs. uninterested.
Similar situation with WhatsApp in 2009. Apple had just released the iPhone SDK in 2008. The older tech was PalmOS, Windows Mobile (built on Windows CE), and Symbian; those legacy mobile operating systems had been around since the 1990s. It didn't matter to Jan Koum that the new iPhone didn't have a track record. It was the more interesting platform to work on, and he made a bet to write an app for it.
From what I've read, Google's first AdWords server in 2000 was built on MySQL. MySQL was released in 1995, so choosing a relatively new 5-year-old technology was arguably riskier than picking the traditional Oracle RDBMS, which had been around since the late 1970s.
It's OK to choose new and unproven tech if you have a coherent thesis for why it makes sense to try it. You also have to be willing to pay the price if your bet turns out to be wrong. You can also deliberately choose boring tech in as many areas as possible, freeing up brainpower to aggressively bet on new, unproven tech in the particular area where you think it will make a difference.
Also, your role dictates your freedom to use new and unproven tech. If you're an employee at a mature and conservative company, you'll be constrained to choose boring technology to minimize risk. (This reinforces the saying, "Nobody ever got fired for buying IBM.") On the other hand, if you're an entrepreneur, there's a good chance you'll need to take a calculated risk on new, exciting technology that (1) is unproven, (2) has little or no documentation, and (3) lacks the tooling to make implementation easy.
At one point, all of today's "boring technology" was new and interesting. In 2019, what's some new and unproven tech that we should look at?
> For example, when the new Altair 8800 made the cover of Popular Electronics[2] in late 1974 (the January 1975 issue), an excited Paul Allen showed the magazine to a 19-year-old Bill Gates. The "boring tech" advocates would have told them they were wasting their time on the "new fad of microcomputers" and should look at boring IBM System/360 mainframes instead.
I remember reading a book in the late 90s (written in the early 90s) about Microsoft's history, and that is exactly the sentiment computer people had toward microcomputers at the time.
That's a false analogy, because the micro was a category changer. It was literally the basis of entire new markets.
A lot of people saw that coming.
A lot of other people - like DEC and DG - didn't. IBM mostly didn't, but was lucky enough to have a small division that did in spite of the culture around it. (And they lost it. Too bad.)
Node, Mongo, Clojure, etc were never category changers - they were solutions made by people who like to tinker for other people to tinker with. For the sake of tinkering. Hyped into orbit with industrial quantities of hypeology.
If you're trying to Get Shit Done, they're mostly a disaster. (Clojure less than the others - but it's still not solving the problems that actually matter in a production context. Not really.)
Bottom line is micros had an obvious business benefit - low cost, ease of installation, cheap software - that was never going to be matched by the mini and mainframe markets.
Node etc don't have an obvious business benefit at all. They appeal to a certain set of developers. But for most applications they don't cut development time, minimise bugs, simplify maintenance and bug fixes, de-complexify system architecture, simplify new hire onboarding, streamline the development process, scale with minimum effort, or any of those other boring requirements that actually matter when you want to get good working software out the door.
There is a benefit of hindsight here: the micro, in the beginning, was a bit like 3D printers now. A nice toy for enthusiasts, but that's it. It took the GUI, the word processor, and the spreadsheet to get them out of the toy category and into the useful category.
Mainframes etc. could also deliver this, the main differentiator being price. IBM was dreaming of a mainframe for each city and phone-like terminals in every company. Minitel and the internet prove the vision was right; only the price wasn't.
The category killer needed to be cheap as well as useful.
Node also has an obvious benefit: the same tech in frontend and backend, hence fewer devs needed. A.k.a. cheap, or at least it seems so in the short term. History will judge us both on the long term, I guess.
> Node, Mongo, Clojure, etc were never category changers - they were solutions made by people who like to tinker for other people to tinker with
Rust is the killer micro of software development. It's all about getting rid of pointless hype and helping you "ship good working software out of the door". That's why people don't get it, and dismiss it as a "toy".
I'm not commenting on the analogy; my comment was specifically about the part I quoted, since it sounded to me like the poster assumed that is how "boring tech advocates" would have reacted: I confirmed that this is exactly how they did react.
Outside of that, I sit on the fence of "it depends", with a bias towards conservatism. After all, I mostly write desktop software in C and (Free) Pascal, I do not care about web development at all, and I see smartphones as a neat distraction that has overall done more harm than good to the stuff I like (desktop software and UX, primarily). Of all the new languages popping up here and there, I find Go the most interesting (I'd also have an interest in D, but Free Pascal provides pretty much everything I'd need from D outside of the crazy metaprogramming, which I'm not sure is a good thing in the long term).
OK, actually I might sit a bit further towards conservatism than I initially thought :-P.
> If you're trying to Get Shit Done, they're mostly a disaster. (Clojure less than the others - but it's still not solving the problems that actually matter in a production context. Not really.)
Hrm, as someone who has written lots of Clojure code over the years, including a few business-critical systems, I'm wondering what leads you to believe it's not a language for getting shit done.
I can spin up a web service in just a few minutes, drop in the middleware I need, route requests, spit out JSON, talk to DBs, and in a pinch, leverage libraries from the Java ecosystem for any missing pieces to the puzzle.
Deployment, dependency management, testing, etc... are all fully solved problems, and very mature technologies underpin the pieces that matter regarding battle tested services (jetty, jdbc adapters, etc).
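To make that concrete, here's a minimal sketch of such a service. The library choices (Compojure, ring-json, the Jetty adapter) are mine for illustration; any of several common stacks would do:

    ;; deps: compojure, ring-json, ring-jetty-adapter
    (ns example.core
      (:require [compojure.core :refer [defroutes GET]]
                [ring.adapter.jetty :refer [run-jetty]]
                [ring.middleware.json :refer [wrap-json-response]]))

    (defroutes app-routes
      ;; route a request and spit out JSON
      (GET "/health" [] {:status 200 :body {:ok true}}))

    ;; drop in the middleware we need: serialize response bodies to JSON
    (def app (wrap-json-response app-routes))

    (defn -main [& _]
      ;; Jetty, one of those battle-tested Java pieces, does the serving
      (run-jetty app {:port 3000 :join? false}))

Start it, hit /health, and you get {"ok":true} back. DB access and any further middleware get added the same way.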
Not trying to be a zealot here, but I also hate to see misinformation permeating the thread.
> But for most applications they don't cut development time, minimise bugs, simplify maintenance and bug fixes, de-complexify system architecture, [...] scale with minimum effort
I honestly have yet to see a Java or C++ codebase that is better on any of these fronts than a Clojure one.
> they were solutions made by people who like to tinker for other people to tinker with. For the sake of tinkering. Hyped into orbit with industrial quantities of hypeology.
This seems to lack context of how we were doing development at the time these technologies picked up.
One should be careful not to apply the "just keep doing what we did in the 90s/80s" argument to every piece of technology not taught in academia. I've used NoSQL databases in cases where traditional RDBMS would have fallen over and exploded.
> From what I've read, Google's first AdWords server in 2000 was built on MySQL. MySQL was released in 1995, so choosing a relatively new 5-year-old technology was arguably riskier than picking the traditional Oracle RDBMS, which had been around since the late 1970s.
Well, they could've also used Postgres (or its ancestor Ingres), but at the time it was far less used than MySQL and far more conservative. MySQL was probably chosen because by 2000 it already had a huge community, way bigger than most of today's shiny graph databases have.
I remember MySQL being all the rage when it came out because it was "fast". I don't remember what it was being compared to, but its reputation was that of a fast database. I remember the Postgres crowd bemoaning MySQL's lack of ACID compliance (this was even before InnoDB came out), and the response was "yeah, but it's fast".
IIRC, MySQL already had built-in replication support around that time (it shipped in MySQL 3.23). This alone may have been a deciding factor. Postgres didn't provide built-in replication until about a decade later, with streaming replication in 9.0 (2010).
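For those who never ran it: early MySQL replication was essentially the replica tailing the master's binary log, and setting it up was a few lines of config plus a couple of statements. A rough sketch of the era's setup (hostnames and credentials below are placeholders, and this is the old pre-8.0 syntax):

    # my.cnf on the master: give it an id and enable the binary log
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # my.cnf on the replica: just a distinct id
    [mysqld]
    server-id = 2

    -- on the master: create an account the replica can connect with
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';

    -- on the replica: point at the master's binary log and start
    CHANGE MASTER TO
      MASTER_HOST = 'master.example.com',
      MASTER_USER = 'repl',
      MASTER_PASSWORD = 'secret',
      MASTER_LOG_FILE = 'mysql-bin.000001',
      MASTER_LOG_POS = 4;
    START SLAVE;

That's asynchronous and statement-based, with all the caveats that implies, but it was there and it worked.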
[1] https://danluu.com/boring-languages/
[2] https://www.google.com/search?q=altair+8800+magazine+cover&s...