Choose Boring Technology (boringtechnology.club)
1357 points by luu on July 2, 2019 | 344 comments



Always glad to see this making the rounds, as it has been influential for me, especially the perspective of "Happiness comes from shipping stuff".

I often think about this part, in particular:

>Then we walked away. We didn’t do anything related to activity feeds for years after that. We barely even thought about it.

>Then one day I said, “hey, I wonder how activity feeds is doing.” And I looked at it and was surprised to discover that in the time since we’d launched it, the usage had increased by a factor of 20. And as far as I could tell it was totally fine.

>The fact that 2,000% more people could be using activity feeds and we didn’t even have any idea that it was happening is certainly the greatest purely technical achievement of my career.

I get a huge kick from making something, and then forgetting about it. For me, implementing something that provides value with near-zero maintenance for years is the ultimate sign that I've done well.


For me, implementing something that provides value with near-zero maintenance for years is the ultimate sign that I've done well.

Such a good point it's worth repeating. We have a bunch of small tools whose code might not exactly be brilliant or adhere to every good practice, but after 10+ years they still just work without any surprising behavior or bugs, and they still build/deploy with the click of a button, so to speak. That's just nice.

Does make me wonder sometimes what exactly all the other knowledge I've gathered since then is good for. Some of it is definitely wasted on shiny stuff. But most of it is software architecture, and with respect to the subject I'd translate that as 'how do I make this large-scale application a combination of all those small, nice tools, and make that combination itself also work like one of those small, nice tools'.


I've found that systems usually get robust over time - most issues in commonly run code paths get ironed out in production, and if we don't keep modifying the code, and the system's external environment remains stable, things chug along. It is the greatest feeling though!

We have a DOS application (inventory, accounts etc.) that has been in operation at a couple of retail stores for about 15 years now. No major change after the second year.

This transiency of our work as programmers is something I am yet to come to terms with. I can't recollect where, but there was an interesting consolation in that while the code might be transient, the value created was real, and that's what matters. But the code is an artifact of our waking hours; it matters, like memories matter.


Having been on both ends of this (developer and user), I've seen all kinds of reasons that bug reports drop off, after the first couple years, that have nothing to do with software quality improving. A few actual examples I've seen:

- I reported 5 bugs, and they were all ignored, so I'm not going to report any more now. It'd just be a waste of my time.

- I reported a bug, and in the process of fixing that, the developer broke something else even worse. (In one case, it turned a "delete some" button into a "delete all" button which caused a minor catastrophe for all users.) Now I want the developer to not touch anything. I know what little bugs exist in the system, and I've come up with my own ways to work around them, thank you very much.

- As a consequence of some other bug fix, it came to light that the developer has no backups. (I don't have sufficient pull to fix that underlying problem.) I want the developer to not touch anything.

- The users aren't the buyers, and it's easy for the buyers to know where to report bugs, but that information is hidden from the users. As a result, all of the bugs that cropped up during demos (for or by C-level execs and VPs) are fixed, but nothing beyond that.

- We brought on new users, and they don't know that the software is custom built (2 months ago!), so they don't know it's possible to file bug reports. Or they don't know where to report them. Or they're afraid it's going to be onerous. Whatever the reason, they think this program has always been around and can't be changed.

- Maybe the process really is onerous. One company has gotten increasingly naggy about all bug reports including a 1/2GB blob of logs and other PII that takes several minutes to generate. Even when I just want to report "there's no way to do X, and there should be a way to do X", I get nagged 3 or 4 times to include this diagnostic blob, and it's implied that my bug won't get much attention without one. I only rarely report bugs to that company these days.

- Since the early days of a popular website, the keyboard shortcuts were supremely annoying. In the interests of being supportive of that great new venture, I only reported issues which caused the system to be completely unusable. Later, when the major issues had been ironed out and people did report that stealing and remapping all the standard OS text editing keys was maybe not the nicest thing to do, we were told the system was 3 years old already and by now someone else "would surely complain if it was an annoying problem".

In other words, everyone from users to developers comes to depend on the status quo. The softness of software is a benefit during initial development, but mostly just a liability later. We value the status quo even more than quality.


You were still deploying a DOS application in 2004? Do you mind telling us more about the reasons why?


That was the only programming environment (CA-Clipper) I knew at the time. VB6 was a thing, but I had some GUI-antipathy, coming from the older, "purer?" world of DOS. Also POS terminals needed first-class keyboard accessibility. SQL wasn't welcome either - you had to give up the fine grained control of row-level cursors in the flat-file xBase database and use a rather unwieldy looking language.

I would still love to go back to the ergonomics of building user interfaces in DOS. No mouse, no events, no callbacks - just straight imperative programming where everything, including user input, blocks. Nothing async - even network communication was thru Novell Netware file writes, using shared locks. And a single screen to design: 25 rows and 80 columns, and just 16 colors.

After doing GUI work for many years after DOS, thru VB6, jQuery DOM manipulation, Angular etc., ReasonReact and Hooks are the closest I've come to recapturing the ease of building UIs again. I'm also looking forward to trying out concur one of these days (https://github.com/ajnsit/concur-documentation/blob/master/R...) -- it is a Haskell GUI library that lets you do evented handling in an imperative-looking manner.


Well, if you had been using Turbo Vision, there was mouse support, callbacks, events, the whole lot. And async would just be a couple of chained interrupts.

I also used CA-Clipper, Summer '87 and the OOP variants 5.x.

Most of the stuff did use a mouse and some TUI navigation, with menus and so on; we had a couple of libraries for it.

Doing those forms that way was pretty neat.


I had a Pascal Turbo Vision accounts app. The client decided to share the data between two machines over a 10baseT network. With the exception of an occasional but fully recoverable write issue (the user just had to retry their last save operation), I didn't have to lift a finger to get it past Y2K. Mouse, printer (serial and parallel) were no problem. Good times.


Good times indeed!


Yeah I used to look longingly at Turbo Vision GUIs, but C++ was scary at the time. Do you think building TUIs with TV was qualitatively better than approaches popular today? Any insights why?


I used Turbo Pascal's variant of Turbo Vision.

As for building TUIs, not really, other than the usual Clipper entry forms.

For example, when I moved from Turbo Vision (MS-DOS) into Object Windows Library (Turbo Pascal for Windows 1.5) I did not regret it.

One thing I do concede: a full keyboard experience is much better for data entry forms, and on GUIs the mouse is given too much focus, although the same approach could easily be implemented as on TUIs.


Nice to hear that!


It’s like a plumber or a home builder. You only go back if something isn’t right.


> but after 10+ years they still just work

I think this is also a mindset of devs who are committed to a job long term. Many places seem to churn through people every few years. If I had to look for a job every year or two I'd be doing resume-trendy "best tool for the job" stuff too. If I knew I'd be on the same system in 2029, I'd be keeping things easy to maintain.


And in all fairness, many people seem to churn through places every year or two because (among other reasons) they value newness over longer term projects.


"If you’re giving individual teams (or gods help you, individuals) free reign to make local decisions about infrastructure, you’re hurting yourself globally. It’s freedom, sure. You’re handing developers a ball of chain and padlocks and telling them to feel free to bind themselves to the operational toil that will be with them until the sweet release of grim death."

That last sentence really should be, "You’re handing developers a ball of chain and padlocks and telling them to feel free to bind you to the operational toil that will be with you until the sweet release of grim death." They can (and will) pack up their tents and move on well before any consequences appear.


Exactly. And then the poor sods who come along to pick up their (frequently undocumented or poorly documented) train wreck have to try to keep it running, or rewrite it.


I value stability, but the companies I've worked for haven't provided any. There's definitely two sides to the coin. Would love to find a job that deserves long term commitment.


Do not agree; changing places has a very high cost. People change places because that seems to be the only way to get a meaningful raise.


That is one reason people job hop. However I’ve known lots of people who also admit to just routinely getting bored after a couple of years.


A lot of companies aren't set up for long term existence. They want an acquisition in the next 5-10 years.


The code that I've shipped that, 10 years later, still runs at previous organizations with no support needed is really something I'm very proud of when I look back.

At one point we actually wrote an AJAX site that worked in IE6, Firefox and Safari (pre-Chrome), before Prototype, jQuery or JSON was even a thing...and it still worked perfectly 10 years later. All hand-coded JS and backend PHP + Java.

It's one of the reasons that I get kinda shocked when I see people look at code on GitHub and then avoid it because it hasn't had a recent commit. It's also one of the big reasons that I'm a fan of Elixir, because the language is approaching a point where its creators consider it "complete". Boring and stable is the goal.


> it still worked perfectly 10 years later. All hand coded JS and backend PHP + Java.

Is the server running ten-year-old PHP & Java or did you manage to write code not affected by any language changes? I would expect some changes to be necessary for PHP 5.2 era code to keep working even on PHP 5.6.


I know it was still running but I don’t know about the hosting infrastructure upgrades after I left. When it was deployed it was PHP4.


I built a custom ticketing system for a hosting company, and maintained it over 6 years before ultimately moving on.

After the first 6 months in production it basically sat there for 4 years with no bugs, no dropped tickets/emails, and so forth. One of the things I was proudest of is that no one ever EVER went into work hoping the ticketing system would be up.

They eventually started asking for changes due to growing as a company, but I love it when my work is so stable people don't even stop to consider that it might not be available that day. During all that time, it was only ever down or malfunctioning when the infrastructure had problems (something I had no control over and a major reason why I eventually moved on).


Beautiful code denies its existence so perfectly you forget you wrote it.


That is a lovely way of putting it.

The unfortunate flip side is that everyone else forgets about it too, and one day there's no one left who understands it any more.

I came to the conclusion that software which just works invisibly can have a shorter useful life than software that demands some attention every now and again. So don't make your software invisible!


> "Happiness comes from shipping stuff"

Actually, the rest of your comments points to: "Happiness comes from shipping good stuff"


"The way to ship good stuff is to ship lots of stuff and keep the good bits" - apologies to Linus Pauling


Or perhaps to build lots of stuff, and only ship the good bits.


If you don't ship it you don't know if it is good or bad.


You don't know for sure whether it's good. You may know for sure that it's bad.


Shipping it over to user testing will help.


Maybe more that if you ship bad stuff you don't get to keep shipping stuff, you have to stop and worry about keeping the bad stuff running.

Be happy by shipping stuff, and keep shipping stuff by shipping good stuff.


I'm always amazed at the lifespan of some of the code I've written over the years. I wrote an API that we use internally in 1996 that is still being used in all of our new production code in that language in our organization. I've learned a lot in the intervening 23 years, so looking at some of that code can be a bit jarring. We've had to adapt it a bit when our environment changed a few times, and we've certainly added to it, but I don't think the core code has had any problems that I know of this century. Millennium, I guess, technically.

Part of me really wants to rewrite it, but it doesn't make sense from a developer-time perspective (and I don't want to spend the time tracking down the inevitable bugs again).


> Happiness comes from shipping stuff

But Buddha says that the road is more important than the destination ...


Shipping stuff is the road.


Does he?


I know the comment you replied to was in jest, but he did actually say quite the contrary (at least according to Theravada, one major Buddhist tradition): He in several places said the point of all his teachings was to help people put an end to suffering; he only taught the particular things he taught because according to him, they were the best/only way of reaching that goal.

Some schools even go so far as to say that the Buddhist path is something to let go of as well at the very end of the process in order to truly reach enlightenment. It's only a tool to get there.


>Some schools even go so far as to say that the Buddhist path is something to let go of as well at the very end of the process in order to truly reach enlightenment. It's only a tool to get there.

Interesting view (we share the same one). To understand it more simply, think of a big ocean, with enlightenment being an island. This is where the four stages get their names (stream-enterer, non-returner, etc.). Once you reach the island, the boat is no longer useful (and a boat that is 2,500 years old as a tradition is a pretty good one).


Systems that provide value over long periods of time with zero input are the opposite of tech debt. Compound interest?


Yesterday I reviewed my first script, which went global about 8 years ago. I think it has had one update since my original deployment. =)


In 2017 I got a call from someone saying "the system's down". It was code I'd deployed for them in 2002. Crazy reviewing some of that code. Some was really good, some was... not so much. :)


My favorite on these old systems: "Your software has a bug."

No, at this point there might be some unanticipated behavior, but most likely there is either a piece of hardware going bad or you have changed your procedures and are no longer using the software the same way.


A huge part of our VC bubble in OSS infrastructure (NoSQL, clouds, middleware, automation) is fueled by a generation of technologists not choosing boring technology.

This is a great presentation, and great advice.

What this piece misses are the marketing, hiring practices and incentives in that capital pool that is fueling FOMO (fear of missing out) as the main driver of our technology trends.

Most people orbiting IT want to be a part of something that changes the world: it’s their ticket to a higher-paying job elsewhere, bragging rights at a conference, internet fame/notoriety via blog posts, etc. This is how we wind up with Service Mesh proxied MongoDB on Kubernetes as THE ANSWER, and all the ensuing political battles / turf wars to fight boring alternatives as “yesterday’s news”.

It would be nice to see a greater focus on business outcomes, but I find few have the patience or bandwidth to truly track the success of a technology choice, unless you’re a larger company making a periodic cash bet with a business partner (eg software vendor). And even then, failure can be swept under the rug sometimes.

It requires strong leadership and management (or a great culture) to encourage your teams to communicate tradeoffs and make these sorts of “boring tech” decisions without it being more about individuals playing a turf game.


The thing that is missed with this argument is that tech just enables the business.

If a company is boasting constantly about how it moved from X to Y, they either have too much money, or none at all. If they have the luxury of re-writing a core business system every 6 months, we can glean two things:

1) they have terrible leadership

2) their teams can't agree to support each other.

One thing that is really dangerous is thinking that shipping _any_ code is better than shipping the _right_ code. It's very easy to think that your teams are being productive if the only measure is shipping code.

It soon becomes a circle of deploy-refactor-rearchitect-urghLegacy. You feel productive because you are constantly creating new tools. But they do pretty much the same thing, just on a different platform.

You are correct to observe that having a "boring" stack requires lots of hard work. It also moves against the current "dump anything into prod, because, docker" mindset. Like a newspaper, it requires planning, editing and a style guide, with enforcers and gatekeepers.


In my resume I used to list a bunch of technologies. These days I write the outcome of my projects. Eg 'My team increased customer retention by 3%'.

Technology is a tool to solve a problem and all problems are people problems.

Ultimately, you need to have a balance between tech and talk, as the job is to translate between the two.

Lean too far on the talk and you end up in management; too far on the tech and you're writing software no one wants.


I would like to throw my 2 cents in that some of these advances around "new" ways do lead to some good things. The focus on service discovery is something that is incredibly helpful even for older technologies.

Consul is something you can add to even a very old application and give it modern scaling features, simply because it has a plain old DNS interface.
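To make the "plain old DNS interface" point concrete, here is a minimal sketch in TypeScript (Node), under a couple of assumptions: a local Consul agent answering DNS on its default port 8600, and a service registered under the hypothetical name "redis". Consul exposes registered services as <name>.service.consul, so even a crusty old application (or a thin wrapper in front of it) only needs an ordinary DNS lookup to find where to connect.

    import { Resolver } from "node:dns/promises";

    // Point a resolver at the local Consul agent's DNS endpoint
    // (8600 is Consul's default DNS port; adjust if yours differs).
    const resolver = new Resolver();
    resolver.setServers(["127.0.0.1:8600"]);

    // The SRV record carries the target node and port; a follow-up A lookup
    // on that target gives the address.
    async function discover(service: string): Promise<{ host: string; port: number }> {
      const [srv] = await resolver.resolveSrv(`${service}.service.consul`);
      const [host] = await resolver.resolve4(srv.name);
      return { host, port: srv.port };
    }

    // Usage: wherever the old app expected a hard-coded host:port.
    discover("redis").then(({ host, port }) => {
      console.log(`redis is at ${host}:${port}`);
    });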

Redis is similar for caching: it's advanced, yet the interface is simple to use, and it's easy to deploy and manage.

On the flip side, I'd like to add that I worked for a startup and we chose a lot of boring technologies because we had a job to get done, and they all just worked. :)


Not sure I agree with this perspective. Many of these companies are started by engineers or product managers who have seen the boring solutions not work for them. In some cases they either see it not work many times or see a future where other organizations get to that point and need a better solution. In digging into what these companies do it becomes clear that they aren't doing it just to push new tech, but maybe I'm the one who hasn't seen enough.

You mention k8s and related stuff isn't always the answer. Seems you think it's somewhat valuable? I'd like to hear what sort of tech you think is creating a bubble by being so useless! Certainly k8s can't be it, because for large companies the automation abilities of such a system are quite clear. Does it introduce new problems? Of course, and if they aren't well solved yet you'll have to think hard about the benefits of adopting The New Way, but to call this a bubble is hard to accept.


Many solutions are useful in a context, and were built with that in mind. My point is that much of the adoption of them is context-free. Eg. NoSQL solutions were largely about dealing with scalability challenges with SQL databases that most people didn’t have. Also the scalability has nothing to do with SQL per se, that was just marketing.

Kubernetes is a great piece of technology but it’s also hyped beyond the stratosphere. My point wasn’t about thoughtful adoption of Kubernetes in context; it was about buzzword bingo being played by those that want to use a set of newer technologies as weapons for their political battles or career aspirations. This requires active leadership to resist.


"bubble" has a few meanings, or interpretations, and one of them is definitely, in this context, "a technology becomes so popular it gets used for things it probably shouldn't have been used for".

I think the main reason why it gets used for things it shouldn't be used for is that engineers get all enthusiastic about using it, and look for a job that it can be used for. This is especially true when the tech in question is becoming a de-facto requirement in the industry and everyone needs to know how to use it to succeed at their next interview.

Hence the need for this presentation, which wouldn't be needed (or popular) if there weren't so many damn bubbles.


"Many of these companies are started by engineers or product managers who have seen the boring solutions not work for them."

Most of the time (not all, but almost all of the time), if a "boring solution" doesn't work for you, the problem is you, not the solution. There's a reason it is boring.


>if a "boring solution" doesn't work for you, the problem is you, not the solution

Were the people who hated their pagers and wanted cell phones idiots? All the people who were writing GUIs in C and debugging memory leaks...was the problem that they were shitty programmers, or is there something to be gained by spending more time on the business logic and less time on memory management?

It's healthy to see the shortcomings of current solutions, and to wonder if there is a better way. It's not healthy to chase shiny objects simply because they are shiny, or to massively complicate your tech stack because it helps marginally with a short-term problem.


> Were the people who hated their pagers and wanted cell phones idiots?

Tangential comment here. No, they are not idiots, but I can tell you from experience (my team and I build and manage a wide area paging network supporting 40,000+ emergency services first responders over an area of 230,000+ km²) that there are still advantages to paging protocols over 3G/4G in certain circumstances.

Coverage is one of them. Sometimes the best tool for a particular job is the older one.


"Idiots?" "Shitty programmers?"

Have you tried some of the modern decaffeinated brands? Some of them taste almost as good as regular coffee.


There’s a certain amount of ladder kicking involved in telling people to choose boring and beige technologies after you started up your career chasing after new and exciting shiny things.

Every developer should spend some time working at the bleeding edge, so they know how it feels to get cut. The best time is absolutely at the beginning, when you’re a fresh grad and have the energy. You have the rest of your life to work on boring stuff that pays the bills, and who knows, if you are successful with a new technology early on, you might just be working on it for a very long time, continually breaking new ground.

Choose bold technology early, then boring.


The problem is that those people who chose shiny new technology only see the benefit, while all the others in the company will rot in hell because of the stupid choice of a junior engineer (who jumped ship 3 times in the meantime). I don't see how that benefits the company. The whole point of the article is that you should make wise technology decisions which benefit the whole organization.


Yes, this. At my last job, I was a mid-level developer. I set up a dev ops pipeline. I moved a bunch of stuff to microservices. I did a whole bunch of other shiny things. It looks great on my resume.

Microservices were a mistake. I should have fixed the underlying problem with the DB (we were misusing the ORM) first. It's fixed now, but we still have to deal with debugging microservices which sometimes mysteriously fail.

There's a whole bunch of other cruft I sprinkled throughout the codebase. Good news is I left with good experience, because I made a few mistakes. Bad news is they still have that architecture.

That's why you always need a senior developer on deck to tell young whippersnappers like me "no" every so often.

Increasingly, I'm understanding the benefit of working on projects outside of work -- so you can make mistakes like that on your own time. I wish more people followed the Google model and gave you every Friday to mess around with your own projects.


I wished Google followed the Google model, because I work there.

20% projects are still technically a thing, but not many engineers actually have one.


For NDK users, that is what it feels like: a 20% project from the Android team, given how slowly it progresses.

It only took them 10 years to fix the header file mess.


> The problem is that those people who chose shiny new technology only see the benefit, while all the others in the company will rot in hell because of the stupid choice of a junior engineer (who jumped ship 3 times in the meantime). I don't see how that benefits the company. The whole point of the article is that you should make wise technology decisions which benefit the whole organization.

Sure, if your interests are closely aligned with the company's. But as you noted, the junior engineer jumped ship 3 times already, probably with a significant salary increase each time, and in part due to a CV that includes the use of exciting modern tech, which shows he's eager to learn and is passionate etc. Why would he care how the old company is doing?

People often don't stay at the same place long enough to feel the long-term consequences.

People will do whatever the (job) market rewards and is fun.

It's a bit different when you wear multiple hats. Then it may be best to focus on one specialization and keep things boring for the rest. For example at a small company, as a single-person data scientist/engineer/developer you may want to focus less on the IT fads and more on the modeling, because your CV is best padded by fancy new ML models, not by fancy new database engines. Or vice versa.


> Sure, if your interests are closely aligned with the company's. But as you noted, the junior engineer jumped ship 3 times already, probably with a significant salary increase each time, and in part due to a CV that includes the use of exciting modern tech, which shows he's eager to learn and is passionate etc. Why would he care how the old company is doing?

Sounds to me like this is an endorsement of hiring less-shiny candidates who have been at their previous employers for longer rather than the shiny candidate who keeps jumping ship every time.


True, this puts the onus on the CTO, not the junior developer, who will make lots of dumb choices, not just shiny-chasing.

But there's no reason you can't work a day-job doing boring technology and do fun stuff on your side projects.

Companies should actively support your side-projects, especially if they involve R&D experimentation that could be useful in the long run to the company. Whether that's paying for you to attend educational events or any books/online video courses/supporting SaaS services/etc or giving you 10-20% of your week to do what you want.


There are too many CTOs and senior developers out there who are too busy working on their pet projects to care.

The foremost role of senior devs (at least in a company with juniors and mid-levels) should be technical oversight. Set up the architecture, set standards, and do the code reviews.

Not sit in the corner and learn React.js.


Why would anyone pay a senior developer $100-200k+ to sit in a corner and learn React.js?

I was talking about junior devs making (predictable) bad choices not senior devs. Your senior developer should know better than to shiny chase and value delivery over cute technology, otherwise he's not a senior developer.

Some startups can't afford a senior developer or simply don't know any better... but even junior can pump out working apps. Then it's just up to the next evolution round when you build a professional team instead of some cowboys.


Once upon a time, there was value seen in the weird concept of "professionalism."


> Why would he care how the old company is doing?

Because at some point a recruiter or HR manager will ask him to provide a reference from his previous manager. And when he looks like an absolute ass there, he can pad his resume with whatever he wants; it won't help. Short-term thinking will not be rewarded in the long run in that case.


Yeah but what are the chances that someone from the hiring team will call an old reference and get told the story about how said developer's shiny thing chasing caused them a lot of pain? What are the chances that the developer gets a big raise from the new job because he has experience with a desirable shiny new technology?

The reason we have this phenomenon is that companies highly value (or even overvalue) experience with hot new technologies. So even if as a developer you know that maybe kubernetes, microservices, and NoSQL [or whatever it is] are not right for your organization, you know that having those on your resume will help you get a big raise and a promotion on the next job. The chances that your new tech chasing misadventures will haunt you in future jobs is very low.


> while all the others in the company will rot in hell because of the stupid choice of a junior engineer

If a team's management can't prevent this from happening, then they've got bigger problems to worry about.


True. But some people will do everything (including ruining team morale) to play with their new shiny toy, and it's usually easier to do that damage than to take steps to prevent it. But you are generally right.


>But some people will do everything (including ruining team morale) to play with their new shiny toy

Because everyone has different interests/goals. The interests/goals of a junior developer are different from those of a senior developer, even though both work at the same company.

It might very well be in the junior developer's best interest to play with the shiny toy; if that's the case, then that is what they should do.

For the senior developer, though, it might be in their best interest to prevent that.


Surely what you state here can be said of any advice given due to experience. As long as the natural inclinations of younger developers push up against said advice, it's completely sane to give it. Because without it, young developers will have an incomplete mental model, and will lack the intellectual tools to potentially make better choices early in their career.

Also it seems to miss the point somewhat: choosing 'boring' technology has little to do with working on things that are unfulfilling. On the contrary, when tech stack choices are the thing from which developers derive their excitement from, in my view, it's often "empty calories" masking the lack of "emotional nutrition" they are getting from the problem they are solving or the users they are serving. For every whiz-bang neural network getting users to click on ads, there's a 'boring' embedded system running on an ancient C framework somewhere putting satellites in space. I'd rather work on the latter any day of the week.

As Hamming said, "if you're not working on the most important problem in your field, why not?"


Because my family need feeding and housing.


I gather your response here is meant to be a retort to the idea that you should be working on the most important ideas in your field, or that the question by Hamming implies some sort of judgement if you are not. That's not the point of the question. The point of the question is for you to reflect on the reasons you are not, as a means of helping you guide your future choices, insofar as working on things that are important matters to you. (Which, for some, may not be important at all.)

Many people do not reflect on the reasons they work on the things they work on, or on the weight they give to the import of the problems they are solving, and so need such a question to be asked to help them to understand their needs, motives, and goals. When I first read the question I was in a situation where yes, I had financial needs that were pre-conditions for a job, but I failed to recognize the degree of choice I had over the problem space or system I was working in with regards to focusing on problems I cared about solving (or, more commonly, avoiding working on problems I considered meaningless or superficial.)


The problem is actually finding jobs where the problem itself (rather than the tech stack) is compelling. Personally, I haven't seen much of that outside of academia or the elite of the elite Google X type roles.


I freelanced for a guy who ran a cabinet factory, helping him integrate his automated fabrication equipment with his customer ordering system. Dead boring old technology, but extremely interesting to me learning about wood-working and stuff.

I don't know if I have a point, but I bet if you're not picky about tech stack (at ALL), you can find some really cool work to do in non-tech industries.


Isn't it a bit silly to presuppose that what we have now is the best we are ever going to get? I mean, files were state of the art at one point, but I don't think anyone will suggest we go back to punchcards.

"Boring Technology" is the wrong thing to aim for. Aim for simplicity.


Which JavaScript framework is the simple one?


The simplest is the one that doesn't exist ;)

That is: you're presupposing that a JavaScript framework is actually necessary in the first place. I'd argue that the whole concept of a JS framework is the epitome of chasing after shiny and complicated things. If instead you start with just plain ol' HTML and incrementally add CSS and JS as actually needed, you're much more likely to end up with a much simpler and more maintainable end result (or, at the very least, you will have figured out the actual and specific reasons why you might need the additional complexity of, say, Sass or this week's JS framework).
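A minimal sketch of that incremental approach, with made-up names for illustration: the assumed markup is a plain form that posts to a hypothetical /subscribe endpoint and works with no script at all; the TypeScript below only layers an enhancement on top when it happens to run.

    // Assumed markup: <form id="signup" action="/subscribe" method="post">
    //                   <input name="email" type="email" required>
    //                   <button>Subscribe</button>
    //                 </form>
    // Without this script, the form still submits as a normal POST.
    const form = document.querySelector<HTMLFormElement>("#signup");

    form?.addEventListener("submit", async (event) => {
      const email = form.querySelector<HTMLInputElement>('input[name="email"]');
      if (email && !email.value.includes("@")) {
        // Catch obviously bad input early, using the browser's own validation UI.
        event.preventDefault();
        email.setCustomValidity("Please enter a valid email address");
        email.reportValidity();
        return;
      }
      email?.setCustomValidity("");

      // Optional further enhancement: submit in the background instead of navigating.
      event.preventDefault();
      await fetch(form.action, { method: "POST", body: new FormData(form) });
      form.replaceChildren(document.createTextNode("Thanks, you're subscribed!"));
    });

If the enhancement ever becomes a maintenance burden, deleting the script still leaves a working page, which is precisely the point.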


Mithril? https://mithril.js.org/

The downsides might be (I'm guessing) performance for very complex apps and lack of pre-built third-party components.

There is definitely a spectrum of simplicity among the frameworks. Angular seems to sit on the opposite side of the spectrum. It has many benefits, I'm sure, but in every aspect they've chosen the more complicated solution.


VanillaJS!

But seriously while Svelte is the new kid on the block, they do claim to have minimal API design and no virtual DOM. So I guess the real answer is Svelte.


It does have a compiler, though. So compared to the old-fashioned JavaScript workflow it adds a mandatory build step. And I don't think it would work with languages that compile to JS – or does it?


So much this! I mean, the author also worked for 7 years at Etsy, which did ship quite a lot of new things (bad and good). For me this is similar to telling a 20-year-old: don't go out, stay home, watch TV, you'll be less likely to get injured.


The problem is that a lot of shiny new technology is not cutting edge, nor bold. It's just new, and often done more poorly than boring old technology. Also, for someone fresh out of school: you should understand what the status quo is capable of, otherwise you can't possibly understand what a new technology is really offering.


There's a difference between bold technology and new technology.

A lot of new tech is shiny and provides legitimately useful benefits. Think the new wave of frameworks that started with Rails and Django.

There's a lot of shiny tech that has been a gigantic waste of everyone's time, like MongoDB and people trying to shoehorn MapReduce into every possible computation.


I feel like that's terrible advice for a fresh grad. If they keep getting cut early on, they will lose confidence and start doubting every decision they make, losing motivation to work on anything and burn out.


If they are always overruled when they want to try something new, they will also lose confidence and motivation.


The interesting thing here is that I find many young engineers want to work on the newest tech, however, they also want the 10-5 schedule.

If you want to work on new and exciting, you have to be willing to own the consequences and stay up late digging into the bugs, learning the ins and outs, and committing to delivering what you said would be delivered. </endrant>

TLDR: If you want to work on interesting, it'll take more commitment.


Another case of someone discovering, after 10+ years in tech, that code is a liability and you're supposed to solve problems instead of chasing trends and padding the resume.

Great that he's spreading the word!


> you're supposed to solve problems instead of chasing trends and padding the resume

Yeah I thought that until I had to look for a new job and my resume just had boring old technology on it. Now I make sure I do some padding!


Agreed; the commenter above you might have thought resumes and interviews are related to the job we ought to be doing, and they aren’t.


History is filled with re-discovery and explanation of things that were blindingly obvious to the generation that came before.


"Those who cannot remember the past are condemned to repeat it."

― George Santayana

"I've got news for Mr. Santayana: we're doomed to repeat the past no matter what. That's what it is to be alive."

― Kurt Vonnegut


"Those who can remember the past are condemned to watch everyone else repeat it."

-- I have no idea who originally said this, and DuckDuckGo doesn't seem to be especially helpful in enlightening me on the origin of this particular corollary to Santayana's quote


On an individual level, Vonnegut is wrong. On a societal level, he is right. Santayana is still right.


There's even an expression for it: reinventing the wheel.


No, reinventing the wheel is alright; at best you may find a new and better kind of wheel, at worst you'll still learn something. This is forgetting that wheels can be round.


Our lifetime is finite. Given the choice, I'd rather invent the car than reinvent the wheel.


It's nice that different people have different interests. I, for one, find materials science fascinating and car engineering rather bland.


Thus, reinventing it, no?


Learning from the past isn't to prevent you from reinventing the wheel but from reinventing the pothole.


Nothing wrong with reinventing potholes...

https://www.indy100.com/article/this-man-is-painting-penises...


Only for those who don't understand the euphemism :). That guy actually learned from past mistakes and avoided reinventing them (methods that never worked).

What the original saying is about is pretty clear from the words "condemned" or "doomed" to repeat it (depending on the version). [0]

The saying is not about "reinventing the wheel" as in "refining/reinventing something already positive". It's trying to steer people away from major mistakes. Like ones that lead to great human tragedies and loss. Of course, everything can scale, it can affect an individual or the whole world.

[0] https://en.wikiquote.org/wiki/George_Santayana#Vol._I,_Reaso...


The funny thing about that expression is that wheels have never been reinvented, as far as we know. The Sumerians had them, and the technology spread through contact with other civilizations. But as far as we know it was not re-invented anywhere else.


I think people at the same epoch living in other continents did reinvent them.


That's independent invention, not re-invention. Reinvention requires that you could conceivably be aware of / be using an existing technology, but instead you choose to create it from scratch yourself.


Fair enough, I guess you're right.


No, your mandate is to advance your career, not solve problems. If you’re rewarded for resume padding more than problem solving then keep resume padding.


It is nice to be idealistic, but what you wrote is true. It's just like how people think companies are doing the best things for their customers: companies do what is best for them, and it usually happens to align with what customers need, but sometimes it is better for the company to do something that is not good for its customers. The same applies to employees: most of the time what you do aligns with company goals, but when it does not, you have to change jobs or do something else.


Chasing trends is a problem, but suffering through Java <5 is too.

You can deliver so much more, so much safer using other techniques...


I think it makes sense to keep up with the latest version of the programming language you use, and keep updating the codebase.

The important part is not introducing an entirely new language that no one knows, or changing around the entire architecture without really being able to understand the long term consequences.

Also, after moving to C#, I don't know why anyone uses plain old Java anymore. C# runs on Linux now. It's not the 90s anymore. Oracle is the dinosaur and MS now is kind to its developers. Plus I'm hoping that client-side C# will take off with WebAssembly.


As a previous fan of C# and a long time Java sufferer, I'm pretty sure it's too little, too late for C#. Sadly, years of being locked to Windows has stunted the C# ecosystem and given Java enough time to get less horrible.

Java 8+ is okay. Lombok helps. There's other great, mature libraries around. There's a choice of IDEs. There's JREs to choose from - there's even JVMs to choose from. Gradle makes build files not suck and makes it easy to add static checking and linting to the build process. AFAIK, C# doesn't have auto-formatting options outside the IDE.

As much as I used to have your view, Java is alive, works fine, and makes money. Or Minecraft mods.


Many consulting shops actually use both; they see it as an asset instead of waging platform wars.

Not only Java vs C#, rather JVM and CLR languages.


Because there are plenty of platforms where Java runs and C# does not, the world is not constrained to Linux and Windows.

Copiers, phone switches, IoT gateways, SIM cards, military control units, drones, Intel ME and a couple of phones.

And all of them have libraries that work across all JVM implementations, regardless of the OS, not the hit-and-miss that still happens between Framework and Core, without a good story for how to go cross-platform [1].

As for Oracle, they rescued Java while no one else cared [0], kept Maxine alive by transforming it into Graal, and made several improvements to the language, while allowing jars from 20 years ago to still run unmodified in modern generations of the runtime.

I use both platforms since their initial releases. Each of them has plus and minus.

[0] - Only IBM did an initial offer, that they withdrew afterwards.

[1] - Naturally there is Xamarin, but it isn't without its issues and doesn't run everywhere where Swing, SWT or OpenFX run.


> MS now is kind to its developers

As long as you like telemetry: https://github.com/dotnet/cli/issues/3093


> The important part is not introducing an entirely new language that no one knows

> I don't know why anyone uses plain old Java anymore

You see, if we refuse to change, we can't ever get better.


Sorry, I meant on new projects. I'm not talking about using the latest and shiniest thing, I mean using C# or Python or some other language.

However, if you're strongest in Java then you should probably do Java.


Code is not "a liability".

It is a tool that can be used masterfully or utterly abused.

Every craftsman has to invest in their tools. And for someone to truly master their craft they must sometimes hone their tool skills speculatively, without short-term gain, and by sacrificing the time and attention used for other things, like actual projects.

If everyone just obeyed their project manager and always focused 100% on the problem at hand, we would be producing sad, boring things. If you want to see what that's like look at enterprise applications.


Code absolutely is a liability. It is not a tool, it's a means to an end.

A simple tool, like a hammer, is not volatile over time. The house that you build with your tools is. It will weaken and rot, and given enough time it will crumble. It takes care and maintenance to keep it safe to live in.

It is the same with code in production; at some point in time in the future, through no fault of your own, the code can become "wrong" as reality changes around it.

Code is also costly to maintain, sometimes even for small code bases. As the world changes, so must the code.

One of the definitions of "liability" in the Merriam-Webster dictionary [0] is "a feature of someone or something that creates difficulty for achieving success". This definition is often met by code.

[0] - https://www.merriam-webster.com/thesaurus/liability


Strictly speaking, code is both an asset and a liability (as in double-entry bookkeeping, you can’t have one without the other). It depreciates over time.

A simple tool like a hammer is rarely treated like an asset as it is expensed up front.

Code can generate cash flow through its replication and exchange, or its application. Whether this is higher than its depreciation and maintenance costs determines whether you’re losing money on it.

The above is an analogy of course. In actual financial accounting we typically only see code or “knowledge capital” on the books after acquiring another company, usually as a fat chunk of “goodwill”.


> It is not a tool, it's a means to an end.

You could almost call it "something (such as an instrument or apparatus) used in performing an operation."[0]

[0]https://www.merriam-webster.com/dictionary/tool


If you are going to use my own trick and use the same dictionary against me, at least have the common decency to also use my own words against me!

Definition 1C in the link you provided lists "a means to an end" as a definition of a tool, which I explicitly mentioned code was.

So I must concur, code is a tool. But I'm adamant that it is also a liability.


It may not be sexy, but a good chunk of civilization runs on sad, boring enterprise applications. I for one am proud to build apps that are useful enough that businesses are willing to spend a large amount of effort and money to use them. Even if they are so ugly that they occasionally cause users’ eyeballs to bleed.


Code is absolutely a liability. Every line written must now be maintained which incurs an ongoing cost.

What was missing in the gp comment is that code is also an asset. The goal is to have the value of the asset greater than the liability. This ratio changes over time given a multitude of factors like problem domain and how well the original code was written.


Agreed. A favorite quote: “[E]verybody wants to build and nobody wants to do maintenance.” - Kurt Vonnegut, Hocus Pocus

For a long time, my bias was to write code when faced with a problem, without realizing the debt I was taking on and the other solutions available.

https://www.nemil.com/on-software-engineering/dont-write-cod...


In the real world with limited resources, limited skills, limited budgets and limited brain power, yes code is a liability. Not only a liability, but a liability among other things.


>In the real world with limited resources, limited skills, limited budgets and limited brain power, yes code is a liability.

That's an unbalanced conclusion. Code can be a liability but it can also be an asset.

Do we have limited resources? Maybe writing new code can help. E.g. In cars, the electronic fuel injectors have computer code determining the optimal amount of gas to inject to minimize fuel usage. Fuel efficiency is improved over older "non-computer-code" carburetors. Computer code is an asset that's equivalent to discovering new proven oil reserves.

How about the limited resource of bandwidth on networks? Code and algorithms like h.264/h.265 that shrinks video enough to make it practical to stream to homes maximizes the existing network without having to lay a bunch of new fiber optic cables to every house.

And consider all these YC companies that aren't worth much today. How do they become more valuable? They need to write more code. New features that customers find valuable requires coding. Even if the startup's code does not have patents, it will still be part of the tech portfolio assets that the acquiring company will include in the purchase.

The "code is a liability" seems to be true if one limits their focus to negative situations like npm leftpad and the Boeing MCAS accidents. In contrast, I don't think the astronauts thought the Apollo Guidance Computer was a liability if it helps them get to the moon.


Code may be necessary to write software, but it's still a liability. All else being equal, accomplishing a task in n lines of code is worse than accomplishing it in n - 1 lines.

That's where those aphorisms come from. No code runs faster than no code! No code has fewer bugs than no code! No code is easier to understand than no code!


We all know that. We are not arguing with that. We all understand that code is always written to solve a problem, make something more efficient, or create new functionality. In that way it is most of the time an asset. That is a given. What we are talking about is a higher-level view, which you get from a lot of experience in big codebases and big projects, in which code is almost always also a liability. It means that when you write a piece of code, the wise thing to do is to always assume that it will be a liability (in addition to all the wonderful things it does) to someone, in some situation, at some time. That way one can more easily think through all the ways this liability can show itself, and thus somewhat mitigate them. Returning to the original topic: choosing a stable, mature language/system which a lot of programmers know, instead of a new shiny one (even though the shiny one might be 5% more efficient in lines of code or something), is a great way to reduce the liability surface in the future.


> That's an unbalanced conclusion. Code can be a liability but it can also be an asset.

maybe that's why he said:

> Not only a liability, but a liability among other things.


>maybe that's why he said: > Not only a liability, but a liability among other things.

It was my mistake then, and I misinterpreted "among other things" to mean: "Code is not just a liability, but a liability among the other things I just listed, like limited resources and limited brainpower" instead of "Code is a liability, among other things I won't say explicitly, like code also being an asset".

Because I interepreted it the 1st way, I wanted to emphasize the opposite view: that limited brainpower is why code can be an asset to relieve humans from mundane boring calculations to redirect that precious brainpower to higher level creative thinking. Limited resources also motivates new code that can be an asset to expand those resources.


I can understand being confused by that wording, it wasn't the greatest. :)


Code is a liability; functionality is an asset.


If you want to spend time learning new tools, that’s fine. Don’t do it during a critical project. Don’t make your learning projects a liability for your colleagues.

Craftsmen are experts at applying a limited set of tools. That’s why they’re craftsmen. You hire them because they’re the best at doing the thing you hire them to do.


> My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.

~ Dijkstra (1988) "On the cruelty of really teaching computing science" (EWD1036).


Exactly, if it was a liability, you would simply delete it, because unlike real liabilities, it's legal.


You don't delete it because it's a liability in the maintenance account but it's an asset in the features account.


So everything is a liability?


In the double entry bookkeeping sense that GP is using it, yes, everything is a liability. They’re also necessarily equal, which I think is where the disconnect between the analogy in the post and the analogy here comes from.


> If you want to see what that's like look at enterprise applications.

If you think that is an argument against enterprise, you might want to take another look. I spent the early part of my career in enterprise, then moved to startups that serve government customers. Every app I have ever written is boring. I've been working on one boring app for most of the last 7 years. 7-figure ARR, and acquired, but boring.

Yet I consider my career, and those apps, to be great successes.


Coding is a tool.

Code is the result of that tool. If you use a tool to build something you're constantly repairing/maintaining, then it's a liability.


I thought this was one of the best articulations, though. All it needs is a CC-BY-SA license so it can be used outside of one blog.


I think the best example of this is IDE and text editor preferences.

They vary by region, programming-language subculture, and age.

The proponents of Atom/VSCode/Sublime/VIM etc will swear by it to the level of gatekeeping other competent software engineers away from them if they don't use that particular editor, and they all do - or can do - the same thing.


There's a longer term issue that appears to be missing here. At what point do you change? There must be a point otherwise we'd all be here writing COBOL/CICS with some whizzy Javascript interface library.

Over time it becomes harder and harder and more and more expensive to maintain old technology. Because, frankly, maintaining old technology is pretty boring and career-destroying, so you need to be paid more and more to do it.

The marginal benefit of the new shiny is indeed an overhead you should try and avoid, but also you have to dodge the trap of being stuck in a technology when everybody else has moved on.

Anyway back to the Adabas support...


Recipe for getting things done:

1. Place every new & cool technology into mental quarantine for 3-5 years.

2. If, after that:

a) the tech is still widely used, and doesn't seem to be getting overtaken by something else,

b) you're about to start a NEW project where the tech would help,

c) you're not in a rush and feel like trying something new,

...then go for it.

Learning complex tech that just arrived is a waste of your life, if you want to accomplish things. It's only useful if your aim is to appear knowledgeable and "in the loop" to your developer peers.


This works, but it does force you to ignore new enabling technologies that make new use cases economical, where most innovation and value creation is.

You'll be very efficient at accomplishing old use cases, which is just as well, because you'll need it - the market for them is most probably commodified, very competitive, and with low margins. Dominated by big actors with economies of scale. Not a comfortable place to be.

You'll probably get into very few new use cases, because by the time your 3-5 year quarantine passes on their enabling technology, they're already explored, if not conquered, by several competitors. The exception to this is new use cases without a new enabling technology, but those tend to resemble fads; you'll have no tech moat, so again they'll be a very competitive, low-margin market.

New techs create value only when they solve some use case more efficiently for someone. This creates value. Not all new techs do this. God knows that especially in software engineering, people deliver new techs for fun and name recognition all the time. Managers allow it as a perk, because of the tight labor market. But it's a mistake to consider all new techs to be in this later category.

New techs are also tech moats, giving you negotiating power to capture some of the created value. Without the tech moat, you better have a non-tech one (regulatory, institutional, market structure, economy of scale) because otherwise the value you capture will quickly be pushed to your marginal cost - if that - by competition.


You're talking about a tech stack, which is not the same as, for example, a modern web application. You can build a perfectly modern web app with 'old' Java/JEE stack, backed by an unsexy SQL-based database. You don't need Node.js with MongoDB.

Tech stacks very, very rarely enable new use cases. They are the equivalent of fashion statements by young developers who haven't learned what it means to support software for 10-20 years.


Do you similarly feel like frontend stacks have seen no meaningful innovation?

I think your argument works fine-ish for backends but it's bananas to suggest that jQuery is the same thing as React or Svelte. I do security for a living and maybe 100% of all jQuery sites have XSS. If I find a React page I can just grep for dangerouslySetInnerHTML and I'm 80% of the way there. (I am exaggerating, but hopefully my point is clear: from both a development perspective and a safety perspective, React is not just a shiny new version of jQuery.)


So I do think front end stacks have come a long way over the past decade... but just to give a counter example...

I have seen a lot of sites get worse as a result of migrating from server-side rendering to client-side rendering. Things like broken back buttons, massive page sizes, and terrible rendering performance on low powered hardware.

An example that comes to mind is Salesforce. They introduced a new “Lightning Experience” that seems to use a newer stack. But it’s taken them ages to migrate all of the features over, forcing users to switch back and forth between the old and the new. It can also be a real beast to render on something like a Surface tablet. It must be costing them an enormous amount of money to support both UIs, and I have to wonder if it was really worth it vs. maybe applying some new CSS and more incremental improvements.


The procedural nature of jQuery just makes for buggy-as-hell websites as well. Manual DOM updates, etc.

React being 'declarative' tends to end up with more UX stability (e.g. complex search forms). It makes the integration of third-party components smoother too.
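A rough sketch of the difference (hypothetical element IDs and a made-up counter, assuming jQuery and React are available as dependencies; not taken from any real app):

    import $ from "jquery";
    import React, { useState } from "react";

    // jQuery: every state change needs a hand-written DOM update, and any
    // code path that forgets one leaves the page out of sync.
    let clicks = 0;
    $("#inc").on("click", () => {
      clicks += 1;
      $("#count").text(String(clicks));
      $("#reset").prop("disabled", clicks === 0);
    });

    // React: describe the UI for a given state and let the library reconcile
    // the DOM, so the count and the button can't drift apart.
    function Counter() {
      const [count, setCount] = useState(0);
      return React.createElement("div", null,
        React.createElement("span", null, count),
        React.createElement("button", { onClick: () => setCount(count + 1) }, "+"),
        React.createElement("button", { disabled: count === 0, onClick: () => setCount(0) }, "reset"));
    }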


Sure! I’ve written large apps in it and am familiar with the technology. My point is that whatever the reason, frontend clearly has made strides. So, is it:

1. Frontend was less mature to begin with

2. Frontend has a unique, definitional state management problem in the DOM

3. Actually, we can make real progress sometimes

4. Really, frontend hasn’t made strides, you’re just ignoring $x

5. Several/none of the above?

(I think real progress is possible and disillusionment with some new technologies should not prevent us from trying. But also that the DOM's unique state management problem is so overt that functional approaches almost couldn’t help but dominate.)


What is a new use case that React brought in that couldn't be replicated with plain old JavaScript?

Browser capabilities are game-changers, not a tech stack that runs on top of them. I don't need React or Angular or Node or whatever to make use of them. I can use those capabilities with plain old Java Servlets and JavaScript.

React is a shiny new jQuery - that's all it is. WebAssembly, Canvas, WebRTC, etc. those are something different. Those enable new use cases.


Concepts and abstractions, like the virtual DOM, matter. Just because you could replicate it in an abstract sense (of course you could! It’s a JS library) doesn’t mean anyone actually could in practice.

Thought experiment: why does your argument not apply to, say, C? Why bother doing new language or library design? It’s all int 80h eventually.


I'm not saying abstractions are bad. They make writing code easier for developers. It is easier for developers to write a web-app served by Node.js rather than a standalone C program.

I'm taking the perspective of the end-user. From that side, whether the application is written in C or Java or C# or JavaScript makes no difference because the end-user never knows or cares what the underlying language their app is written in anyway. The platforms are game changers; platforms like the PC, like the internet, like the web, like the smartphone, like the browser. Those enable different use-cases. They are the ones that drive broad societal changes.

By the way, I do think the virtual DOM is either a fad or simply an overstatement. What I mean by overstatement is that batching updates is one of the most normal things developers have been doing, from that perspective there's nothing new here.

From a fad perspective, there is no reason why the regular DOM cannot be fast and schedule batch updates to optimize framerate (and with one less level of indirection). The virtual DOM may actually be a problem in and of itself because it makes an assumption that it knows about how to schedule updates better than the actual DOM - even if that is true today, why would it necessarily be true tomorrow?
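To be concrete about "the regular DOM scheduling batch updates": here's a minimal sketch (hypothetical names, not from any real framework) of collapsing many writes into one flush per frame without a virtual DOM:

    // Queue DOM mutations and flush them at most once per animation frame,
    // so repeated writes in the same tick don't trigger repeated layout work.
    const pending: Array<() => void> = [];
    let scheduled = false;

    function queueUpdate(mutate: () => void): void {
      pending.push(mutate);
      if (!scheduled) {
        scheduled = true;
        requestAnimationFrame(() => {
          scheduled = false;
          for (const fn of pending.splice(0, pending.length)) fn();
        });
      }
    }

    // Both writes below land in a single flush on the next frame.
    queueUpdate(() => { document.title = "1 unread message"; });
    queueUpdate(() => { document.querySelector("#badge")!.textContent = "1"; });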


Doesn't XSS require a backend that can receive and then transmit malicious javascript from a hacker using the site to a victim accessing it? And wouldn't that be the case whether the front end was done with jQuery or React?

I'm very hesitant about my assumptions here, and I am confident I'm missing an important point. So if you can clear up my understanding I appreciate it.


Stored XSS requires some sort of backend, yes, but reflected and DOM-based XSS does not. Furthermore, all XSS is some variant of a contextual confusion where something that wasn’t intended to be interpreted as JS suddenly is.

jQuery makes XSS more common in several ways, and some of them are really just the influence jQuery on the frontend has on how the back end works. Some of those ways are pretty subtle, eg CSP bypass gadgets in data attributes (which are very commonplace in jQ libraries). By contrast, React, by building a DOM, has contextual information that jQuery lacks. Go’s HTML templating is unique on the server side in that sense since it too actually understands if it’s in a text node context, a script context, an attribute context, or an inline JS (such as onclick) context, and hence the correct way to treat arbitrary input.

Of course, it’s not because you use React that you’re immune. I got XSS against every site that used ZeitJS, for example. But the pattern that led to that (generated code dumped into a script tag) is a common pattern for pre-React frameworks.
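To make the context point concrete, a small sketch (hypothetical userInput, assuming jQuery and ReactDOM are available) of the same attacker-controlled string flowing through both libraries:

    import $ from "jquery";
    import React from "react";
    import ReactDOM from "react-dom";

    const userInput = '<img src=x onerror="alert(document.cookie)">';

    // jQuery: .html() injects raw markup, so the onerror handler fires.
    $("#comment").html(userInput);

    // React: children are rendered as text nodes and escaped by default,
    // so the same string shows up as harmless literal characters.
    ReactDOM.render(
      React.createElement("p", null, userInput),
      document.getElementById("comment")!);

    // The unsafe path has to be opted into explicitly, which is what makes it greppable:
    // React.createElement("p", { dangerouslySetInnerHTML: { __html: userInput } });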


I'll raise you a Windows service that communicates via singleton .NET Remoting, hosted with a WebForms 2.0 app or a WCF SOAP service. Both talk via Remoting to the Windows service, which talks to a SQL Server choked with materialized views.

Is that boring? It certainly has some issues.


> It's only useful if your aim is to appear knowledgeable and "in the loop" to your developer peers.

Which means you get respect and better job offers.


I spoke once with a Microsoft consultant who was advising us on upgrade strategy. Our customer had a mandate to be at least on version N-1 - that is, on the latest major version or the one before - so at the time we were migrating off Windows 2003, as Windows 2012 was going through internal validation.

He mentioned that at a bank he'd been advising, the mandate was the opposite: at most they could be on N-1. They were in the exact same position as we were, except that they were migrating to Windows 2008 and we to Windows 2012. In practice the N-1 mandate meant we'd upgrade services every other release, except when there was a waiver, which was often - and which explained why, when I left in late 2012, we still had some Windows 2000 boxes.

As a techie, it was always a pain going back to an old box: you'd try to do something and realise it was not possible because that feature had only been introduced in later versions. Even worse was when it was possible but extremely convoluted and error-prone.

It's interesting how everybody thinks it's career suicide to support old stuff when in actual fact most people are hired for a mixture of their knowledge and their capacity to learn. I appreciate that it's lower risk to hire somebody with experience on the exact product, but would you rather have an extremely good engineer with no experience in your stack or a good one with experience in your stack?


In personal life, I do something similar, actually. I have a state-of-the-art digital camera circa 2012, a state-of-the-art camcorder circa 2011, and similar. I'm always around five years behind the tech curve. The cost is often 1/2 to 1/10th, and I'm not sure I lose anything by being five years behind in terms of happiness or much of anything else.

As with anything, there are exceptions. My phone needs security updates, and 4k displays make me more productive at work, so there were places I bought the latest-and-greatest. And when I need to develop for a platform, well, I get that platform.

But for personal life? A used iPod touch 4th gen sells for $20 on eBay. XBox 360 can be had for around $40. Exceptionally nice digital cameras from a decade ago -- workhorses professional photographers used -- can be found for around $200.

The way I figure, I just behave as if I were born five years earlier. I use the same technology at age 25 as was available at age 20.


> I have a state-of-the-art digital camera circa 2012

This is a good strategy for anything with a high/quick depreciation curve. My DSLR body is pretty old now, but still works great (a D7100). The tech in bodies changes quickly so even waiting just a short period of time can save significant money. Spend money on lenses instead which hold their value and typically can be used across many bodies.

Cars are similar. My truck is a 2011, and I have no plans to buy a new used one anytime soon.


Agreed on camera bodies and lenses, but in my view a 2011 car is still quite new. I guess this depends on the country, taxation etc.

IMHO it makes sense to buy a used car at about 300 thousand kilometers. At that point it's cheap, it's already had a bunch of expensive parts replaced, and if it's survived this long it has a high chance of going another hundred thousand (given proper service, obviously).

Of course another point of view is that getting a car serviced is stressful, so it's best to buy new. But then it's even less stressful to mostly ride a bike and use a taxi or rental car when needed.


It's certainly a novel approach in this era where everybody seems to want to have the latest and greatest.


Counterpoint: I spent my first 3 years at a major company coding in Java and never actually learned anything there except how to work with people. It was all adding if statements to a gigantic file because no one there knew what they were doing.

I worked in an HR company and didn't learn much.

Then at my last job I worked under a really smart guy who did everything the right way, and I'm way better now. If I had started at a company like that, I would be much farther ahead now.

However, the real thing to know is how to architect a project properly with tests/dependency injection/layers, not all the newfangled technologies.


For me, change happens when I see a real improvement in almost every way possible which is usually determined by building a few things and letting my brain simmer on the technology as a whole so I can look at it with a logical and unbiased perspective.

I remember looking at Node when it first came out and got mildly excited, but that excitement quickly went away after writing a couple of small apps with it. It just wasn't for me. The same thing happened with Go. I didn't see enough wins to switch over to using either of them for building web apps.

On the other hand, for me Rails and Flask stood the test of time. Nowadays I'm working with Phoenix and I'm past the hype phase and it looks to be another winner. But in all 3 cases (Rails, Flask, Phoenix) I typically don't switch away from them for good. They just become another tool that I know. Of course I will gravitate towards the one I'm the most happy writing code with, but it's not a black / white progression from 1 to the other.

I don't think there's really a definitive answer on when to change. Like 3 weeks ago I picked up a Flask contract and it was a pleasant experience, even though I'm using Phoenix to build a side project currently. You don't always need to throw the old thing out. You can use multiple technologies in harmony. You change when you start dreading writing code in the old thing, which is very user specific. In other words, write more code and you'll quickly find out what you like and dislike.


Go is a compiled language and Ruby/Python are interpreted scripting languages. There are domains where it's a much more appropriate choice (distributing binaries, performance sensitive code). The type system is also quite nice vs. dynamic typing (in most situations). It's weird to see people comparing Go and Python in this thread as they solve entirely different problems and shouldn't be interchangeable, not due to developer preference but due to fundamental features of the language.


> It's weird to see people comparing Go and Python in this thread as they solve entirely different problems and shouldn't be interchangeable, not due to developer preference but due to fundamental features of the language.

Yes but when Go first came out, a lot of people jumped on the bandwagon and started proposing they would use Go for web applications too. There's definitely some overlap in building web services with Go and Python so I wouldn't say they solve completely different problems.

Go and Python are also pretty related for command line apps too. You could totally use either one to build a CLI tool.


> Go and Python are also pretty related for command line apps too. You could totally use either one to build a CLI tool.

Distribution of Go CLI apps is much easier as you don't need to have your end users install the 3rd party libraries themselves.


Yeah totally, for using CLI tools I much prefer using a Go binary too because it doesn't involve installing anything.

But in practice as a web developer who occasionally writes CLI scripts, I personally didn't think it was worth going all-in with Go for that.

Especially not when for smaller scripts you can have a Python or Bash script in 1 file[0] and it will run on all major platforms without installing 3rd party libraries too. Most major distros of Linux, MacOS and WSL on Windows have both Python and Bash available. For my use cases that's good enough.

[0]: For example just the other day I released a ~200 line self contained Python script to solve a problem I had which was figuring out what changed between 2 web framework versions: https://github.com/nickjj/verdiff


Given the broad capabilities of the Python first party libraries, you can do a lot of work without 3rd party libraries. It’s not in as much fashion as it was 10+ years ago, but it’s still quite doable.


This is true for Python, too, albeit quite a bit harder due to the relative lack of first-party tooling for generating standalone executables.


As far as I'm aware Go doesn't return the exit status of a process the way Ruby and Python do. Surely this is a big disadvantage for CLI scripts?


Sure it does: https://gobyexample.com/exit

If you mean Go can't read the exit status of a command it runs, that's incorrect as well: https://golang.org/pkg/os/exec/#pkg-overview


I don't know about that. I do a lot of web development and Go is really very nice as a web server.

It's extremely simple and pleasant to use. All it needs is generics and it would be my go-to for most web services.


> fundamental features of the language

The thing is you’re right, Go compiling to a self-contained binary is different from a folder of .py scripts.

But both can be deployed into production.

The deployment steps are different but the outcome is the same, so they can be used interchangeably.


If you use third party Python libraries, the end user is going to have to install them too. Python really isn't a great language to be building consumer distributed command line apps in.

I think a lot of this discussion is focused around custom software or backend software, but for a publicly distributed binary, Go or any other compiled language is much better than Ruby or Python (and especially Javascript).


That isn't quite true re Python, eg https://stackoverflow.com/questions/2933/how-can-i-create-a-...

Go might be a special case actually, as it was designed to be a "boring" language to reduce the cost of technology choice. But it is completely interchangeable with similar programming languages (like Python) so evaluating the cost of it vs something else is still a very reasonable thing to do.


They do overlap. I don't see how someone can be confused about this at all because the overlap is obvious.

Python and golang overlap for http webapps or apis. They both can and often are used for this purpose.


Go has other problem domains it is appropriate for, but I agree in this domain there is some overlap. Go and Python have very different performance characteristics though, so in that sense they're not really comparable.


> Ruby/Python are interpreted scripting languages

This is not quite right, since they both compile to bytecode and execute in a virtual machine.

Shell scripting is probably one of the rare examples of truly "interpreted" execution.


Python can be compiled to bytecode, but that's not the default or standard.


It occurs with every execution, if you don’t pre-compile it. That’s what the .pyc files are. It also does it with the “main” file, but it just keeps that in memory instead of writing it to disk.


I'll contradict everyone here: You figure it out on a case-by-case basis.

Generally, risks go down over time and with broad use. SQL, as a technology, is 100% risk-free, having been around forever, and widely used. COBOL is high risk, since while it's been around forever, hardly anyone uses it anymore, at least on modern projects. Moving your COBOL app to Android is fraught with unknown-unknown risk. Something that's been around 2-3 years is generally much lower risk than something which came out last year, and a good 10 years drives risks down further most of the time, but not always. It depends on whether people seem happy with it. Mongo seemed like a really good idea for the first few years, until people figured out (1) it had wonky performance issues, (2) it was really hard to query for some types of queries, and (3) what was the problem with PostgreSQL again (it seems to do JSON pretty well too!)?

Things change too. Java was the bedrock, stable, safe choice. It wasn't the fastest to code in, it was a bit clunky, but it was THE safe choice, and enterprises flocked to it. That is, until Sun died, Oracle happened, and litigation+monetization kicked up to try to treat Java as a cash cow.

The flip side is why would you use it? When I was building an app a while back, I chose React although React Native had just come out at that point. It let me build the app once, and run on web, Android, and iOS, instead of 3 times. I figured cost savings of building and maintaining one codebase outweighed the risks. On the other hand, in most cases, the upsides of switching away from Python -- now three decades old -- are usually negligible, so with the exception of a specific need (run-anywhere, above), I almost never pick something different.

And the final piece is code complexity, abstraction, and modularity. I don't feel bad adapting new numerical algorithms. It's usually a few hundred lines of self-contained code. If a better algorithm comes out, I might swap it out anyway. On the other hand, a programming language or framework is a lifetime commitment.

You work through all the risks and upsides, figuring maintenance is 90% of the cost, and you sometimes end up around the rules-of-thumb everyone gave. But not always.

The trick is to learn probability. It gives a good mental framework for estimating expected costs and benefits. You don't usually do this explicitly with equations (what's the probability-Oracle-screws-us times costs-of-Oracle-screwing-us versus cost of upgrading to Python?), but it gives a language to think about risks.
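A toy version of that back-of-the-envelope calculation, with completely made-up numbers just to show the shape of it:

    // All numbers are hypothetical - the point is the structure, not the values.
    const pOracleScrewsUs = 0.3;           // subjective probability over the planning horizon
    const costIfScrewed = 500_000;         // audits, support contracts, forced rewrite
    const costOfMigratingNow = 150_000;    // engineer time to move off the risky dependency

    const expectedCostOfStaying = pOracleScrewsUs * costIfScrewed;  // 150,000
    // Roughly a wash here, so softer factors (team appetite, roadmap) break the tie.
    console.log({ expectedCostOfStaying, costOfMigratingNow });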


Oracle created a brand new market in Java support contracts which didn't exist before, so that they could enter it and make a buck (wherein FUD is a standard sales tactic for them). They probably viewed their position on the OpenJDK as subsidising a public good, which in general is slightly out of character for Oracle.

Most enterprise vendors have, or will soon have, comparable products for sale. My employers have Pivotal Spring Runtime[0]. You can also get OpenJDK coverage from Red Hat[1], Amazon[2], Azul[3] and so on.

Incidentally I resent that I sometimes wind up defending Oracle's decisions. I think it was globally suboptimal but I can understand their reasoning.

[0] https://pivotal.io/pivotal-spring-runtime

[1] https://access.redhat.com/articles/1299013

[2] https://aws.amazon.com/corretto/

[3] https://www.azul.com/products/zulu-enterprise/


Sun also used to sell Java support contracts.

By the time they went under, Java 1.2 up to Java 5 were only available under support contracts for production deployment.

Somehow Oracle hate ends up excusing Sun for exactly the same practices.


You know, it's the same as with financial advice. Good financial advice is good.... except if absolutely everyone applies it, then it becomes a disaster. Fortunately, there's no risk of that happening.

Same here. No matter what you do, leave others to try the cool new stuff & get burned by it & work to fix it (when/if possible). Stay informed, but don't be an early-adopter. It's sound advice - though it wouldn't be if everyone applied it. Fortunately, there's no risk of that happening.


To quote Terry Pratchett: The second rat gets the cheese.


2-3 years after you read it on Hackernews is a good rule of thumb.


You change when the new tech becomes boring. Boring indicates well known, reliable, and efficient.

Play with the cool new thing in your R&D time. Stick with tried and tested in your implementation time. That's the difference between hacking and engineering.


Well you can sit down and do the maths; you mention Cobol, which you can actually map to the cost and availability of developers. The cost of that technological choice just keeps growing and growing. You can compare that to the cost of converting it to e.g. Java (and multiply that cost by 2-5x because it's very hard to make an accurate guess).

This goes for all technological choices. The cost is not static, but varies over time depending on market forces.


If everybody stayed with the boring tech, cobol developers would be abundant and cheap.


In 2016, Software AG announced that Adabas and Natural would be supported through the year 2050 and beyond. I'm not sure MongoDB will be there in 2050.


COBOL/CICS was supplanted mostly because mainframes were supplanted: smaller organizations that wanted to use computer technology couldn't afford a mainframe. Companies could lease time, but I don't remember that being the norm.

Minis and then Unix systems allowed us to develop systems with newer technology. Wintel systems expanded it further.

My point is that the new technologies came about via need: it allowed more people to utilize computer technology to solve problems. As needs change, we'll continue to see an evolution of technology to meet the needs.


You never change; at some point a competitor arises that chose a different technology, and it kills you.


Competitors don't kill you because they chose a new tech. They kill you because they can either:

- solve a new problem

- solve an existing problem better

- solve an existing problem cheaper

A new tech MAY allow that, and it MAY be used successfully toward that objective, but even then, the tech is hardly the core of it in most cases. Not saying it does not happen, but there is much more at play.


In my experience this all boils down to the fact that usually the problem is not the chosen technology stack but the missing craftsmanship when the thing was built the first time.

Then you either end up refactoring the whole setup for years (which is usually expensive and slows down business development velocity) or rewriting it from scratch (or as a new implementation next to the old one).

If the original implementation had been sanely made, then building new features on top of it (or partial replacements, or microservices, etc.) wouldn't be that big an issue. But usually these beasts are more like Godzilla-level monoliths with bad architecture, so it's probably easier to rewrite the whole thing.


Adabas brought back memories. And now when I see all the hype about NoSQL, I remember Shirley Bassey's fantastic song - 'History Repeating'.


Is NoSQL even still hyped? I thought the hype cycle had moved on to NewSQL at this point.


Probably when there isn't a pipeline of people across the skill spectrum who can understand, maintain and deploy that tech.

If you can't find both a senior and a junior who can use the tech you need to perform the business task, then you might be too early, or it might be time to think about changing.

The difference in your requirements for juniors and seniors probably tells you about your potential rate of change. If you're based on a recent JS framework, those two will be closer together than a finance org running on COBOL.


Author mentions Clojure a few times as an example of shiny-thing. Ironically, it's one of the few languages that I can reliably do in the browser, on the backend (on real servers or say, Lambdas), and build native binaries for -- which addresses the later "many tools" problem better than a "boring" programming language like Ruby. Unless you don't care about frontend code I guess :-)

(Overall this talk is fine! I think the problem/technology set illustration is particularly useful. I run Latacora's Cloud practice and we own all production infrastructure and "no, more boring, boringer than that, keep going" is a good way to describe what that looks like. We deploy half a dozen projects with exactly the same zero config storage and in ECS Fargate because I'm not managing servers.)


But:

- good luck finding a Clojure programmer if your current one quits

- good luck finding answers for your exotic bug/performance issue, etc.

Code is a liability; it's much more than language/VM/compiler features.


I've run multiple orgs that are largely or almost exclusively working in Clojure(Script) and trained over a dozen folks to be proficient Clojure programmers, and have not found any of those to be a problem. For example: you mentioned debugging serious performance problems, but the JVM has some of the most advanced instrumentation of any platform.

This is actually a point subtly made in the talk: your real production problems are probably going to be a lot more subtle than "oh, it's Python's fault". It's "this table ends up getting VACUUMd a lot and a minor version change made the disk size required 3% bigger and that combined with natural growth and a logging bug suddenly caused a catastrophic failure meaning everything started swapping". Yes, one point is what tool you use (a shiny new graph database is likely to be fundamentally less operationally mature than Postgres) but more important is your collective expertise in your toolbox, because a production P0 incident is a bad time to learn what autovacuum_naptime is.

For example: those fancy performance debuggers probably don't work on your Lambda runtime. I don't see that as a Clojure problem, because you would've had the same thing if you wrote it in Java+Dropwizard or Flask+Python or Go (which are presumably in the boring category).

Is the flip side of that argument that you should only write things in PHP, because that is what the market has decided where programmers are the most fungible?


As a counterpoint, I have had a developer who had written production ClojureScript tell me it was the worst of all worlds - it doesn't abstract away issues of the DOM, and yet you still have to debug what is happening in JS and translate it over to Clojure.

Another thing I noticed is that most developers who had to touch Clojure in my org all pretty much didn't like it at all.


Did you use Clojurescript Dev Tools? https://github.com/binaryage/cljs-devtools

Whilst ClojureScript may not solve all of your JS problems, re-frame is certainly an advance over React + Redux. In fact David Nolen and ClojureScript have had a big influence on the development of React.js since Pete Hunt first introduced it.


> I have had a developer who had wrote production ClojureScript tell me it was the worst of all worlds

You know how it sounds? It sounds like: "yeah I like rock music, but I think I don't like this Pink Floyd band. No, no. I never heard any of them on the radio, but Sarah once tried singing their song to me on the phone, and you know what? It was horrible. No I don't think I'm gonna ever listen to them. Not my style."

> Another thing I noticed is that most developers who had to touch Clojure in my org all pretty much didn't like it at all.

Most people who actually try Clojure do tend to like it. There are certain annoyances as a beginner you'd have to deal with (parentheses, startup time, error messages), but once you learn it a bit - they all become really insignificant. And what's there not to like? Clojure is extremely practical, has a nice core library, it is very predictable and stable. Yeah, it is not a silver bullet but it is for sure much better than Javascript and Java (and many other popular languages).


ClojureScript compiles down to JavaScript, so yes, it is entirely possible to spell the same buggy jQuery site you would in regular JavaScript but with the parentheses on the outside and it will solve no problems for you. I don’t think it’s reasonable to say eg reagent does not solve any DOM problems. Similarly, CLJS uses the same source map standard as every other JS targeting language, so you can debug in CLJS with the tools you already use just fine.

Their negative experience is based on facts not in evidence so I can’t really comment specifically. The StackOverflow community survey results suggest their experience is at least unusual.


ClojureScript on its own doesn't give you much for web programming and isn't worth it over JavaScript. It shines when combined with a React wrapper.


> not worth it over Javascript

It absolutely is. As someone who's dealt with JS/TS/CoffeeScript and a bunch of other "scripts" that compile or transpile to JavaScript for a very long time, I can honestly say: ClojureScript today is the only truly production-ready, viable alt-JS language.

And you don't have to use it with React. React with immutable data structures just makes sense. Once it stops making sense for any reason, the ClojureScript community will move on to using something else.


The supply of good Clojure developers far outstrips the demand. There are a large number of us who do Java at our day jobs, and hack Clojure on our hobby projects.

I can't imagine why anyone would have any difficulty finding and hiring good Clojure (or really, any functional programming language) programmers.


> good luck finding a Clojure programmer

Why does nobody ever say: "good luck finding good JavaScript developers"? Or good Java programmers?

Why has it become the default assumption that every programmer is a JavaScript expert?

In my experience, hiring JavaScript developers is hard - most applicants don't even know the difference between null and undefined, or can't explain how the prototype chain works. And even when you find someone with good experience, there's always an on-boarding period: "we're using OntagularJS 1.5 - oh, you've heard about it but never used it? It is similar to BrokenboneJS, but with slight differences." And then they have to learn your conventions, debate the enabled/disabled rules in your linter, etc.

Whereas if you specifically try to hire programmers who have previously tried and experimented with Clojure (even if you're not using it), your chances of getting better candidates increase dramatically.

If I had to choose between hiring five JS/TS/PHP/Java/C#/Go/etc. developers at $100K/year each or only three Clojure developers at $200K/year each - I would go with the latter. The ROI in that case would be much, much higher. Yes, code is a liability - one messy, inexperienced coder can do so much damage that it may take months to fix.


Haha, spot on! The emphasis on learning frameworks instead of learning the ins-and-outs and quirks of Javascript can't have been good either. It's interesting being an interviewing candidate when that emphasis shines through. Knowing Javascript, getting up to speed on whatever framework is used on the job is trivial in comparison.


I have found debugging in Clojure to be much easier, as you can test out each function in the REPL and Clojure code is stateless. Debugging/refactoring a complex business rule in Java or JavaScript is painful because developers end up writing humongous methods with a generous sprinkling of state.


As the other commenters said, I think you're very wrong.

My anecdotal experience with posting a Clojure job online (especially in the Clojurians Slack) has been several decent applicants applying within a matter of days.


Generally, "shiny-things" have some sort of appeal over the "boring" technology, or nobody would choose them at all.

One of the places I'd say they are appropriate are in places where you have some problem where some "shiny-thing" stands head and shoulders above the "boring technology" in some particular and you can ram home those advantages well enough to overcome the other issues. For instance, if you've got a server where you're going to have several thousand clients opening a socket connection and babbling away at each other in some custom manner, starting with Erlang/Elixir is quite possibly a very good idea, because none of the "boring" tech is anywhere near as good at that, even today.

But I do agree that when doing the cost/benefit analyses of these choices, far too many developers plop an unprofessional amount of weight on the "benefit" side that amounts to "it's fun to play with the pretty-shiny". (I'm still recovering from some old "NoSQL" decisions made in a context where a relational database would have been more than adequate, backed at most by something that was well tested even at the time, like memcached, instead of a combination of what are now completely defunct and semi-defunct databases.)


> I'm still recovering from some old "NoSQL" decisions made in a context where a relational database would have been more than adequate

People jumped to NoSQL because of how awful a relational database actually is to operate. I guess it's easy to forget.


True, but in this context, they had a relational database, and the relational database is still there, and isn't going anywhere, so not a big win there.

Plus, the real problem is that the correct solution was something that was already boring at the time, memcached. If all you literally are using a "NoSQL" database like MongoDB for is a key-value store and literally nothing else, you don't need NoSQL, you just need a boring key-value store.


To operate? Do you mean for a developer to work with? Because most RDBMS systems I have worked with are much easier on the operator than most NoSQL systems for HA/DR.


No, not to work with a running system someone else operates, but to run it in production yourself.


Doesn't that rather depend on which SQL database you are using?


Indeed it does. There's a huge difference in ease of maintenance in production between (e.g.) PostgreSQL and MS SQL Server.


I'm only part way through the presentation, but I loved this line.

"And when people succumb to this instinct they tell themselves that they’re giving developers freedom. And sure, it is freedom, but it's a very narrow definition of what freedom is."

Yup. You get the freedom to choose a language and/or database with unknowns but you lose the freedom to leave work at a reasonable hour to see your family, pursue a hobby, or just veg out. Experimenting with new (to your organization) languages and infrastructure pieces is something you do between projects, not during.

It reminds me of something a jazz musician said in an interview, "The audience doesn't pay to see you rehearse. Rehearsal is done on your own time away from audiences." Rehearsals are where you try new techniques, harmonies, instruments, etc.


I'm uncomfortable with this rehearsal analogy. I'd rather have a company pay me to learn new things than spend my free time coding when I could be seeing my family, pursuing a hobby, or vegging out. From my employer's perspective, I agree that boring is usually better, but as an individual, I'd rather learn on the job than having to feel the need to "rehearse" for my job.


I didn't mean no learning on the job. The time to learn is between projects. If your employer keeps people on projects constantly, then learning happens between tasks.


If you work on a Node backend with JavaScript, where does the idea of switching to TypeScript fall in this discussion? Is it a boring technology, or a shiny new technology? It is still the same tech in Node, whose benefits and pitfalls you already know, but it isn't as if there is no overhead to start consuming TypeScript if you haven't used it before.

My hope, for those who agree with the author's premise, is that it still counts as the same "boring" technology and that more people switch to it. Over the last two years since we switched to TypeScript, it has gotten significantly more valuable as more and more people have adopted it: a majority of the dependencies I take on now have type definitions, either provided via DefinitelyTyped or included with the package.


A wonderful thing about TypeScript is that if you decide you don't like it or need it, you still have recognizable and refactorable JavaScript. On the other hand, if you have a ton of relational data and get excited about the latest nosql DB, you're going to have a hell of a time unravelling that mess.


I’d say talk about it with the team, and see if there’s enough buy-in from the existing team and enough awareness of the extra stuff to learn for both current and future team members. Then try it incrementally on some of the most hairy or type-bug-ridden parts of the codebase (so you can quickly prove utility) or on some smaller / newer parts of the code (so you can quickly prove compatibility). Then a wider rollout should build up its own momentum if it’s anywhere near as helpful as you’re hoping it’s going to be.


I would go for it, but just the fact that you're asking shows that you're less susceptible to injecting needless technology. TypeScript has also been used for years, and it has a place as JavaScript is being used for more complex products than it was intended for.

I wrote a short bit in "Handling Hype" here arguing that you need to dig into your problem, assess claims, and weigh tradeoffs:

https://www.nemil.com/mongo/1.html


Per the presentation, you would need to weigh the benefits/costs. Since TypeScript is a superset of JavaScript, I would argue the cost of adding is very low. The benefits will be naturally high due to type definitions reducing bugs and cognitive overhead.


It fits in the ruby part here: http://boringtechnology.club/#33

It would be something that you are adding to the stack. Yes, you are intending to replace something, but in practice there will still be legacy nodejs hanging around.

But crucially this one: http://boringtechnology.club/#43

If you are spending time (and therefore money) changing from one language to another, you are not making features for the business. Yes, you might get more speed re-implementing features, but it's almost never going to make up for the hit you took in porting everything.


It ain't just about speed, though. If TypeScript is able to prevent entire classes of bugs, then that lowers the ongoing maintenance costs relative to the costs of maintaining the non-TypeScript version. It'll also make implementing new features easier and faster (and therefore cheaper) in the long run specifically because of that avoidance of bugs interfering with delivery of those features. Both of those factors can (and often do) easily outweigh the upfront cost of porting everything.

Even speed alone, though, can help substantially reduce other more tangible costs; if your TypeScript backend is faster than your non-TypeScript backend, then that translates to needing less server resources to achieve the same result (and/or being able to handle heavier loads without needing to upgrade or expand your server resources).


I see those arguments but:

o the new feature speed is only gained after a complete re-write

o in 80% of businesses people cost >> than hosting

o re-writing always introduces more bugs

The only time that I would agree to something like this is to get all my teams onto the same platform. Again, that's not a technical decision though.


ES6 to TS is more refactoring than a re-write. Just start with the core library and some TS linting and fix things as you go. The time it takes to 80-20 a microservice project into TS is usually less than it takes to hunt down a confused type bug (i.e. someone thought this was an array but it's a string). This is not a "re-write":

    // before (plain ES6) - "handler" is a hypothetical name, just to make the snippet valid code
    function handler(param1, param2) {}

    // after (TypeScript) - same function, annotated
    function handler(param1: string, param2: number) {}
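And the payoff for the "thought this was an array but it's a string" case above, sketched with a hypothetical helper:

    function formatTags(tags: string[]): string {
      return tags.join(", ");
    }

    // TypeScript: compile-time error, something like
    // "Argument of type 'string' is not assignable to parameter of type 'string[]'".
    // Plain ES6: no warning here; it only fails at runtime with
    // "tags.join is not a function".
    formatTags("release, hotfix");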


See that slide that shows the bipartite graph of Problems on the left and Technology on the right. Typescript would be an extra edge in that graph and thus would impact the equation in the subsequent slides. So it would be a shiny new technology.


Can you elaborate? I don't quite understand.

(I also think TypeScript is a great example of a category breaker here since it's a lot closer to a linter than a database. Sure, you have a new compiler stage so it's not quite free :-))


Like many developers I have a strong desire to work with more exciting technology, even when there's no business case for it. Strangely enough, I've found the best outlet for that energy (other than personal projects when I get free time) is configuring Emacs.

It's less disruptive than putting a new language or database into production, and makes me significantly more content to work with a boring stack. Plus I get a more comfortable computing environment out of it.

Maybe someday I'll get Emacs just the way I like it, and need to find something else to distract me from chasing the new and shiny, but I kind of doubt it.


I am the same way. I can tell when my main job gets boring because I start to tinker with emacs more and it provides that intellectual stimulus I need. Not to mention helping optimize away some of the more rote parts of my job.


I haven't used emacs so why is configuring it so interesting?


> "You can’t do poetry if you’re worried about what you’re going to eat today"

Well let me just add this:

> "Shortly after his release from the prison at Meung-sur-Loire he was arrested, in 1462, for robbery and detained at the Châtelet in Paris. He was freed on November 7 but was in prison the following year for his part in a brawl in the rue de la Parcheminerie. This time he was condemned to be pendu et etranglé (“hanged and strangled”). While under the sentence of death he wrote his superb “Ballade des pendus,” or “L’Épitaphe Villon”, in which he imagines himself hanging on the scaffold, his body rotting, and he makes a plea to God against the “justice” of men." - I wouldn't call that the highest step on the pyramid


Somewhat playing devil's advocate, but if everyone joins that club, we keep the status quo and there's no more progress. If everyone had joined this club in the '50s/'60s we'd still be writing assembly. It's the guys pushing new shiny things that allow our domain to move forward; we just need to accept that 90% of the shiny new things eventually turn out to be crap. It's about the other 10%.


You can argue there has been a lot of progress in web development since 2000, and you can argue there hasn't. I think we ran a lot of circles around ourselves, creating new frameworks that do the same stuff as the ones before them. The real progress was in browsers and CSS, which allowed things like access to device cameras/inputs, RTC technologies and adaptable layouts.


At the end of his spiel he talks about how to adopt new technologies intelligently. He's not saying that no one ever should, but that you need to go about it a certain way to reduce risk.


We are all writing assembly. There are just increasingly powerful levers between us and the assembly. That's why sticking with a technology for so long is powerful - the advances in tooling keep compounding. Imagine if we switched to a completely different paradigm every few years and had to start compiler research over from scratch.


I am in complete agreement with this. (Sing it, brother!)

But,

"My friend Andrew wears the same brand of black shirt every day. He thinks that if he conserves the brainpower it would take to pick something to wear, he’ll bank it and be able to use it later for something else. [...] I like to think about it like this. Let’s say that we all get a limited number of innovation tokens to spend. [...] These represent our limited capacity to do something creative, or weird, or hard. We really don’t have that many of these to allocate. Early on in a company’s life, we get like maybe three. Not too many more than that."

I don't buy this. If you try to save up your brainpower, innovation tokens, intellectual capital, or whatever, what you'll find when you come to try to use it is that you don't have it. Sort of like if you try to save up your upper body strength for a couple of years before you try to climb El Capitan. But I am just saying that is a poor example.


The anecdote sounds like Andrew is trying to avoid decision fatigue, which is somewhat related but not really what the author thinks it means.


The meme that you can 'save up' or 'train' your mental reserves has seen a bit of a come-back lately. However, the scientific evidence of such a thing is scant to none. In general, people that claim you can train your focus and eureka-moments are not looking at the evidence. Brett at AoM has a good intro article on it here: https://www.artofmanliness.com/articles/motivation-over-disc...

Tl;DR: Be disciplined, not motivated. Don't worry too much as well.


It is not just about saving "brain power". It is also about saving time.


I am more and more resonating with these feelings, for the good and bad. In fact, my own writing reflects that over the years: https://joaodlf.com/ , I started my blog with information regarding technologies like Spark, Cassandra, Go, etc, and I am now writing about Postgres and Python web dev and Celery.

Boring is boring but it simplifies so much. I still love new technology, but I am now much more likely to try it out extensively and privately before even dreaming of bringing it to my company. An example: A few years ago I got very excited about Go, I started introducing it at work and solved some difficult issues with it. I could not get the rest of the developers to gain an interest in the language, though. Effectively, no one else but me could touch any of that code, this is not good for business. I have now revamped that same code, using old technology, and learned a lot along the way - so much so that I now feel like this old tech behaves as nicely as the more complicated, flashy, new language implementation.


I literally had this conversation yesterday as a "why I'm not sure web development is for me, long-term."

The level of familiarity that can be achieved with a tool in the timeframe allotted before the "oh no, we're behind-the-times!" fervor strikes just doesn't seem sufficient to me. I'll have co-workers coming to me with the tool-specific roadblocks they're hitting, and have reached the point where I can easily say "yeah, I've been there, you'd never guess but the problem is with [X], just do [Y]." And just as I'm getting really efficient, I've got to throw it all out because that tool isn't cool anymore, and nobody wants to work with not-cool tools, and we've all got our resumes to worry about.

I wonder if there are some cultural changes that could help mitigate this. If there really is an endorphin rush when working with a fancy new tool, why is that there and what can we do to replace that? Is it resume-building, is it enjoyment of that stage of knowing-nothing, is it happiness when you easily do that one thing that was annoying the shit out of you with the old tool?

Can you pick apart why you were excited about and wanted to adopt Go?



The phrase "Bleeding edge" was coined for a reason...

The HN audience typically lives at the bleeding edge and assumes the world does too.


That's why I keep choosing Rails for my new projects. Boring as hell. Gets the job done.


I choose boring technology because it less likely to waste my time. If I choose exciting technology, I will inevitably run into a cryptic error, even if I follow "Getting Started" perfectly, and it will be either difficult or impractical to resolve properly. That happens to me all the time, and I imagine it's due to insufficient testing and developers relying too heavily on their local environment.

Better to work on something boring, but was carefully constructed over time and doesn't require 1/10 the dependencies of today's newfangled thing.


I proudly used "boring" to describe a massive simplification of my work's architecture that I have been working on for the last few years. It has been a successful culling of tech down to the bare minimum (as always- still a work in progress).

Anyway my boss got so so offended and angry that I would use the word "boring". I spent the rest of the day trying to explain it and calm him down.

Anyway use caution with the word "boring". It's more toxic than nuance would suggest.


I don't think the word is the toxic one in your story.


It's fun to dive in and "modernize all the things" - and certainly you can learn a lot in the process (perhaps at the expense of the business).

I don't believe that the only reasonable alternative to that is to "choose boring tech" though. Like so many things in life, it's very contextual, and this is where you need someone with a lot of background and war wounds who can discern valuable improvements from yet-another-CADT-library-rewrite. Sometimes they're hard to differentiate from an old crusty cynic though.

I'm resistant to either pole - I think there's often a 'middle way', where you can leverage proven, high-quality software (think PostgreSQL) while still benefiting from modern approaches (pragmatic functional idioms, etc). I'm old enough to remember when the industry was generally more entrenched in ossified approaches - it definitely has its downsides too!

The hard part is when a (supposedly) proven technology turns out to be a turd. Much of the job of choosing good tech appears to be using heuristics to avoid low quality tech with good marketing. I'd love to have better tools to guide that problem!


Great points. I was reminded about a period of close to ten years when “my stack” was Apache Tomcat with servlets and JSP. I would handle background tasks in threads initialized in servlet init methods. For me it was a universal platform for anything I was required to develop and deploy.

I was in Chicago last month and had breakfast with an old customer (I had never met him in person even though I did a ton of work for him between 2000 and 2006). One of my tasks for him was writing a SharePoint clone that ran for five years totally unattended by his ops team. After five years they ran out of disk space and did a quick migration to a larger server. My customer thought that in five years they had never restarted the system or rebooted the server (yikes! no security updates?).


Whenever somebody brings up this "unknown unknowns" thing, I'm reminded again why we ended up in such a shitstorm in Iraq:

Rumsfeld listed three out of the four possible combinations of "known" and "unknown" that he had his minions thinking about for him.

But, evidently, neither he nor his minions thought of or spent any time on the fourth one: unknown knowns. Those are the ones you think you know, but you're wrong. And that's what gnawed 5 trillion dollars out of our guts (and killed hundreds of thousands of men, women, and children) over the N years since.

It is not a new observation. Usually Mark Twain gets credit for "It ain't what you don't know that gets you, it's what you think you know but ain't so."


(1) Discussion from 2015: https://news.ycombinator.com/item?id=9291215

(2) «You can't pay people enough to carefully debug boring boilerplate code. I've tried.» (Yaron Minsky; via [1]).

"Boring" is a wrong word, I think. Choose hassle-free technology that is known to work without surprises. Don't chose simplistic technology that produces busywork, even if it's rock-solid and bullet-proof. Choose a reliable technology that operates on the right level of abstraction, or closest to it.

[1]: https://news.ycombinator.com/item?id=9161366#9161917


One thing I don't see mentioned is: why are people not honest with themselves?

We're using hyper-scalable databases, "serverless", layers upon layers of caching, microservices with dozens upon dozens of interconnects, all built out from the start for the odd chance that the thing we build will be the next unicorn startup.

But what is the reality? Many fail before the tech stack ever becomes a bottleneck and those who do not fail rarely reach the scale that requires the absurdities we use today.

A good old LAMP stack on a decent dedicated server is, honestly, way more than enough for 99.9% of startups and doesn't need highly expensive dedicated AWS and other specialists to get up and running in nearly no time.


Such strongly opinionated statements should be taken with caution. In our profession context is key and this theory does not always apply in every context.

Imagine if the Netflix guys had been told to stay boring and not invent new shiny things? Of course, not everyone is at that scale.

I do not want to use the latest shiny tech, but I do not want it to be boring either. We are confusing KISS and boring here. A company I used to work at was running a job server at 90% utilization all the time, and it even doubled as a jump box; that is simple and boring, but it's also plain stupid. Invest some time and make it proper.

Maybe when I am 50 years old I will want a boring setup, but until then no. I want to be engaged and active in what I do.


One thing is to "use boring technology" and a different thing is to "build boring products"... I can build shiny new things with boring technology.


By new shiny things in the Netflix scope I meant things like sidecars, not the service itself.


> Imagine if the Netflix guys were told to stay boring and not invent new shiny things?

For what it's worth, most of what Netflix is doing is delivering video files. This in itself does not require rocket science; a matching amount of Squid or any other caching reverse proxy is sufficient - and CDNs at this scale have been around for far longer than Netflix has.


I would change that last phrase, "I want to be engaged and active in what I do" to "I want to build something that solves user needs and ships and is maintainable." Or at least a mix between the two.


I would not change it, no. Let's say you are working on solving world hunger, but you have to press one button every 24 seconds for 8 hours. Yes, the goal is great, but I wouldn't be able to do it for more than a week.

I need to be engaged and active in what I do 40 hours a week in which 40% is taken by the Gov as taxes.


I wanted to disagree with this article, but it's all perfectly reasonable. It makes a lot of sense and it's well written. It's also not an absolutist viewpoint, which makes it even more reasonable.

The danger I see here is that when the places I worked in chose "boring, well-tested" technologies they picked COBOL or ColdFusion. It's not true that I would be happy to ship software that worked if done using these tools. If I had to work with them again I'd shoot myself in the... face. I don't think I ever learned anything valuable from them, except maybe "sometimes the tasks you're given will be boring, and that's how life goes".


This is great and I agree 100%.

The problem we faced is hiring people. We found that in order to attract talent, we had to let them use the shiny new technology. Otherwise it would be hard to attract anyone.


I think the real punchline there is just "hiring is hard", because elsewhere in the comments someone has made the argument that you shouldn't use shiny technology because it's too hard to find people (specifically, you'd be screwed if that one person left). Maybe that's a company maturity thing: fancy tech is good for attracting talent early, boring tech is good for attracting a sufficiently large ops team later.


Yes, this is the one addendum to the presentation. An organization must periodically refresh itself in order to continue to attract talent. As an organization ages, it acquires more and more code to the point that rewriting becomes unfeasible except on epochal timescales. Thus some natural expansion becomes inevitable. The key is to contain this expansion.


I can see that. It's a shame though. Personally, I prefer to work on interesting projects. The tech stack used is an afterthought for me.


Interesting… most of the motivation to use new tech is that it enables me to solve problems more efficiently. When you use older, but more established technology, its limitations still hold back the problem solving a bit. This is especially hard to take when I know there are better solutions out there. And it also costs more time & money.

There probably is a pain point somewhere where it makes sense to switch. I did like the point about reducing the cost of new technology – that may quite often be easier said than done, though.


But the pain points of older technology are known whereas those of newer tech aren’t yet.


>> "My friend Andrew wears the same brand of black shirt every day. He thinks that if he conserves the brainpower it would take to pick something to wear, he’ll bank it and be able to use it later for something else."

I do something similar, but in a way that keeps it interesting. I always pull a shirt off the right and move the hanger to the left for clean ones out of the dryer. Pants are the same. Stacked stuff always comes off the top. Clean ones go on the bottom.

I also went with Blogger in 2019 rather than fuss over blog engines. It's exportable, free, runs under your own domain, has few restrictions on front-end customization, and is unlikely to shut down without warning. At worst, I defer thinking about it until later, when I have a better sense of my needs.

Related to that, I started on a PHP-based static blog engine. I reasoned that PHP was made for smushing data and HTML together, so it was perfect!

https://github.com/CondensedBrain/blogengine

But I also saw a future where I had to maintain all the bits and pieces. I really just wanted a blog, not a blogging engine. I hadn't read Choose Boring Technology yet, but it affirms the thinking that led me to pick Blogger over a custom engine and other pre-made options.


Dan's “re-use the technologies you already have deployed instead of adding redundant technologies” point is what resonates most with me.

> The interesting thing here is that none of the tools that you pick may be the “right tool” for any given job. But they can still be the right choice for the total set of jobs.

Even if that "one choice" isn't totally boring, at least you only have one thing to figure out all the kinks of and support.


>The grim paradox of this law of software is that you should probably be using the tool that you hate the most. You hate it because you know the most about it.

I think I'm the opposite way. I tend to like technologies when I know all the quirks, e.g. CSS:

"Well of course you have to make the parent element positioned in order to get the child's absolute positioning to work, how else would you do it?"


While I see the point, I also see that all "boring" technologies were new kids on the block at some point, and often succeeded at replacing the previous "boring" thing. So maybe the proper advice should sound less absolute. Something like: be cautious, and don't discard less exciting technologies just because they are older.


That's entirely the text of the article. But your headline is a lot less punchy than his.


This is a fantastic piece! I’ve been advocating for “complexity points” for years now. MBSA (make boring sexy again!)


I agree pretty wholeheartedly with using tried-and-true tools instead of chasing the new shiny.

I somewhat disagree, however, with the idea that a diversity of tools in use always necessarily leads to higher costs. Sure, using the right tool for each job increases the number of tools you have to maintain, but using the wrong tool for a job increases the maintenance costs of that tool while also introducing an opportunity cost from the resulting inefficiencies of using that tool for something it's not designed or optimized to do.

As an analogy, you could use a screwdriver to turn screws or to pry something apart. Sure, this is cheaper than buying an actual prybar to use for prying, and you only have to have one tool in your toolbox instead of two, but this is a surefire way to break your screwdriver in a warranty-violating way.


It seems like a lot of this is just the tension between what management wants to be focusing on (big picture product or business-level questions) vs what it makes sense for individual engineers to be focusing their efforts on.

As an individual contributor, it's vitally important to keep up with new technologies in order to remain employable. Maybe that isn't in fact the best strategy for your current job, but it's crucial to getting your next job.

Put another way, there are probably a lot of businesses that sensibly standardized on ColdFusion in the early 2000s. Engineers who buckled down and focused on "So what’s your company trying to do?" using CFML could easily find themselves in a really bad spot--unless they were also sacrificing their home life to teach themselves something else.


If it's vitally important to keep up with new technologies in order to remain employable, why are the 7 most employable programming languages (C, C++, Java, PHP, JS, Ruby, Python) all roughly 25 years old or more? The job market says the opposite: new programming languages hardly get a look-in.


Programming languages alone aren't the only things you have to know in order to be a professional developer. It's also important to keep up with the ecosystem around that language, and industry-wide trends in application design, testing and deployment/ops.

Good luck getting a Javascript job if all you know is jQuery, and you've never touched Webpack/React/whatever else JS devs need these days. Maybe JS is an extreme case, but some version of that same phenomenon exists for all the languages you mentioned.


I get so excited when I see a page that renders immediately with server side markup, it’s a rare old treat.


An older piece that I loved is Dan's point on just allowing a certain number of technology tokens in any project:

https://mcfunley.com/choose-boring-technology

I think one big part of the mistake is using Hacker News, Reddit, and conferences as a sign of what companies are actually using. Instead, I liken it to a bazaar:

https://www.nemil.com/on-software-engineering/beware-enginee...

Oh, and remember marketing is not reality, even though marketers try awfully hard:

https://www.nemil.com/mongo/3.html


> But anyway, despite that, I’m mostly not a tool-obsessed engineer.

I didn't think there were many of us out there ;)

I like cool new tools, but my obsession is solving the problem in the most effective manner. Too often I see people starting with the tool and then trying to shape the problem to fit the tool.


Working on new hardware technology, it's really interesting for me to think about OP's call to only use "proven technology," but how do you get a technology to the proven state if everyone is only using previously proven technology? You can force it if you control a platform (e.g. stuff wireless hotspots everywhere and have wifi capability attached to new laptops), but that's pretty rare in hardware. https://www.siliconvalleywatcher.com/intels-centrino-and-how...


Related: Lateral Thinking with Withered Technology https://en.wikipedia.org/wiki/Gunpei_Yokoi


As a meta observation, I noticed this was submitted by user luu, whose website also has a blog post with a similar theme ("boring languages"[1]). Therefore, I wonder if the "boring vs. exciting" advice depends somewhat on personality. I.e., if a person has a tendency to prefer conservative technology, external advice that advocates "boring tech" will resonate with that person.

I think choosing boring technology makes sense but I'll give a contrarian view anyway.

A lot of people like new and shiny things because it's interesting and spurs the imagination for future work.

For example, in 1974 when the new Altair 8800 made the cover of Popular Electronics[2], an excited Paul Allen showed the magazine to a 19-year-old Bill Gates. The "boring tech" advocates would tell them they're wasting their time with the "new fad of microcomputers" and they should look at boring IBM 360 mainframes instead. Those IBM machines had been around since the 1960s and were more proven. The problem is that Bill and Paul weren't interested in those mainframes. The choice wasn't really proven/unproven; it was interested/uninterested.

Similar situation with WhatsApp in 2009. Apple had just released the iPhone SDK in 2008. The older tech was PalmOS, Windows CE/Windows Mobile, and Symbian. Those legacy mobile operating systems had been around since the 1990s. It didn't matter to Jan Koum that the new iPhone didn't have a track record. It was the more interesting platform to work on, and he made a bet to write an app for it.

From what I read, Google's first AdWords server in 2000 was built on MySQL. MySQL was released in 1995, so using a relatively new 5-year-old technology may have been more risky than picking traditional Oracle RDBMS, which had been around since the late 1970s.

It's ok to choose new and unproven tech if you have a coherent thesis of why it makes sense to try it. You also have to be willing to pay the price if your bet turns out to be wrong. You can also deliberately choose boring tech in as many areas as possible so it frees up brainpower to aggressively make a bet on new unproven tech in a particular area where you think it will make a difference.

Also, your role dictates the freedom to use new and unproven tech. If you're an employee at a mature and conservative company, you'll be constrained to choose boring technology to minimize risk. (This reinforces the saying, "Nobody ever got fired for buying IBM.") On the other hand, if you're an entrepreneur, there's a good chance you'll need to take a calculated risk on new, exciting technology that is (1) unproven, (2) has non-existent documentation, and (3) does not have much tooling to make implementation easy.

At one point, even all the "boring technology" was new and interesting. In 2019, what's some new and unproven tech that we should look at?

[1] https://danluu.com/boring-languages/

[2] https://www.google.com/search?q=altair+8800+magazine+cover&s...


> For example, in 1974 when the new Altair 8800 made the cover of Popular Electronics[2], an excited Paul Allen showed the magazine to a 19-year old Bill Gates. The "boring tech" advocates would tell them they're wasting their time with the "new fad of microcomputers" and they should look at boring IBM 360 mainframes instead.

I remember reading a book around the late 90s (the book was written in the early 90s) about Microsoft's history and that is exactly the sentiment that computer people had towards microcomputers at the time.


That's a false analogy, because the micro was a category changer. It was literally the basis of entire new markets.

A lot of people saw that coming.

A lot of other people - like DEC and DG - didn't. IBM mostly didn't, but was lucky enough to have a small division that did in spite of the culture around it. (And they lost it. Too bad.)

Node, Mongo, Clojure, etc were never category changers - they were solutions made by people who like to tinker for other people to tinker with. For the sake of tinkering. Hyped into orbit with industrial quantities of hypeology.

If you're trying to Get Shit Done, they're mostly a disaster. (Clojure less than the others - but it's still not solving the problems that actually matter in a production context. Not really.)

Bottom line is micros had an obvious business benefit - low cost, ease of installation, cheap software - that was never going to be matched by the mini and mainframe markets.

Node etc don't have an obvious business benefit at all. They appeal to a certain set of developers. But for most applications they don't cut development time, minimise bugs, simplify maintenance and bug fixes, de-complexify system architecture, simplify new hire onboarding, streamline the development process, scale with minimum effort, or any of those other boring requirements that actually matter when you want to get good working software out the door.


There is a benefit of hindsight here: the micro, in the beginning, was a bit like 3D printers now. A nice toy for enthusiasts, but that's it. It took the GUI, the word processor and the spreadsheet to get them out of the toy category and into the useful category.

Mainframes etc. could also have delivered this, the main differentiator being price. IBM was dreaming of a mainframe for each city, with phone-like terminals in every company. Minitel and the internet proved the vision was right; only the price wasn't.

The category killer needed to be cheap as well as useful.

Node also has an obvious benefit: the same tech in frontend and backend, hence fewer devs needed. A.k.a. cheap, at least in the short term. History will judge us both on the long term, I guess.


> Node, Mongo, Clojure, etc were never category changers - they were solutions made by people who like to tinker for other people to tinker with

Rust is the killer micro of software development. It's all about getting rid of pointless hype and helping you "ship good working software out of the door". That's why people don't get it, and dismiss it as a "toy".


I'm not commenting on the analogy; my comment was specifically about the part I quoted, since it sounded to me like the poster assumed that's how "boring tech advocates" would react: I confirmed that this is exactly how they reacted.

Outside of that, I sit on the fence of "it depends", with a bias towards conservatism. After all, I mostly write desktop software in C and (Free) Pascal, I do not care about web development at all, I see smartphones as a neat distraction that has overall done more harm than good to the stuff I like (desktop software and UX, primarily), and of all the new languages popping up here and there I find Go the most interesting (I'd also have an interest in D, but Free Pascal provides pretty much everything I'd need from D outside of the crazy metaprogramming, which I'm not sure is a good thing in the long term).

Ok, actually I might sit a bit further towards conservatism than I initially thought :-P.


If you're trying to Get Shit Done, they're mostly a disaster. (Clojure less than the others - but it's still not solving the problems that actually matter in a production context. Not really.)

Hrm, as someone that has written lots of Clojure code over the years, including a few business critical systems, I'm wondering what leads you to believe it's not a language for getting shit done.

I can spin up a web service in just a few minutes, drop in the middleware I need, route requests, spit out JSON, talk to DBs, and in a pinch, leverage libraries from the Java ecosystem for any missing pieces to the puzzle.

Deployment, dependency management, testing, etc... are all fully solved problems, and very mature technologies underpin the pieces that matter regarding battle tested services (jetty, jdbc adapters, etc).

Not trying to be a zealot here, but I also hate to see misinformation permeating the thread.


> But for most applications they don't cut development time, minimise bugs, simplify maintenance and bug fixes, de-complexify system architecture, [...] scale with minimum effort

I honestly have yet to see a Java or C++ codebase that is better on any of these fronts than a Clojure one.

> they were solutions made by people who like to tinker for other people to tinker with. For the sake of tinkering. Hyped into orbit with industrial quantities of hypeology.

This seems to lack context of how we were doing development at the time these technologies picked up.

One should be careful not to apply the "just keep doing what we did in the 90s/80s" argument to every piece of technology not taught in academia. I've used NoSQL databases in cases where traditional RDBMS would have fallen over and exploded.


> From what I read, Google's first AdWords server in 2000 was built on MySQL. MySQL was released in 1995 so using a relatively new 5-year old technology may have been more risky than picking traditional Oracle RDBMS which had been around since the late 1970s.

Well, they could've also used Postgres/Ingres, but at the time it was way less used than MySQL and way more conservative. MySQL was probably used because by 2000 MySQL already had a huge community, way bigger than most shiny graph databases.


An ex-googler mentioned somewhere else that Oracle was the "real commercial db" in this story about the early AdWords server:

https://web.archive.org/web/20120305152003/http://eldapo.blo...


I remember MySQL being all the rage when it came out because it was "fast". I don't remember what it was compared to, but the reputation was that it was a fast database. I remember the Postgres crowd bemoaning MySQL's lack of ACID compliance (this was even before InnoDB came out) and the response was "yeah, but it's fast".


IIRC, MySQL already had built-in replication support around that time. This alone may have been a deciding factor. Postgres didn't provide built-in replication until a decade later.


I'm not sure I agree with some of the examples... For instance, since Redis can replace Memcached in addition to other chores, what would it have taken for a migration strategy to eliminate an older tech in favor of the newer one?

In the end, I understand how each tech adds costs and value... however, one shouldn't be afraid to update/change things. It's too easy to be stuck on decade-and-a-half-old tech if you don't at least try to keep up a little. Then the upgrade is exponentially more costly.
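
To the Redis/Memcached question above: one low-drama migration strategy is a thin dual-backend wrapper that writes to both stores and reads from the new one first, run for a while before the old cluster is retired. A minimal sketch, assuming the redis-py and python-memcached client libraries and simple string values (the class name and hosts are made up):

    import memcache  # python-memcached client (hypothetical choice)
    import redis     # redis-py client

    class MigratingCache:
        """Dual-backend cache used only during the migration window:
        writes go to both stores, reads prefer Redis and fall back to
        Memcached, backfilling Redis on a hit so the old cluster drains."""

        def __init__(self):
            self.new = redis.Redis(host="localhost", port=6379)
            self.old = memcache.Client(["127.0.0.1:11211"])

        def set(self, key, value, ttl=300):
            self.new.set(key, value, ex=ttl)
            self.old.set(key, value, time=ttl)

        def get(self, key):
            value = self.new.get(key)
            if value is None:
                value = self.old.get(key)
                if value is not None:
                    # Backfill the new store so hits migrate over time.
                    self.new.set(key, value, ex=300)
            return value

Once the Redis hit rate matches Memcached's, the old reads and writes (and the Memcached fleet) can be dropped. The point is that the migration itself can be boring too.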


I don't think the point is not to migrate. The point is to discuss and make the decisions globally, not at the whim of "this makes me happy" or "we have a new problem".


There's a big movement in the web community to move everything to JavaScript: JS on the server side with Node, HTML generated by JavaScript, and the latest trend, CSS-in-JS.

On one hand, this can be argued as centralizing on one technology that will reduce long term cost because the operational stack around it can be reused: tests, build tools, deployment. However it very much can be seen as adding another technology to manage since it's adding a shiny new library to maintain with all the bugs and kinks along the way.


This is one of my favourite tech talks ever. So very, very true. I agree with everything he says. On the other hand, writing software the same way without changing is hard to do and there is the temptation to try something new for the sake of trying something new. It's very strong with me and I've learned to indulge my urge to play with shiny new toys in a way that doesn't blow the main project to kingdom come. I need to write something that will go nowhere just to get it out of my system.


"The long and short of it is that you have to talk to each other.

Technology has global effects on your company, it isn’t something that should be left to individual engineers. You have to figure out a way to make adding technology a conversation.

This might be underwhelming if you were expecting a big insightful secret here, but trust me, this is beyond many technology organizations."

Well said. Also, clutch French Revolution references.


> The grim paradox of this law of software is that you should probably be using the tool that you hate the most. You hate it because you know the most about it.

The presentation talks about "innovation" tokens. You only get so many to spend.

There are also pain tokens. And you must spend a threshold of them to understand the quoted statement. It's not fun, but you will be enlightened.


Pretty much this, and if shiny technology is actually worthwhile it will stick around, and then it is time to bother learning it.


Probably too late to contribute to the discussion, but I chose Windows Batch for the Tron project (https://old.reddit.com/r/TronScript) because it's super reliable and works on every version of Windows.


During the 20th century, the MiG-21 was tried and true. It was rugged and dependable. One US test pilot described it as, "Fuel it up and go!" It would fit all the qualities of "boring" tech in this article. However, as a spectator, there's really nothing boring about it.


I wrote a MySQL/Python sandwich ordering app 20 years ago. It's still in use; when they moved to a new site, they had to do a data update. I offered to rewrite it using modern web tech, but they said they were happy with it as it was, because it was boring and simple and got the job done.


This is why I stick with Django, DRF, Postgres, React, Linux VMs and REST over HTTP. Once the business problems are solved we can try out new tech. With this stack I've built many web apps, and there are fewer surprises.
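
For anyone curious what "boring" looks like in that stack, a typical endpoint is just a few declarative classes over the ORM (a minimal sketch; the Order model and its fields are hypothetical):

    # Minimal Django REST Framework sketch: plain CRUD over Postgres via the ORM.
    from django.db import models
    from rest_framework import routers, serializers, viewsets

    class Order(models.Model):
        created = models.DateTimeField(auto_now_add=True)
        total = models.DecimalField(max_digits=10, decimal_places=2)

    class OrderSerializer(serializers.ModelSerializer):
        class Meta:
            model = Order
            fields = ["id", "created", "total"]

    class OrderViewSet(viewsets.ModelViewSet):
        # List/retrieve/create/update/delete, all from battle-tested framework code.
        queryset = Order.objects.all()
        serializer_class = OrderSerializer

    router = routers.DefaultRouter()
    router.register("orders", OrderViewSet)
    # urls.py: path("api/", include(router.urls))

Nothing clever, which is exactly why the surprises are few.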


I think it's much more important to choose a minimal set of technologies, than to choose ones that are boring. If one 'shiny new' tech can replace two boring ones, I am first aboard that hype train.


This is an excellent article, but it's almost advocating a doctrine, when perhaps the best approach is to have no doctrine at all.

I appreciate the idea of mastering a language or technology. It is often much easier than adopting something new.

But the main reason for adopting something new is to make doing something much easier. That 'something' is often a call to solve a new problem.

I feel you generally come to understand why you should adopt a technology, after mastering your current set of tools.

If we were to really take this idea of maximising the tools that are already available to its conclusion, then we would probably all still be deploying applications over FTP.

I think the two most valuable take-aways for me, are the importance of discussion, and the desire to have as small a number of dependencies as possible.


> when perhaps the best approach is to have no doctrine at all

A general rule of thumb in life is that you should follow rules until you understand when to break them, and despite the ageism in this industry I feel a decade of high-quality experience under your belt really helps.

KISS is a great general guideline.


A million times yes. We used to have a late adopter group at Amazon for people who do not get excited about the new and shiny and keep on rolling with stable and battle-tested tech.


McFunley is awesome! He's been an inspiration since "Choose Boring Technology" was released! It's a shame not many people follow his advice.


Yeah, but it's sometimes hard to figure out what those boring technologies are. What is a boring WebRTC or WebSocket server, for example?


One step ahead of you! I'm a C# developer.


Solutions are like referees: good ones go unnoticed, bad ones inspire hate.


... wait, is MongoDB considered a boring technology?

... but Cassandra is "shiny"?


This will be largely lost on most practitioners of software delivery...

Unfortunately.


I was hoping for an article about choosing drilling equipment


Nice, but too long. I'd hate to sit through all of it.


This so depends on the talent of your team. If you are normal, then play it safe, as most of us are. But if you want to make something great, then you must resort to exploration, discovery and invention.


Part of the point of the article is that you probably don't need new technology to create a new _product_.


Right, for an incremental product, for sure. Slack is an incremental product but is now worth billions. Point taken, but Slack is not SpaceX.


Your example of a sector where we can’t use conservative technology to make progress is space travel? ;-)

But look, I see your point, but the “counterpoint” doesn’t actually disagree: the point is that a lot of incremental improvements in infrastructure feel transformative, but you get to pay the full cost regardless.


Wow, so serious, the comments in this thread. I immediately thought of the Boring Company and how they want to un-fuck traffic in ways that governments can't. Then I got let down a lot. Boring.


Is there a video of this being presented somewhere?



One of the best articles I have seen recently!


What if the shiny new technology is optimized for lower maintenance cost relative to older technology?


When reading the headline, I felt as if this was a slogan/advertisement for Elon Musk's company.


Same. I blame it on the capitalization. "Choose boring technology" reads like a statement. "Choose Boring Technology" does not.


So many words. So few ideas.


I agree with this, but from a different angle, and possibly different reasons.

I don't think we should use boring tech for sake of it. I think we should use boring ideas, and sometimes boring ideas come in new packaging that brings better tooling and community support to the table.

Single-server architecture and relational databases are ideas. Postgres is an implementation of such an idea. What we understand about the failure modes of something like Postgres is not actually specific to Postgres, but to the ideas behind it, dating back to Codd himself, and to most other databases that came along the way.

The same can be said of the DBs that derived from the BigTable paper (Cassandra, Scylla, etc.). Of course they have specific operational differences, but they have at least twice as much common ground. The way we understand the failure modes of these projects is by looking at the ideas behind them (Gossip, Paxos, AP, SSTables, LSM-trees). We understand things by stripping away their specifics and seeing how their core ideas interact with each other. The specifics of each solution are merely tie-breakers.
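
To make one of those shared ideas concrete: the LSM-tree write path that Cassandra, Scylla, LevelDB and friends all share can be caricatured in a few lines of Python (a toy sketch, not any particular engine):

    import bisect

    class ToyLSM:
        """Toy caricature of an LSM-tree: writes land in an in-memory memtable,
        which is periodically flushed to immutable, sorted "SSTables"."""

        def __init__(self, memtable_limit=4):
            self.memtable = {}
            self.sstables = []  # newest last; each is a sorted list of (key, value)
            self.memtable_limit = memtable_limit

        def put(self, key, value):
            self.memtable[key] = value
            if len(self.memtable) >= self.memtable_limit:
                self._flush()

        def _flush(self):
            # Sequential, append-only writes: the reason LSM engines ingest fast.
            self.sstables.append(sorted(self.memtable.items()))
            self.memtable = {}

        def get(self, key):
            if key in self.memtable:
                return self.memtable[key]
            # Reads may have to check several SSTables, newest first.
            for table in reversed(self.sstables):
                i = bisect.bisect_left(table, (key,))
                if i < len(table) and table[i][0] == key:
                    return table[i][1]
            return None

The sketch also shows where the shared failure modes come from: an ever-growing pile of SSTables means read amplification, which is why every LSM-based store ends up needing compaction.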

This is why I would never call something like Clojure or Elixir "shiny new hype technology".

Clojure is nothing but a modern implementation, backed by an excellent community, of mature ideas that have stood the test of time. Lisp itself is as old as the dinosaurs, and functional programming is not as new as people think. It has just been "rediscovered" recently, once the hype/aversion passed and we were able to look at it with sober eyes.

Elixir is also a modern implementation, with better tooling/community, of things that have been in BEAM for 20+ years. Apache's Samza/Storm for stateful stream processing are reimplementations of things the Erlang community has been able to do with Mnesia since at least 1999. It's also something we sometimes circumvent by using things like Redis to deal with stateful computations, and Google seems to have realized this recently, publishing a paper that tries to go back to that old idea[1].

In short, I think it's a really good thing that we have new tech that tries to make the old ideas better, and I generally feel pretty safe experimenting with those. You just have to learn to recognize these values, and adequately investigate solution candidates before jumping into what could very well be just hyped up crap, because sometimes, it isn't.

[1] https://ai.google/research/pubs/pub48030 discussion https://news.ycombinator.com/item?id=19823022


Not having the site https-enabled is also a tenet of boring technology? /s


Thanks for the advice dad! It all makes sense now!


> The failure modes of boring technology are well-understood

Well-understood failure modes can't get you to resilience, and they are not all that well-understood to begin with. I guess it's just a false justification people use to stick to familiar broken old technology and broken ways, or maybe to avoid relying on people with rare expertise, I don't know. If things were as simple as that, we would just handle all the failure modes of "boring technology" in software and get perfectly working systems. But we can't: all the unknown unknowns and not-understood failure modes are still there in boring technology, and reliability is a "do backups" afterthought. Systems really have to be designed for reliability if reliability is important, and currently software reliability is too cutting-edge to be boring.


> I guess it's just a false justification people use to stick to familiar broken old technology and broken ways or maybe to avoid relying on people with rare expertise, I don't know.

This sounds a bit nose in the air to me.

> Systems really have to be designed for reliability if reliability is important and currently software reliability is too cutting edge to be boring.

And yet, people have posted examples of "boring" software running for over a decade.


Depends what you mean... it is often easy to wrap fallible dependency calls with exception logic, and even workarounds sometimes, as long as the logic and failure are clear.

It's mostly a matter of choosing boring software, because it's well documented, both in terms of (semi)-official docs, and things like SO answers, vs. newer software that might have modern advantages and also more undocumented or untested edge cases to be fixed.
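
Concretely, "wrapping a fallible dependency call" often amounts to something this small (a sketch; flaky_client and the exception types stand in for whatever the dependency actually raises):

    import logging

    logger = logging.getLogger(__name__)

    def fetch_profile(flaky_client, user_id, fallback=None):
        """Wrap a fallible dependency call: log the known failure mode and
        degrade to a safe default instead of taking the whole request down."""
        try:
            return flaky_client.get_profile(user_id)
        except (TimeoutError, ConnectionError) as exc:
            logger.warning("profile lookup failed for %s: %s", user_id, exc)
            return fallback

With boring software, the documented failure modes tell you exactly which exceptions to expect and how safe the fallback really is.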


If you want your business to succeed, you need to have multiple edges against the competition. Simply using what other people are using isn't going to give you a technological edge.

Look at Paul Graham with Viaweb. He used lisp because it gave him a significant advantage against other players stuck with C++. Or Jane Street, started by three ex-Susquehanna guys using OCaml.

There's a lot of technology that isn't boring but also isn't unstable. These are technologies that probably should be more boring at this point (Common Lisp, Haskell, OCaml, Scala) but are rock-solid and stable. These are the kinds of technologies you should be using.

On the other hand, if all your "tech" startup is doing is running a CRUD website, it probably doesn't matter what you're using.


Everything you think is boring was new at some point.

Ultimately, shiny is what other people like, boring is what you like. What other people like is complicated, what you like is simple. Et cetera


(2018)


Is this about BoringSSL or The Boring Company? There's so many new and exciting boring technologies to follow that it's hard to keep track.



