The Unreasonable Effectiveness of IT (highscalability.com)
76 points by brown-dragon on July 6, 2015 | 42 comments



I attributed a lot of this 'effectiveness' to developers who are lazy.

There have been times when I automated things because I wanted to avoid repetitive work. I have never been able to automate something I had not done manually 2-3+ times, but once I had, I knew what I needed to do to avoid the manual work and make the process error-free the next time the same operation was needed. If someone else had been dictating what they needed, I would not have been able to build my tools as effectively.

To me, a good developer is a "maker", who builds tools. For us the raw material is a computer and some free time, which makes experimentation and evolution of our tools easier compared to other professions.


That's quite a good point. If I have to write code for someone else that I don't really understand I first do the steps manually to understand what is happening. Then it's much easier to write code for it.

It's hard to apply often, though, because there are scenarios where you can do almost nothing manually, like optimizing a logfile parser that handles logs from hundreds of servers every day.


He neatly explains why traditional, say, "reasonable" management does not work in IT:

- Objectives are a way of saying: prove to me first that you know where you are going before you go.

- We’ve come to believe too much in metrics and accountability.

- There’s no systematic way known to solve ambitious problems. (Otherwise they would have been solved already.)

For example, the original Google Search project succeeded, exactly because none of this was around. Traditional management only works for repetitive processes. Organizations that revolve around traditional management are incapable of innovating.


> There’s no systematic way known to solve ambitious problems. (Otherwise they would have been solved already.)

Such as social networks for cat videos and the like?

Come on, let's stop embellishing IT. Most of the time, we choose not to be systematic or thorough because we are divas. We like to think we're more special than we actually are, that we're too "conceptual" for metrics and that our horizons aren't clearly defined. This is - most of the time - BS. We just happen to have been dealt a good hand (working in IT, nowadays), which lets the huge demand for our work feed our egos. IT professionals aren't artists or the like; they should be as systematic as other engineering fields.


The physical world is much less flexible and forgiving than the world of software. A civil engineer can't say "this beam will break every week, so I'm just going to put a robot next to it that will attach a new beam when it does". But you can do the equivalent with a process that crashes.

The greater flexibility of software implies a different set of management practices.
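The "robot that attaches a new beam" is essentially a supervisor loop: instead of making the component unbreakable, you cheaply replace it when it fails. A minimal sketch in Python (the names `run_supervised` and `flaky` are made up for illustration):

```python
import time

def run_supervised(task, max_restarts=5, backoff=0.01):
    """Re-run `task` whenever it raises, up to max_restarts times.

    This is the software analogue of the beam-replacing robot:
    failure is cheap, so we recover instead of preventing it.
    """
    restarts = 0
    while True:
        try:
            return task()
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise  # give up after too many "broken beams"
            time.sleep(backoff * restarts)  # simple linear backoff

# A hypothetical task that crashes twice before succeeding.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("process crashed")
    return "done"

print(run_supervised(flaky))  # restarts twice, then prints "done"
```

Real-world process supervisors (systemd, Erlang/OTP supervisors, Kubernetes restart policies) follow the same basic shape.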


> There’s no systematic way known to solve ambitious problems.

I think engineers and scientists who worked on the Apollo program or the Manhattan project might disagree.


I think that when the Manhattan Project was started, physicists knew that it was possible, in theory, to build a nuclear bomb. It's just that no one knew how to manufacture it. Going from "we know it's possible" to "built" is more amenable to a systematic process than going from "is this even possible?" to "built". I think the article is talking about the latter. iPhones were firmly in the realm of science fiction fifteen years ago. It would have been very difficult to systematically try to create one. Whereas in 2007, it was apparently possible, just no one knew it yet.


It's very rare that in our industry someone goes in without even knowing if a thing is theoretically possible.

Uber knew they could do GPS tracking and send messages to phones.

Apple knew they could integrate a mobile phone chipset with a PDA one, and put some software on top.

Google knew you could index web pages.

None of these things were fundamentally hard problems on the scale of nuclear fission or landing men on the moon. There was little risk beyond some money, and it was largely a case of taking existing technology and putting it together in new ways. I'm not trying to claim these things didn't take some imagination and skill to develop, but to claim they were potentially impossible is overstretching. Sometimes, as an industry, I think we just need to get over ourselves a bit!


Are you arguing that the iPhone required a larger leap in science and engineering than the first atomic bombs?

I can't say that I agree.


No, I'm not arguing that. I'm arguing that they knew an atomic bomb was possible before they started. From what I've read, no one in the phone industry thought something like an iPhone was even possible. I read a story that said that when Microsoft's phone engineers heard the iPhone announcement, they said, "that's impossible! It would have to be practically all battery." They disassembled one when it came out, and sure enough...


I think the argument is that the iPhone (and other innovations) required a leap in creativity, not science or engineering; it's those leaps that power many of today's technology companies.


Did they actually have a system, or were they throwing large numbers of incredibly bright people at it and giving them pots of money to play around with?

Management Accounting techniques are generally based on knowing what the process is, or what the output looks like, ideally both. I would suggest that the atomic and space programs involved many problems, at various recursive levels, where the process to follow, and what success looked like, were unknown going in. Research & Development is one of the classic cases where Management Accounting techniques are considered useless or counter-productive.


Please don't use projects such as the Apollo or Manhattan program as examples. These projects are huge and have considerable amounts of money and man-power behind them, which most IT projects nowadays don't have.

I did a quick check on Wikipedia, and the Apollo program cost about $170 billion in 2005 dollars.

I believe the problem with IT is that vendors have sold management on the idea that their solutions will solve all their problems with little effort or man-power. The "do more with less" mantra has been oversold.


The Manhattan project also came to my mind as a counter example and I was disappointed it wasn't addressed in the book.

It could be the exception that proves the rule. Internally you might be able to say it was heavily research driven. And the global resources and brain power attached to the project were unprecedented (https://en.wikipedia.org/wiki/Manhattan_Project). Plus there was the motivation of a truly existential level threat. Even given all that it still could have failed.

For some reason I didn't think of Apollo in the same way. Maybe I'm missing the boat on that one.


"As Schopenhauer says, when you look back on your life, it looks as though it were a plot, but when you are into it, it’s a mess: just one surprise after another. Then, later, you see it was perfect."

Weird. When I look back on my life, I see blunders, randomness and missed opportunities. There's a great deal that I would change if I could do it over. I'd have thought that was the same for most people.


No action is truly perfect, or without blunders. Every choice one makes is a unique expression of one's being in this universe, and all actions have consequences. You alone are the actor of that fantastically unique path. So the "perfection" is to be seen, later, in retrospect, how no one else but you traveled that path that led you to this moment.

I hope I did justice to it. I myself am retraining my thinking to be more aligned with Buddhist and absurdist philosophy. 30 years of an Abrahamic religion can have some serious and long-lasting hangovers.


So a long series of blunders and mistakes results in perfection, because that was the only way that the final state could have been achieved? Regardless of how imperfect and undesirable the final state may be? I think this is a use of the word "perfect" that I'm not familiar with.


I certainly disagree with this bit: "There’s no systematic way known to solve ambitious problems. We just don’t know what will lead to what. The future is a fog. An ambitious problem is not deciding what to have for lunch, it’s more like curing cancer or creating an AI."

I have always started on the solution to an ambitious problem by asking "what would have to be true for this to be solvable?" and from each of those things working backward until you get to things that are all true now. Then you can start solving the "almost" true things and work forward to the ambitious solution at the end.


> I have always started on the solution to an ambitious problem by asking "what would have to be true for this to be solvable?" and from each of those things working backward until you get to things

Can you give an example of an ambitious problem you solved backward this way.


Here's a (perhaps incorrect or ultimately misleading) example of it:

1. What would have to be true for us to have reliable transmission of messages over an inherently lossy network?

1a. Some kind of method to ensure that a message got to its intended destination.

2. What would have to be true to ensure that a message got to its intended destination?

2a. Some kind of method of ensuring that failures don't occur, or, if that is impossible, some kind of method of making intermittent failures not prevent the system from eventually functioning.

3. What would need to be true to make intermittent failures not prevent the system from functioning?

... etc...

Slightly misleading because I'm obviously working back from a familiar example, but I think the technique is applicable in some cases. At the very least your solution is born out of trying to make the impossible just barely possible rather than seeking to build layer by layer to achieve something possible.
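The endpoint of that chain (step 2a: make intermittent failures not prevent eventual delivery) can be sketched in a few lines of Python. The simulated `lossy_send` channel and the loss rate are invented for illustration; this is the retransmit-until-acknowledged idea in miniature, not how TCP actually works:

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

def lossy_send(msg, loss_rate=0.7):
    """Simulated unreliable channel: drops msg with probability loss_rate."""
    return None if rng.random() < loss_rate else ("ack", msg)

def reliable_send(msg, max_tries=100):
    """Keep retransmitting until acknowledged, so intermittent
    failures don't prevent the system from eventually functioning."""
    for attempt in range(1, max_tries + 1):
        if lossy_send(msg) is not None:
            return attempt  # how many tries delivery took
    raise TimeoutError("channel never delivered the message")

print("delivered after", reliable_send("hello"), "attempt(s)")
```

Even with a 70% per-attempt loss rate, repeated retransmission makes eventual delivery all but certain, which is the point of working backward from "what would make the impossible barely possible."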

However, with TCP/IP you can also see that traditional methods work as well.


Perhaps one of the more practical uses was figuring out how to scale storage performance when I was at NetApp. Since processors had stopped scaling the way they had been, in 2001 their CTO challenged me to scale storage performance, in terms of IOs per second and total available capacity, without improving the speed of the processors.

The challenge there ended up localized around a couple of really hard sub-problems. One was that NetApp's storage model was file based (rather than block based), so part of the problem was scaling the operation of a file state machine; the other serious sub-problem was creating a reliable way to keep protocol/process state that was in "flight" in order to recover from failures. Working backwards led to what I ended up calling the "three layer cake" model, which split the storage stack into parts that were loosely coherent and tightly coherent.


I'm currently listening to an account of the decentralized command structure Gen. Stanley McChrystal put in place to combat Al Qaeda in Iraq and the Taliban in Afghanistan. One thing that is clear is that in a fast-paced environment (be it the theater of war or a startup), top-down rigid command doesn't work. All of this really says that the organization that gets through a feedback loop as fast as possible wins. You can call that the OODA loop, lean, holacracy, or whatever you want.

I'm strongly in favor of giving up centralized control in the interests of allowing emergent behavior to tackle opportunities as fast as possible. Leadership to me should be:

* Set the direction

* Gather smart, motivated people

* Create an environment that gets everyone on the same page

* Get the fuck out of their way.


The Germans figured this out a long time ago with their "Auftragstaktik". In a scenario of asymmetrical warfare, you don't want to be rigid and stringent with command and control; you have to allow capable, independent teams to carry out specific missions and then review their performance afterwards. I don't see any problem extending or porting this to the business domain, and to startups in particular.


I guess I'm more amazed at how much energy goes into relatively minor advances or even in going backwards. Like the proliferation of frameworks, the difficulty in cross-platform development, how hard scaling remains, etc.


I can see progress.

Better languages: {C#, C++14, Go, Rust} > Java, {ES6, TypeScript} > JavaScript, and Python keeps getting better thanks to Python 3 and better libraries.

Scaling: OpenCL and CUDA have both taken off, delivering everything they promised. Hadoop matured and spawned variants, and now Spark is starting a new cycle. Thanks to Amazon and Google we now have cheap computing cycles on tap with APIs usable by ordinary developers.

It seems to me that only Web Development and Mobile Apps are a morass. But web development was bound to be a step backwards given its strange history, while mobile app development can only be as good as the underlying platforms, which started from really rudimentary to get where they are now.


Cross-platform development is not like the other problems (too many poor quality frameworks, unsolved problems like scaling, etc.) The main problem with cross-platform development is that the creator of the platform usually has no incentive to work with other platforms; quite the opposite. That road leads to commoditization. Users and developers want commoditization, but sellers do not.

Another problem is that often if you create a platform, it is because you want to do something different than established platforms. This inevitably makes cross-platform difficult.


I'm not sure you picked the best examples to validate your opinion. In fact, I'm not sure how your examples relate. Care to expand?


Proliferation of frameworks is a great example.

Think of what EJB, EJB2 and Spring did to Java. Think of Struts. Instead of fixing the real problems of the language, we went in a lot of side paths that barely got us ahead of where we started and, if nothing else, slowed progress down. Java in 2002 vs Java in 2012: Not really that much better. Working in Java before was like working with a broken bone, and instead of fixing it, the frameworks were painkillers. So we still had a broken bone, and then we were addicted to Vicodin.

We have been in a similar boat for a while in JavaScript land. In a few specific ways, web development is a worse experience than using Delphi to build native apps back in 1998 was. This problem of repetitive code + CSS + a DOM has led to lots and lots of libraries and frameworks, moving constantly. But it feels like they are not really moving, as far as productivity is concerned. FP-inspired tools, like Elm or React, might finally get us out of this mess, but the jury is still out.

And let's not get started on the state of build systems in JavaScript land. Lots of things are being written, but it's really hard to figure out why. The rate of actual progress is pretty slow.

We are also in a similar boat in cloud and virtualization. Lots of little tools that are supposed to make our life easier, but most are extremely fragile, because we build the infrastructure and sell it to the world before we understand the problem. The entire ecosystem around docker and coreos is the wild west. And then there's the distributed databases. Dozens of options, less than a handful that handle a network split in a sensible way. Just read some of the posts in the Jepsen series.

Sometimes we solve a problem well, quickly, and it's solved for good. Other times it feels that all this effort from the community amounts to being stuck in the mud. In those cases, the tools don't feel like stepping stones, or foundations to build upon. It's a bunch of different people building infrastructure who fail to learn any lessons from their competitors.

If anything, what is amazing is that we manage to get systems that work in some fashion, given that we are using so many broken tools.


Well, I don't really agree with any of this, frankly. Every single thing you cite as slowing progress WAS progress, just not the progress you feel "should" have been. I think you hold your opinions based on a fallacy that with your hindsight now, you could solve the problems of 20 years ago.

And I certainly don't agree with your assessment that any problem has been solved "for good."

What I see happening is a natural evolutionary process informed by the independent actions of millions of people, and some hindsight-fueled speculation of what the value of central planning would be if it could be done with perfect knowledge. Basically, one works, has worked, is working, and will continue to work (and not just in software), and the other is a fantasy brought about by the human brain's biases, which has led to things like the USSR and China's current gov't, amongst other travesties.


I'll take a stab at it, at least my perspective on it from the "enterprise application" trenches.

It's amazing how complicated many multi-project, multi-levels-of-indirection, Rube-Goldberg-machine-like solutions are for accomplishing the most trivial of tasks. The ratio of code that actually does the required functional work to the various "architectural best practices" and other plumbing is, I'd speculate, often in the neighborhood of 1:20.


Yeah. The main reason that there has been any progress at all is that countless thousands of people have been working at it, and occasionally somebody will come up with an idea that will stick. Countless other (possibly brilliant) ideas will be ignored and die.


Isn't a big part of information technology's rapid pace the fact that much of it happens in a virtual space instead of the real world? I wonder if civil engineering would move as fast if buildings could be built and torn down as fast (and as cheaply) as computer programs, and if building materials' efficiency could be improved at the same pace transistors can be miniaturized?


"Information technology" is over a century old. IBM has been in business for more than a century. FORTRAN is over 60 years old. C and Unix are over 40. So is the Intel 8080 microprocessor. It's not like this happened overnight.

Both electric power and railroads were deployed faster.


I think of anyone identifying with "IT" and particularly the "IT Industry" as a snake oil salesman.

Technology stocks are staying hot, but nobody thinks of IT as effective. In anything that isn't a technology company, the IT department has rapidly become the slowest moving part of the company. Why hold on to the moniker like it means something?


Nice essay. Love the premise of how working around a problem can lead to a direct solution instead of working directly on a solution. It's a very important point.

But there is an exception to his argument that he should have acknowledged for the essay to really take hold: there is one area where we can work towards clear objectives that actually help solve problems we don't understand. That area is making existing solutions more efficient.

Moore's Law has carried a huge load in the advancement of IT. Ideas that were simply speculations become easy-to-accomplish when technology gets commoditized. Things that used to take months can happen in seconds. That opens up all kinds of new possibilities.
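A back-of-the-envelope sketch of that compounding, assuming a doubling of transistor density roughly every two years (the period is a rough convention, not a law of nature):

```python
# Back-of-the-envelope Moore's Law arithmetic.
def speedup(years, doubling_period=2.0):
    """Relative capability after `years` of exponential doubling."""
    return 2 ** (years / doubling_period)

# Over two decades of doublings, a fixed workload becomes
# roughly a thousand times cheaper to run: 2**10 = 1024.
print(f"{speedup(20):.0f}x")
```

That roughly-thousandfold shift per two decades is what turns a speculative idea into something commodity hardware handles casually.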

IT is full of folks making stuff more efficient. That stuff can be hardware, software, or just "things people do", like hail a taxi. The more efficient we make everything, the easier it is to create and combine stepping stones. That takes a good idea and makes it even better.


Great, now I also have a justification for not living an objective-driven life


Title naming on Hacker News is so copy-cat. This is the 3rd time I've seen a title "The Unreasonable Effectiveness of X", after the first very successful post.


It goes beyond posts. In case you are not aware: https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...

It happens a lot. "X for fun and profit" and "Y, and so should you" are other examples.



It has a name! Neat. Thank you


"X considered harmful" is my personal pet peeve


I like all of them. This way I don't just know the topic (X) but also what kind of discussion is to be expected. Makes parsing the mainpage much faster for my brain.



