I can't take this Agile crap any longer. It's lunacy. It has all the hallmarks of a religion: a lot of literature, a lot of disciples, hordes of money-grabbing, snake-oil-selling evangelists, and no evidence at all that it works. In fact, as far as I can see, there's more evidence that it doesn't work.
And the next person who says to me "derp derp derp, well it's better than waterfall herp derp" can go fly a kite. If you think we live in a world where there are only two ways to manage and build software, then you're a fool and have no business being in this business. Either start applying critical thinking or go sell crazy someplace else.
Maybe there is a perfect project out there, somewhere, and they're doing Agile right, and it makes the product better, and everyone on the team is happier and more productive. But I've never seen a project like that. In the five Agile projects I've had the misfortune of working on or alongside, all I've seen is stakeholders making implementation decisions in ivory towers, developers being told not to implement that "delete" function because it's out of scope (it's out of scope because the designer and UX people forgot to put it in, but fuck it, we'll pretend you're right and it's meant to be like that), and teams pretending stuff works and everything is OK when in actual fact it doesn't and it isn't, and if people could just stop looking at Jira and start looking at the code, then maybe we could start to fix this hulking pile of crap and actually start being proud of our work.
And before anyone weighs in with "Well, those projects clearly weren't doing Agile right", please, save your breath. Every project has been infested with people who have stacks of books on the subject. They love it, they suck it all up, they obsess over the minutiae of how to implement Scrum, retrospectives, and sprints. They can talk of nothing else at work, at home, or in the pub. If they can't do it right, who can?
And herein lies the point: there is no way to do Agile right. What there is, is good project managers and bad project managers. There are good devs and bad devs. Good designers, bad designers. The good people will make good things in spite of having Agile forced upon them. Then the Agile non-thinkers will jump on the bandwagon and lap up the good people's success as if it were their own. Then someone will write a blog post about how Agile made it all possible, and it will all be lies piled on top of lies.
Do I sound angry? Good, because I am. Agile is a con. People like Agile because it absolves them of the responsibility to think and do their job. It makes my job a misery, it sucks money out of clients' pockets, and to my mind it's no better than theft.
If you think I'm wrong, prove it. And I mean PROVE AGILE WORKS, WITH EVIDENCE or shut up. Extraordinary claims require extraordinary evidence. I didn't come into your life and tell you how to suck eggs, but if you're going to come into mine and do that, you'd better have data to back you up.
Lots of people make the mistake of thinking there are only two directions you can go to improve performance: high or wide.
High - throw hardware at the problem, on a single machine
Wide - add more machines
There's a third direction you can go, I call it "going deep". Today's programs run on software stacks so high and so abstract that we're just now getting around to redeveloping (again for like the 3rd or 4th time) software that performs about as well as software we had around in the 1990s and early 2000s.
Going deep means stripping away this nonsense and getting down closer to the metal: using smart algorithms, planning and working through a problem, and seeing if you can size the solution to run on one machine as-is. Modern CPUs, memory, and disks (especially SSDs) are unbelievably fast compared to what we had at the turn of the millennium, yet we treat them like spare capacity to soak up even lazier abstractions. We keep thinking that completing the task means successfully scaling out a complex network of compute nodes, but completing the task actually means processing the data and getting meaningful results in a reasonable amount of time.
This isn't really hard to do (but it can be tedious), and it doesn't mean writing system-level C or ASM code. Just see what you can do on a single medium-specced consumer machine first, then scale up or out if you really need to. It turns out a great many problems really don't need scalable compute clusters. And in fact, for the time you'd spend setting that up and building the coordinating code (which introduces yet more layers that soak up performance), you'd probably be better off just doing the work on a single machine.
Bonus: if your problem does get too big for a single machine (it happens), there might be trivial parallelism in the problem you can exploit, and now going wide means you'll probably outperform your original design anyway, with coordination code that's much simpler and less performance-degrading. Or you can go high, throw a bigger machine at it, and get more gains with zero planning or effort beyond copying your code and data to the new machine and plugging it in.
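For what it's worth, here's a minimal sketch of what exploiting that trivial parallelism can look like on one box (Perl, since that's my usual tool; Parallel::ForkManager is a real CPAN module, but the slice file names and process_slice() are hypothetical placeholders for whatever your task actually is):

    # Sketch: run the same single-machine worker over independent data slices.
    # process_slice() and the slice file layout are made up for illustration.
    use strict;
    use warnings;
    use Parallel::ForkManager;

    my @slices = glob("data/slice_*.tsv");      # pre-split input files
    my $pm     = Parallel::ForkManager->new(8); # roughly one worker per core

    for my $slice (@slices) {
        $pm->start and next;    # parent: move on to the next slice
        process_slice($slice);  # child: the code you already wrote, unchanged
        $pm->finish;            # child exits
    }
    $pm->wait_all_children;

    sub process_slice {
        my ($file) = @_;
        # ... your existing single-machine logic for one slice ...
    }

The same pattern scales wide with almost no ceremony: copy the script and a different set of slices to each machine and run it there.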
Oh yeah, many of us, especially experienced people or those with lots of school time, are taught to overgeneralize our approaches. It turns out many big compute problems are just big one-off problems and don't need a generalized approach. Survey your data, plan around it, and then write your solution as a specialized approach just for the problem you have. It'll likely run much faster this way.
Some anecdotes:
- I wrote an NLP tool that, on a single spare desktop with no exotic hardware, was 30x faster than a distributed compute cluster of six high-end systems doing a comparable task. That group eventually took my solution and went high with it: they run it on a big multi-core system with the fastest memory and SSDs they could procure, and it's about 5 times faster than my original code. My code was in Perl; the distributed system it competed against was C++. The difference was the algorithm I was using, and not overgeneralizing the problem. Because my code could complete their task in 12 hours instead of 2 weeks, they could iterate every day. A 14:1 iteration opportunity made a huge difference in their workflow, and within weeks they were further ahead than they had been after 2 years of sustained work. Later they ported my code to C++ and realized even further gains. They've never had to even think about distributed systems. As hardware gets faster, they simply copy the code and data over and realize the gains, and it runs faster than they can analyze the results.
Every vendor that's come in since has been forced to demonstrate that their distributed solution is faster than the one already running in house. Nobody's been able to demonstrate a faster system to date. It has saved them literally tens of millions of dollars in hardware, facility, and staffing costs over the last half-decade.
- Another group had a large graph they needed to conduct a specific kind of analysis on. They had a massive distributed system handling the graph, which was about 4 petabytes in size. The analysis they wanted to do was O(N^2): each node potentially needed to be compared against every other node. So they naively set up some code to do the task, backed by all kinds of exotic data stores and specialized indexes. Huge amounts of data were flying around their network trying to run this task, and it was slower than expected.
An analysis of the problem showed that if you segmented the data in some fairly simple ways, you could skip all the drama and do each slice of the task without much fuss on a single desktop. O(n^2) isn't terrible if your data is small. O(k+n^2) isn't much worse if you can find parallelism in your task and spread it out easily.
I had a 4-year-old Dell consumer-level desktop to use, so I wrote the code and ran the task. Using not much more than Perl and SQLite, I was able to compute a largish slice of a few GB in a couple of hours. Some analysis of my code showed I could actually perform the analysis on insert into the DB, and that the data was small enough to fit into memory, so I set SQLite to :memory: and finished in 30 minutes or so. That problem solved, the rest was pretty embarrassingly parallel, and in short order we had a dozen of these spare desktops running the same code on different data slices, finishing the task two orders of magnitude faster than their previous approach. Some more coordinating code and the system was fully automated. A single budget machine was now theoretically capable of doing the entire task in 2 months of sustained compute time. A dozen budget machines finished it all in a week and a half. Their original estimate for the old distributed approach was 6-8 months with a warehouse full of machines, most of which would have been computing things that resulted in a whole lot of nothing.
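The shape of that code is roughly the following (a hedged sketch, not the original: the schema, the tab-separated input, and compare() are invented stand-ins for the real analysis). One slice goes into an in-memory SQLite database via DBI, and the O(n^2) comparison happens within the slice; the slicing itself is what carries the parallelism:

    # Sketch only: process one pre-segmented slice against an in-memory DB.
    use strict;
    use warnings;
    use DBI;

    my $slice_file = shift @ARGV or die "usage: $0 slice.tsv\n";

    my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                           { RaiseError => 1, AutoCommit => 0 });
    $dbh->do("CREATE TABLE node (id INTEGER PRIMARY KEY, sig TEXT)");

    my $ins = $dbh->prepare("INSERT INTO node (id, sig) VALUES (?, ?)");
    open my $fh, '<', $slice_file or die "can't open $slice_file: $!";
    while (<$fh>) {
        chomp;
        my ($id, $sig) = split /\t/;
        $ins->execute($id, $sig);
    }
    $dbh->commit;

    # O(n^2) within the slice is fine when the slice is sized to fit one box.
    my $rows = $dbh->selectall_arrayref("SELECT id, sig FROM node");
    for my $i (0 .. $#$rows) {
        for my $j ($i + 1 .. $#$rows) {
            my $score = compare($rows->[$i][1], $rows->[$j][1]);
            print "$rows->[$i][0]\t$rows->[$j][0]\t$score\n" if $score;
        }
    }

    sub compare { my ($x, $y) = @_; return $x eq $y ? 1 : 0 }  # stand-in metric

In the real thing the comparison moved into the insert path, but the point is the same: once a slice fits in memory on one machine, the "hard" part is just a couple of nested loops.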
To my knowledge they still use a version of the original Perl code, with SQLite running in memory, without complaint. They could speed things up more with a better in-memory system and a quick code port, but why bother? It's completing the task faster than they can feed it data, as the data set is only growing a few GB a day. Easily within what a single machine can handle.
- Another group was struggling with handling a large semantic graph and performing a specific kind of query on the graph while walking it. It was ~100 million entities, but they needed interactive-speed query returns. They had built some kind of distributed Titan cluster (obviously a premature optimization).
Solution: convert the graph to an adjacency matrix, stuff it in a PostgreSQL table, build some indexes, and rework the problem as a clever dynamically generated SQL query (again, in Perl). Now they were seeing 0.01-second returns, fast enough for interactivity. Bonus: at 100 million rows the dataset was tiny, only about 5GB; with a maximum table size of 32TB and disk space cheap, they were set for the conceivable future. Administration was now easy, performance could be trivially improved with an SSD and some RAM, and they could scale to a point where dealing with Titan was far into their future.
Plus, there's a chance PostgreSQL will start supporting proper scalability soon, putting that day even further off.
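The rough idea, as I'd sketch it (placeholder schema and query; the real one was cleverer), is to store the edges in a plain indexed table and have Perl generate the multi-hop self-join instead of walking the graph in application code:

    # Sketch, not the production schema: an edge table plus a generated
    # self-join per hop. Table and column names are hypothetical.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=graph", "", "", { RaiseError => 1 });

    # One-time setup, for reference:
    #   CREATE TABLE edge (src BIGINT NOT NULL, dst BIGINT NOT NULL);
    #   CREATE INDEX edge_src_idx ON edge (src);
    #   CREATE INDEX edge_dst_idx ON edge (dst);

    # Dynamically build "what's reachable from ? in exactly N hops" as one query.
    sub hop_query {
        my ($hops) = @_;
        my $sql = "SELECT e1.src AS start, e$hops.dst AS found FROM edge e1";
        for my $i (2 .. $hops) {
            my $prev = $i - 1;
            $sql .= " JOIN edge e$i ON e$i.src = e$prev.dst";
        }
        return $sql . " WHERE e1.src = ?";
    }

    my ($start_node, $hops) = (42, 3);  # example inputs
    my $rows = $dbh->selectall_arrayref(hop_query($hops), undef, $start_node);
    printf "%d -> %d\n", @$_ for @$rows;

With both ends of the edge table indexed, the planner does the graph-walking for you, and a few GB of table plus indexes sits comfortably in RAM on any modern box.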
- Finally, an e-commerce company I worked with was building a dashboard reporting system that ran every night, took all of their sales data, and generated various kinds of reports: by SKU, by a certain number of days in the past, and so on. It was taking 10 hours to run on a 4-machine cluster.
A dive into the code showed that they were storing the data in a deeply nested data structure for computation, and building and destroying that structure as the computation progressed was taking all the time. Furthermore, some metrics on the reports showed that the most expensive-to-compute reports were simply not being used, or were being viewed only once a quarter or once a year around the fiscal year. And of the cheap-to-compute reports, millions of which were being pre-computed, only a small percentage were actually being viewed.
The data structure was built on dictionaries pointing to other dictionaries and so on. A quick swap to arrays pointing to arrays (and some dictionary<->index conversion functions so we didn't blow up the internal logic) transformed the entire thing. Instead of 10 hours, it ran in about 30 minutes, on a single machine. Where memory had been running out and crashing the system, utilization now never went above 20%. It turns out allocating and deallocating RAM actually takes time, and switching to a smaller, simpler data structure makes things faster.
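In Perl terms the swap looks something like this (illustrative only: the SKU/day keying and the numbers are invented, not their report logic). The key<->index maps are the small shim that lets the surrounding code keep thinking in terms of the original keys:

    # Before (conceptually): a hash of hashes, e.g. $sales{$sku}{$day} += $amount;
    # After: a flat array-of-arrays indexed by [sku_index][day], plus lookup tables.
    use strict;
    use warnings;

    my (%sku_idx, @sku_name);   # dictionary <-> index conversion
    sub sku_index {
        my ($sku) = @_;
        $sku_idx{$sku} //= do { push @sku_name, $sku; $#sku_name };
        return $sku_idx{$sku};
    }

    my @sales;                  # $sales[$sku_index][$day] = running total
    sub add_sale {
        my ($sku, $day, $amount) = @_;
        my $i = sku_index($sku);
        $sales[$i][$day] = ($sales[$i][$day] // 0) + $amount;
    }

    # Example use with made-up data:
    add_sale("SKU-123", 17, 9.99);
    add_sale("SKU-123", 17, 4.50);
    printf "%s, day %d: %.2f\n", $sku_name[0], 17, $sales[0][17];

Integer-indexed arrays are far lighter to allocate and tear down than hash upon hash of string keys, which is where the 10-hours-to-30-minutes difference came from.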
We changed some of the cheap-to-compute reports from being pre-computed to being computed on demand, which further trimmed what needed to run at night. And the infrequent reports were put on a quarterly or yearly schedule so they only ran right before they were needed instead of every night. This improved things even further, and as far as I know, 10 years later, even with huge increases in data volume, they've never had to touch the code or the ancient hardware it runs on.
Seeing these problems in retrospect, it seems ridiculous that racks in a data center, or entire data centers, were ever seriously considered necessary to make them solvable. A single machine's worth of today's hardware is almost embarrassingly powerful. Here's a machine that can break 11 TFLOPS for $1k [1]. That's insane.
It also turns out that most of our problems aren't compute speed: throwing more CPUs at a problem doesn't really improve things, but disk and memory are a problem. Why anybody would think shuttling data over a network to other nodes, where we then exacerbate every I/O problem, would improve things is beyond me. Getting data across a network and into a CPU that's sitting idle 99% of the time is not going to improve your performance.
Analyze your problem, walk through it, figure out where the bottlenecks are and fix those. It's likely you won't have to scale to many machines for most problems.
I'm almost thinking of coming up with a statement, Bane's rule: you don't understand a distributed computing problem until you can get it to fit on a single machine first.