Hi! I'm really excited my post made it to Hacker News, and am glad to discuss it.
Personally, I'm interested in ways to avoid the systemic problem next time around. But people here might benefit more from ways to distinguish the useful bits of Agile methods from the sea of bullshit, and how to defend against the latter.
I'm not sure you can. It's a natural response to people trying to make money selling a process. All processes are susceptible to it.
The only real avoidance technique is to have certification of trainers and organisations, and THEN you're going to run into the prevailing (and mostly accurate) IT wisdom that "Certs are Bullshit".
Excellent point. Most certs are indeed bullshit, but since most certification-creators have a financial interest in more certified people, that's not surprising. It happened in the Agile world with Scrum.
Some certs, though, are actually worthwhile. The Cisco CCIE is very well regarded, and CS degrees from certain schools also have value. Certification works in a lot of other fields, too: e.g., there's a clear division between real doctors and fake doctors, one that goes well beyond the legal enforcement.
So I don't think it's hopeless, even if most of the examples are bunk.
OK, so maybe we need an Agile 'Score', from 10 ("I eat requirements and shit index cards") down to 1 ("We got a guy in with a whiteboard once").
The core team who define a concept get to vet the initial applications of the concept and give it a score. They also indicate who they think "Gets" the concept, and when those people are approved, they're publicly listed as "Getting it".
Similar to the PGP web of trust: a trustee of yours is trusted by me, except with competence in lieu of identity.
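In rough code, the idea might look something like this (a toy Python sketch of the web-of-trust idea; all the names are invented):

    # each person lists who they personally vouch for
    vouches = {
        "core-team": {"alice"},
        "alice": {"bob"},
        "bob": set(),
    }

    def trusted(person, root="core-team"):
        # trust is transitive: follow vouch edges outward from the root
        seen, frontier = set(), {root}
        while frontier:
            current = frontier.pop()
            if current == person:
                return True
            if current in seen:
                continue
            seen.add(current)
            frontier |= vouches.get(current, set())
        return False

    print(trusted("bob"))      # True: core-team -> alice -> bob
    print(trusted("mallory"))  # False: nobody vouches for mallory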
Basically, anyone can say "I give you a 10 out of 10 for hiring me as a coach I MEAN BEING AGILE", but only the opinions of people vetted by the community as a whole are likely to be respected.
If Paul Graham said you were likely to succeed with a startup, that doesn't mean you WILL succeed, nor does it mean that everyone will agree with him, or even that those he doesn't believe in will fail. But if you trust his judgment given a set of parameters, you should be able to trust his recommendation.
Maybe a simple rule: what does the certifier gain or lose from incorrectly certifying this entity? If the answer is "gains money" rather than "loses credibility", don't trust the cert. This conflicts with the idea of a certification as a business, which kinda sucks, because no-one wants to spend their time certifying people and businesses for free.
Maybe the answer is review by a group of peers, freely given and freely used. Everyone wanting a cert throws in a small amount of cash for pizza, you walk the group through your operations, the community takes note of who's there, and then everyone votes.
This is an insightful post, but I think many of the problems with Agile have been apparent since the beginning. A lot of the early Agile proponents went straight into selling snake oil to corporations with deep pockets and dysfunctional cultures.
Who's old enough to remember the "crisis" around Object Oriented? There was a time when the majority of mainstream programmers didn't really know what Object Oriented was, there was a lot of marketing noise confusing the situation, and skeptics declared it all to be snake oil.
I think Objects won in the end, though. Or is it still common for someone to write a big heap of procedural code, use polymorphism once, and then call the whole thing "Object Oriented"?
It seems that the inventor of object-oriented programming was one of those skeptics. Alan Kay apologized for his poor choice of words, for the resulting confusion, and for the widespread misapplication of his ideas.
Unfortunately, many of today's common practices sound a lot like his description of how the term object-oriented is often misapplied. Most programmers that I have met still misunderstand object-oriented programming, and use it as an excuse for complicated or bloated code.
Objects won the popularity battle, to be sure. But I think the jury is still out on whether OO is inherently better, in any sense other than popularity, than approaches that are not built around objects.
There's plenty of anecdotal evidence pointing either way, but unfortunately, true comparative research based on significant real-world projects is always going to be hard to find. Most of us don't write industrial-scale code twice, and even if we did, we couldn't do it with the same team and starting from the same knowledge base in both cases.
I do find it interesting that as other non-trivial abstraction tools have become more widespread, the emphasis on using objects as the tool for every job seems to have diminished substantially, though. When all you've got is objects or plain old procedural code in otherwise much the same context, objects are a potentially useful extra tool in the box. On the other hand, now that we have everything from convenient array/dictionary types in many modern dynamically-typed languages to awesomely powerful and expressive type systems in various functional languages, things are becoming much more interesting again.
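For instance, in Python (a small sketch of my own; the names are invented for illustration), a plain dictionary now covers a lot of ground that once seemed to demand a class:

    # a built-in dictionary: no class required for a simple record
    user = {"name": "Ada", "email": "ada@example.com"}
    print(user["name"])

    # the class-based equivalent adds ceremony without adding behaviour
    class User:
        def __init__(self, name, email):
            self.name = name
            self.email = email

    print(User("Ada", "ada@example.com").name)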
I wonder if functional programming can escape the traps Object Oriented programming fell into? For one thing, the barriers to entry are seemingly higher. How much half-baked functional programming is out there?
> Most of us don't write industrial-scale code twice, and even if we did, we couldn't do it with the same team and starting from the same knowledge base in both cases.
Big companies are pretty short-sighted. Above a certain size threshold, companies should implement much of their critical user-facing software at least twice: essentially, they should be engaging in A/B testing. Why? Isn't this wasteful? I don't think so. In my experience, both inside companies and as a customer, software can affect employee effectiveness by well beyond a factor of two. There is always room for improvement, and the best way to discover places to improve is through experimentation. Not doing this leaves those potential improvements on the table.
Perhaps a level of granularity below that of an entire application would be better.
> I wonder if functional programming can escape the traps Object Oriented programming fell into?
I'm not sure I would agree that OO has "fallen into a trap". I think OO, whether the Kay version or the C++/Java/whatever version, takes a certain approach: essentially, that one entity in any operation is considered "special". This approach can work nicely in some applications, such as a lot of GUI design work, but doesn't fit into all contexts as neatly.
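To make the "one special entity" point concrete, here's a hypothetical Python sketch (Ship and Asteroid are my own invented names, not from any real codebase):

    class Asteroid:
        pass

    # OO single dispatch: the receiver is the "special" entity, and it
    # alone chooses the behaviour for what is really a two-party event
    class Ship:
        def collide_with(self, other):
            print("Ship decides what happens with", type(other).__name__)

    Ship().collide_with(Asteroid())

    # a plain function treats both participants symmetrically instead
    def collide(a, b):
        print("collision between", type(a).__name__, "and", type(b).__name__)

    collide(Ship(), Asteroid())

For GUI widgets, the receiver-centric style fits naturally; for interactions between peers, it can feel arbitrary which object gets to be "self".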
That means sometimes building your design in an OO fashion is helpful, and sometimes it gets in the way. The interesting question is whether, for any given project, the benefits of the helpful side outweigh the hassle of working around the structure when it isn't helpful.
I think in the early days of OO, a lot of people assumed without evidence that the balance was usually on OO's side. Today, as other approaches with other abstraction tools are more widely known and once trendy ideas like Singletons have had their practical weaknesses revealed through experience, many of us wouldn't take the same bet.
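The Singleton case is easy to illustrate (a toy Python sketch, assuming nothing beyond the standard pattern):

    class Logger:
        _instance = None  # the classic Singleton: one hidden, global instance

        def __init__(self):
            self.lines = []

        @classmethod
        def instance(cls):
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

    def process(order):
        # hidden dependency on global state: a test can't easily swap in
        # a fake logger without monkey-patching the class itself
        Logger.instance().lines.append("processing %s" % order)

    process("order-1")
    print(Logger.instance().lines)

Convenient at first, but every caller is now coupled to that one global instance, which is exactly the practical weakness experience revealed.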
I don't know what to think of Singleton being a trendy idea. Then again, I guess screwdrivers were a trendy idea once. (The hardware thing, not the drink, though it applies there as well.)
> I don't know what to think of Singleton being a trendy idea.
I suppose, like Agile processes and OO, Design Patterns introduced some ideas with real merit, but some people got way too hung up on the less meritorious ideas that came along for the ride.
I think the basic problem with Agile (with a capital "A") is that it has always been a marketing term and not a technical one. Thus it has been drowning under the weight of its own hype for so long that it no longer really means anything.
I'm sure a lot of practices that are commonly considered Agile have genuine merit. Indeed, good developers have been using many of these practices since long before anyone started naming them. Equally, bad developers probably won't adopt them whatever they are called.
Other practices that tend to get bundled into various specific Agile processes are, IMNSHO, 100% pure snake oil. After so many years, they have a pretty poor track record; if they were as effective as their advocates claim, and everyone else were doing such horrible things, surely they would be dominating the software development field by now.
I always find it rather ironic that, despite the Agile Manifesto's basic tenet that individuals and interactions are more valuable than processes and tools, we still have consultants and authors pushing processes like XP or Scrum or whatever the next one is. They almost invariably use all-or-nothing arguments: you need every component of their favourite buzzword method or it just doesn't work, and if you don't use them all and your project fails, that must be why.
What would I do differently next time, in the sense of promoting what actually works rather than some diluted buzzword bingo entry? I'd single out individual elements, such as using shorter iterations or emphasizing ongoing communication with a client over formal up-front specifications. I would describe them in plain English, consider their merits and risks individually, and perhaps look at how some of them interact. But above all, I would not bundle up a whole bunch of distinct ideas that can be understood perfectly well in isolation (whether or not they are actually effective) into some artificial super-idea with a black-and-white approach to using it.
And I'd execute anyone who started naming practices, overall processes, or the roles within those processes in a way that would make anyone outside the Kool-Aid drinkers burst out laughing.