As a community we need to get away from [making shit up]. If we don't know, we don't know, and we need to say it.
All too often, when we do say it, managers just get annoyed and demand it be done by an arbitrary deadline; since we don't know how long it will take, we're not in a position to justify why it will take longer than that arbitrary deadline. The decision gets made either by "this is when it's needed" or "make up something so he'll stop pestering us", the date gets etched in stone, the particular scenario is forgotten, and when the date is missed everyone looks bad and gets annoyed.
Managers need to know how long things will take. Fair enough, there's more to most projects than just development, and it all must be scheduled to come together in a sensible workable timeframe.
The problem is it's still a fast-changing discipline, with the unknowns dominating the knowns. The biggest known is construction time: most disciplines have a construction/distribution period long enough that it can cover/buffer for variation in the design phase; in software, the construction/distribution period is nigh unto instant (compilation), so design can't hide behind any other phase. It's all design, usually entailing something significant designers have never dealt with before. It's not, say, a bridge where all the basic materials & mechanisms are well understood, predictable, and have a long construction period which can run parallel to at least some of the design phase.
> The problem is it's still a fast-changing discipline, with the unknowns dominating the knowns. The biggest known is construction time: most disciplines have a construction/distribution period long enough that it can cover/buffer for variation in the design phase; in software, the construction/distribution period is nigh unto instant (compilation), so design can't hide behind any other phase. It's all design, usually entailing something significant designers have never dealt with before.
This: an estimate to build software before development has started (even assuming the business requirements are frozen) isn't equivalent to an estimate to build a building once the design is set. It's equivalent to an estimate of how long it will take to design and build a building once some of the basic objective parameters are set (like how big a footprint it can have, how much office space it needs to provide, etc.) but before the actual design has started.
In this situation I tend to follow the Montgomery Scott Miracle Worker Rule(tm). Come up with a nice, comfortable number in your head, pad it out a little if necessary so it's a nice round number, then multiply by 4.
The only thing that worries me with this technique (as tempting as it is), is that I've seen teams where the work will dramatically slow down because they're trying to "fit" to the deadline. Getting it done as fast as possible and early only leads to more work for more people.
My solution to this: research. When I don't know how long something will take, I go find the basic ingredients, talk to people with experience and get back to management in an hour.
I find it's not necessary to push back like that on every project, and the more discovery about these things you do, the easier it gets.
This is a wonderful overview of the science of estimating, and far be it from me to argue with ToC and the revered Product Development Flow book. I'm a big fan of both.
But that's all horseshit in many contexts.
The problem is not that these ideas are false, it's that variability is so great that in small projects you never separate signal from noise. Over hundreds of user stories? Over months or years? Dozens of teams? Sign me up for flow, Kanban, and the goodness of ToC. Coincidentally, this is why these conversations always end up with some kind of histogram and discussions around long tail tasks.
Small scale, however, the reason I can't tell you when I'm going to be done with making that button the right color is that the danged user can't make his mind up. The biggest part of local variability is not competence as much as the social nature of human interaction. As technologists, we always do exactly what this author does -- we completely ignore the human part of development and pretend it's all just a systems game. We're all little factory robots processing little pieces of stuff at our station, moving it on to the next station.
It's important to remember that these generalizations are only true at the macro level. In fact, for really small teams in a startup environment, it's all social-interaction induced variance. That's probably the one most important thing to know about estimating for many of us. For the few of us who are responsible for macro-level systems, it's still good to know if you're going to understand individual team and developer performance.
I don't think I've ever seen a software project hit a deadline, that includes early delivery as well as late.
I found the constant focus on time estimating in management disciplines incredibly contrived and frustrating. It's an incredibly wrongheaded effort at taking mass-production estimating techniques (where zero creativity is required) and applying them to a creative endeavor. It makes as much sense as asking "how many innovations per hour can the team accomplish?" or "how many paintings can a team of painters crank out per day?".
What I think bothers me the most is how this is treated in management education. At its simplest, you start with a roll of the die, because any estimate is essentially a Wild Ass Guess (WAG). But then, realizing that software deadlines are never met, the management discipline tries to use various structured techniques for estimating, like complex work breakdown structures and critical path analysis blah blah blah, which of course simply use WAGs at a finer level of granularity and do little but generate busy work for the managers. Instead of one WAG, we now have dozens or hundreds of small WAGs.
Of course none of this provides a better estimate of time, but it might include gems like: when aggregating your WAGs from a work breakdown structure to derive a total project estimate, add 20% to each small WAG to build in "slack time", which is another way of saying "we don't really know how long it'll take, so we're going to overestimate as a Cover Your Ass (CYA) maneuver; if we estimate out long enough, all work will get done early!". Brilliant. Modern management practices have now turned from coordinating and organizing complex efforts into a professional CYA endeavor.
I think there's light at the end of the tunnel, most of this nonsense is a result of trying to scope box projects, which makes sense when building a bridge, or making millions of widgets, but makes little sense when building new software. There's obviously a need for setting deadlines, as anybody who's ever managed a "when it's done" software project knows...that only asks for "Duke Nukem Forever" problems. Setting deadlines keeps people on task and motivated.
So the solution? What do your business needs dictate in terms of delivery time? Then timebox to that. If features don't get built by that time, too bad. Software, unlike building a bridge, is a continuous practice. You can build, deliver, and then rev to fix, extend, and amend software that didn't hit the desired scope. It eliminates lots of the CYA and WAG nonsense, and forces people to try to get as much as possible done in the allotted time anyway. It keeps the development team from going into (what I consider) largely unethical extended crunch periods near or after a WAG's arbitrary deadline. They'll simply focus on what they can deliver. It's descriptive of their job instead of prescriptive. It implicitly creates the understanding that full functionality is not the goal for a milestone.
"Build as much as you can by x" makes more sense than "How long will it take to build all of Y?".
> "Build as much as you can by x" makes more sense than "How long will it take to build all of Y?".
As a developer this makes total sense and I love it.
The problem is that the rest of the company that exists outside of the development world (i.e. creating new things), like sales and marketing, would rather the whole company align under the "build all of Y" type of scenario, because it makes their jobs fantastically easier. At the expense of the devs, of course.
As "agile" development practices continue to spread, I am hopeful that this re-framing of development around short development iterations and continuous delivery begins to transform the processes of the "clients" of development teams as well.
In my experience, when sales and marketing hear 'agile' they think it means you can deliver everything in a week. They don't hear 'iterations' or 'a certain amount of functionality' they hear 'completely done, faster'.
You're absolutely right and I have no idea how to best solve this communication problem. The rest of the world is simply trained to scope box. They probably don't even know what their technique is called.
I've found time boxing works much better the deeper in the bowels of an organization you work in.
If you have a receptive customer, try to start with a regular release cycle as part of the overall project management. "During development, we'll do regular releases every x number of weeks. We'll meet every 2 release cycles to see where we are with respect to overall scope vs. the last release(s), spot bottlenecks, or readjust scope and requirements as needed." This gives them lots of input into the process without getting too involved in the day-to-day and lets you timebox. They get regular insight and feedback as the product progresses, and you can emphasize or deemphasize requirements as things go on.
Oy, this again. OP might as well ask, "Why doesn't my buggywhip make my Camry go faster?"
Who cares whether or not anyone can estimate? These days, if you're estimating, you've already lost. Your competitors have already built and deployed something while you're in the endless analysis phase.
Estimating presupposes that you have an accurate spec which presupposes that you're going through some kind of development cycle. And after 40 years, we know that rarely works.
We know that what works better is combining analysis and development: prototyping, quick cycles, and stepwise improvement. It rarely matters how good it is; it only matters that it exists on time and that it's good enough.
I propose an alternative: what we need to do is teach junior devs the meaning of done.
I propose an alternative to OP's alternative: what we need to do is define "done" as "whatever you've got at the deadline".
When you don't know what you want (who does?), it's better to have something on time that you can improve upon than nothing late that wasn't right anyway. You don't need to estimate to do that; you just need to keep building and sharing.
>> These days, if you're estimating, you've already lost.
Well, if you're in services and are pitching to win a project, not estimating will virtually guarantee that you've lost the pitch to someone else.
If a potential client tells you they've got ${insert amount here} to spend and asks you what you think you can deliver to them for that price, "whatever you've got at the deadline" is usually a non-starter.
>If a potential client tells you they've got ${insert amount here} to spend and asks you what you think you can deliver to them for that price, "whatever you've got at the deadline" is usually a non-starter.
But isn't that the core of the agile software development philosophy? You, the client, turn your requirements into (somewhat independent) stories and prioritize them. We, the developers, take those stories and implement them in priority/dependency order. We ensure that the software is working at the end of every sprint (via extensive automated unit and integration tests). This means that you can cut the project off after any $n number of sprints and know that you're getting a level of functionality proportionate to the time that was spent on the project.
Yes, but you need a special client for that. I'm starting to get one to accept sprints, but it's taken 2 years. The rest think sprints are a great excuse for me not to deliver and need another $X000 to finish the work (fair enough, I wouldn't trust a car mechanic/builder/plumber working on this system)
Besides, clients always underestimate the time something takes. Expectation management is an oft-forgotten sibling to estimating. And when you're pitching for work, if others are promising to deliver everything for $Y and I offer a sprint-based system... as someone else commented, I've lost the project by then.
>> This means that you can cut the project off after any $n number of sprints and know that you're getting a level of functionality proportionate to the time that was spent on the project.
Telling them $n sprints can be meaningless -- in practice, things just get pulled out of a sprint and pushed into the backlog, never to get done within $n sprints.
I worked on one project where the client specifically asked for "agile". Meanwhile, their behaviour during the execution phase was that they wanted to see waterfall style results.
It wasn't until the PM finally switched back to a waterfall that the project started seeing meaningful results-- i.e., the "let's move it to the backlog" tactic used by the technical team could no longer take place.
Of note is that this is the classic story of a company that adopts "agile" despite the fact that the culture of its employees did not suit the methodology.
This is great for consumer web/mobile apps, but it doesn't work so well for medical software or avionics.
Don't kid yourself; this is exactly what works for medical software or avionics. I never said we didn't test; I just said this is how we get stuff done: We use stepwise refinement instead of waterfall, so estimating is suddenly much less important.
(For the record, I'm 100% enterprise and 0% consumer web/mobile apps. As we speak, in my other session, I'm building an aerospace prototype I intend to deploy within 48 hours.)
It doesn't work well for management with an MBA background. These guys want to have all the Excel spreadsheet cells filled in upfront. And more importantly, there must be someone to blame for bad estimates in case of any problems.
In a corporate environment it is more important to have your ass well protected than to spend shareholders' money reasonably.
I am always blown away by the fact that we are terrible at estimating and are convinced that it is something that is just not possible, instead of acknowledging it as a skill that we simply have not developed.
Consider, restaurants have to build in an estimate of how much service they expect at a given location. Before it even opens. Once open, they have to keep a steady supply of estimates tracking against how stocked the supplies should be. Do we truly think this is a skill people just have? Or one that is developed and has been refined through the years?
My personal belief is that we are too busy learning to program, such that we never dedicate time to learn to estimate. This works for the stars that can keep pace programming at the top of the field forever. However, as soon as things adopt a maintenance perspective, being able to estimate the amount of work you will have becomes key.
Restaurants are able to identify customer patterns over a set amount of time. With these patterns, they are able to forecast the demand of what kind of food they need to stock. And that being said, my friends who have worked in restaurants say that ingredient inventory management is possibly the hardest thing in a restaurant's operation. So it's not like the restaurant industry has completely figured it out, though they do a lot better than many. I cannot count how many times I've walked into a restaurant hoping to have some specialty dish, only to find out that they've run out of ingredients. It happens on a regular basis to me.
For devs, it's slightly different. The customer patterns may not be consistent over time because devs are not always asked to build the same thing over and over again. Meanwhile, restaurant customers do order the same thing over and over again, so it's a lot easier to confirm the recipe timing and build order (just finished too much SC2) down to an exact science.
Yes, some devs may do nothing but build simple CRUD apps all month, and maybe they can get good at estimating what a simple CRUD app will cost based on their experience. But a lot of devs also work on things that have new and unique requirements for every single project, so the comparison with restaurants is not fair. And nothing against simple CRUD apps. Whatever brings home the bacon brings home the bacon.
I had not meant to imply it was a solved problem in food. More that it was not one that they ignore. As you pointed out, they can be wrong sometimes. Probably more often than folks would realize. (Also, it is not just the daily operation, but again, the estimates in moving into new territory. I don't have numbers, but my hypothesis would be that is the most risky estimation that happens in food. Not the least because most small chains do not have the staff to adequately run the numbers.)
And, yes, it is hard. That is part of my point. It is hard, but they address it and give it an effort. With luck, they make progress at getting better.
Contrast with devs. We spend a lot of time blaming others for why it is hard. (Requirements change, etc.) Ironically, this is something that day to day people have to deal with and adjust for accordingly. Based entirely on the historical perspective of dealing with similar items. That is, all of these are metrics that are trackable and reportable. With luck, they can be anticipated through estimation.
> I cannot count how many times I've walked into a restaurant hoping to have some specialty dish, only to find out that they've run out of ingredients. It happens on a regular basis to me.
Sometimes that's actually expected. Some places allow a shortage to make the special scarce. Some places simply can't get ahold of enough of a key ingredient.
Using three estimates can help. Create optimistic (O), most likely (M), and pessimistic (P) estimates for each significant development task. Plug the values into the Program Evaluation and Review Technique formula (PERT, pioneered during the US Navy's Polaris submarine missile program in the 1950s) to get a reasonable estimate: E = (O + 4M + P) / 6.
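As a minimal sketch of the three-point approach (the task names and numbers here are invented, purely for illustration), the PERT expected value weights the likely estimate, and (P − O) / 6 gives a rough standard deviation for each task:

```python
def pert_estimate(optimistic, likely, pessimistic):
    """Weighted PERT (three-point) estimate of a task's duration."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6  # rough spread of the estimate
    return expected, std_dev

# Hypothetical tasks, estimated in days: (optimistic, likely, pessimistic)
tasks = {
    "auth service": (2, 4, 10),
    "payment flow": (5, 8, 20),
    "admin UI":     (1, 2, 5),
}

for name, t in tasks.items():
    e, sd = pert_estimate(*t)
    print(f"{name}: {e:.1f} days (+/- {sd:.1f})")

total = sum(pert_estimate(*t)[0] for t in tasks.values())
print(f"project total: {total:.1f} days")
```

The weighting pulls the answer toward the most likely case while still letting a scary pessimistic number widen it, which is exactly the point: the estimate carries its own uncertainty instead of pretending to be a single hard date.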
I think it is best to do incremental estimates. "How long until you have the service deployed to staging with a single 'ping' operation?". When that step is complete: "Ok, now how long to get operation Foo working end-to-end?". This requires trust along the way that people aren't wasting time, but estimating a big project can really screw you. If the estimate comes back for 6 months, and it could have been done in 3, then the project will still take 6 months.
If devs are estimating in hours, days or some other measure of time, the game is already over. As a species, we're pretty terrible at estimating absolutes. But we're pretty good at relative comparisons. If I hold up 2 buckets, 1 small and 1 large, and ask you their precise capacity in liters, quarts, etc., most guesses would be off, many quite a bit. If instead I asked you approx how much larger the large bucket was compared to the small bucket - 2x, 3x, 4x, etc. - the majority of the guesses would be pretty accurate.
The better option when estimating is using a system that builds on relative comparisons between work items. We use "points" on a sliding scale. Not only does estimating get much more accurate - "We think that enhancement is about 2x as big as the one we just finished", but the estimation process is quicker.
What brings the point estimation system back to calendar-time measures is mapping it to how many of those points of work are typically accomplished in a fixed amount of time - basically typical throughput. Knowing that throughput number, it's a quick calc to figure out how long it will take to complete the work. The throughput number also takes into account all of the little things that go on during the day that no one ever remembers to account for - restroom breaks, emails, phone calls, chit-chats with co-workers, etc.
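That "quick calc" can be sketched in a few lines (the iteration history and backlog size below are made-up numbers, just to show the shape of it): average the points actually completed per iteration, then divide the remaining backlog by that throughput.

```python
# Points actually completed in recent iterations (hypothetical history).
completed_per_iteration = [21, 18, 24, 19, 23]

# Remaining backlog, estimated in relative points.
backlog_points = 130

throughput = sum(completed_per_iteration) / len(completed_per_iteration)
iterations_left = backlog_points / throughput

print(f"average throughput: {throughput:.1f} points/iteration")
print(f"estimated iterations remaining: {iterations_left:.1f}")
```

Because the throughput is measured from what the team actually finished, the interruptions and overhead are baked in automatically - nobody has to remember to estimate them.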
I've never worked on a project that estimated things with the absolute (days, weeks, months, years) scale that was accurate and didn't require death-marches at the end. Conversely, I haven't worked on a relatively estimated project (points) that has been late or required anything like a death march to complete.
The difficulty of accurate time estimates is not limited to developers. Anyone who's ever ordered food for delivery intuitively knows this, but many people in our industry fail to make that connection. It's the same problem.
The cause of the problem is that you make estimates by running a mental simulation, and no matter how complex that simulation is, the longer the timeline and the bigger the scale of the project, the more entropy reality will introduce that your simulation did not account for.
Rather than embracing the delusion that you can give accurate estimates -- or driving yourself insane trying to hit deadlines you created with incomplete information -- developers should follow the lead of other professions and, when we give estimates, make it clear that it is an estimate.
House contractors don't have this problem. Plumbers don't have this problem, and in 2013 even pizza drivers no longer have this problem. Why are we still pulling our hair out trying to give perfect time estimates and follow through on them as though they are gospel?
Here are my rules of time estimates:
1) Under promise, over deliver. Be conservative in your estimates (that means expect it to take longer than you initially think.)
2) Make it clear to whoever this estimate is for that it is only an estimate, and that it is only good for the current parameters of the project. As problems come up, and when the parameters change, the estimate will become insufficient.
3) Don't treat them like gospel. If you do a good job of setting up expectations, sometimes you will finish ahead of "schedule", and unfortunately no matter your best efforts, sometimes you will finish behind. That's OK. This is reality, not a simulation. Things aren't perfect. If you're working with someone who says it's not ok, stop working with that person in the future.
If you say something will take an hour, you're almost guaranteed to be wrong; at best you might fall somewhere near that, but you're almost certainly not going to take exactly an hour.
I think that estimating in best-case/worst-case ranges is useful for two reasons. First, it lets you convey uncertainty. Instead of just doubling a number you made up, you can let people know where you're very certain (narrow range) and where you're totally guessing (wide range). The second valuable thing about ranges is that you can use them to make estimates about projects as a whole. If a project contains a lot of uncertainty, its results are likely to be uncertain.
We use this technique at LiquidPlanner to generate more realistic schedules, which is pretty damn cool.
The second best thing to do when estimating is update your estimates whenever they change. If you learn something that is going to affect a ship date, speak up!
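One way to roll per-task ranges up into a project-level range - a sketch only, and it assumes the tasks' uncertainties are roughly independent, which is often not quite true in practice - is to sum the midpoints but root-sum-square the half-widths, since independent tasks are unlikely to all hit their worst case at once:

```python
import math

# Hypothetical tasks with (best_case, worst_case) estimates in days.
# A wide range signals "totally guessing"; a narrow one, "very certain".
tasks = [(2, 4), (3, 9), (1, 2), (5, 15)]

midpoint = sum((lo + hi) / 2 for lo, hi in tasks)

# Root-sum-square the half-widths: assuming independence, the combined
# uncertainty is narrower than a straight sum of worst cases.
half_width = math.sqrt(sum(((hi - lo) / 2) ** 2 for lo, hi in tasks))

print(f"project estimate: {midpoint:.1f} +/- {half_width:.1f} days")
```

Note how the combined range is tighter than naively adding every worst case (31 days here): that is the statistical payoff of estimating in ranges rather than single numbers.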
In my experience the only thing that improves estimates is being radical about breaking tasks up into small, well defined pieces. If my estimate is 8 hours I'm thinking a workday, which might mean 3 days or more. Avoid the 8 hours thermocline :-)
This has a cost though - most of the big tasks can only be broken up semi-honestly (as opposed to a totally made up division into subtasks) after a significant initial part is done, perhaps to be thrown away. This means they cannot really be estimated, go away, it's done when it's done. But if the iterations are reasonably short then at least the management can watch the estimates grow exponentially in real time. Maybe it sounds pessimistic but at least it minimizes the incentives to make fantasy estimates.
I am simply horrible at estimating. I get paid to work on difficult (at least to me) problems; things I haven't done before. A typical project involves trying 9 things that don't work before I find the 10th that does. There is no way I can estimate that sort of thing.
Thank goodness I'm pretty good at managing (i.e. reducing) expectations. When pressed for impossible-to-estimate estimate I fall back to giving a range: "2 weeks to 2 years"
Every time I have given a "I do not know, I have never done anything like this before, it will require research, it may take a few days or it may take a few weeks" the response from the manager asking has always been:
"So can you do it by X". Where X seems to be totally arbitrary from a dev perspective.
I have never had a manager say "Ok, get to it and keep me informed".
If a developer put in enough time to correctly estimate every quote, he would spend more time estimating than working. I tend to be too optimistic in my estimates, but at least it's consistent, such that "double it and add a bit" turns my optimistic estimates into accurate estimates.
I have always found it interesting how people think things like this are unique to their line of work. Estimating is difficult in any non-trivial operation. Think of roads, buildings, bridges... almost any new product.
Estimating construction projects is much easier than software.
Roads? In any developed country, they are a known quantity. Their cost and time can probably be estimated to within a small delta, calculated by the foot (or metre). I think every single street in my city is built pretty much the same way.
Buildings? Ask a developer in a new subdivision how long House Model B will take to put up after they've already built 5 of 'em. A new office tower? We've built a few of those.
Bridges? Choose one of the three designs. It's not like you're going to bother trying to invent a new kind.
Your particular piece of software? Unfortunately, still much more like a unique little masterpiece than a construction project, and much more difficult to forecast.
And yet construction projects are notorious for going over budget... for small off-the-shelf construction projects (2 miles of road for example) it's feasible, but a new motorway?
And if the weather's bad, the contractor will be claiming for more down days in that 2 miles of road than expected - so there is still an expensive variable.
Off the top of my head, two reasons construction projects are notorious for going over budget would be because of labour (union) costs, not to mention corruption (cf. Quebec's Charbonneau Commission).
Why single out weather? Lots of human activities are impacted by weather. What's the analogy to software?
> Why single out weather? Lots of human activities are impacted by weather. What's the analogy to software?
To point out there's still a large variable in the construction projects listed, and it's not as cut and dried as you made out.
Here in the UK (land of rain) the weather is a major source of extra costs in small (under £1M) projects. Source: chief surveyor of a county council's highway department.
The contracts are well negotiated by the construction firm (and I understand it's an industry-standard clause, so you can't just hire a different firm), so if work can't be done on your project due to inclement weather, they will put in a claim for their labour costs for those days, plus hire of plant that's on-site. AFAIK they won't get the full cost, but they're good at putting in frequent claims :)
1) Scope bloat can come from all sides. I've sat in meetings with eager-to-please devs who insisted on adding the gizmo flavor of the month ("hey, how about a Facebook button so customers can share what they bought with their friends!") to the scope of a project just to make themselves look good to their bosses, only to lament the enormity of what they had (promised) to do come crunch time.
2) Too often, there is a lack of communication between clients, devs, and project managers. If the project manager is 1) unfamiliar with development (happens a lot, actually) and 2) unable to correctly walk a client through every nook and cranny of their requirements, then it creates a situation where the client can have expectations in mind that are at odds with the requirements the devs are given.
A great example would be a 'simple' payment system as part of a larger e-commerce site. The client says they have store gift cards, credit cards, and PayPal. Perfect. The dev builds the payment system for the client, and, upon demo, the client sees that there's no option for tax-exempt purchases. "Well, 75% of our business comes from non-profits!" "Amazon lets you do it, it shouldn't be that hard!"
Then, the client notices that nowhere can you process refunds. "Refunds weren't in the original scope," the dev says. "Well, great, but now we can't use this product at all," says the client. Who is at fault? In the client's mind, they're so used to using websites with built in refund and tax-exempt options that it never crossed their mind to include it in their requirements walkthrough with the project manager. On the dev end, devs are very "to the letter" when it comes to requirements. They see credit card, PayPal, and gift card, and implement all three and call it a day.
In my mind, when the requirements bloat, it's a failure on the part of the project manager, whose job is to make sure that nothing gets lost in translation and to make perfectly clear the process for change management. It's a blessing when a client is tech-savvy enough to understand that you cannot miss a single detail.
You've left out one role: the developers, including the project manager, being familiar enough with the domain to ask "Would you like fries with that?" ^_^. In general, the model of developers as being "very 'to the letter'" without such a feedback loop is to be avoided, one of many reasons the Waterfall model is very poor.
Your examples are good ones: it's not unreasonable to expect them to know about refunds, but non-profits are a less common case. B-to-B is often sales-tax free, though, when you're buying something to resell (that's where I'm familiar with it, from working for a VAR where I was often the one adding serious value).