One of the best things you can do in your organization is create a culture that treats obstacle discovery as forward progress. There is nothing more disheartening for an engineer than banging her head against the wall trying to debug some race condition, only to get flak from management about why it isn't fixed and why it's holding up some milestone. The root of this problem is management that doesn't understand the problem domain and is suspicious about whether the engineers are really working diligently. The proper response to obstacle discovery is to treat it as forward progress. Firing off an email to a superior saying "I discovered an unknown race condition, let's see what we need to do to fix it" should be treated no differently than "I finished this feature." The presumption should be that the engineer used due diligence and the situation was unforeseen. Now, that doesn't preclude separate improvements to diligence procedures to provide better foresight in the future, but under no circumstances should anyone be blamed for discovering obstacles or defects and raising them. When your culture blames people for raising them, all you'll get is shitty product.
Just to play devil's advocate, there is also the type of person who tends to get stuck down rabbit holes more than they should. There's a skill not just to solving problems but to avoiding them in the first place. Usually this requires not just engineering talent but the ability to see the bigger picture, in terms of product and business goals. If someone is banging their head against the wall, it may be because they are trying to solve a necessary, difficult problem and are discovering valuable knowledge, or it could be because they are foolishly running into a wall that they could easily go around if they stepped back for a second and considered what they had set out to do in the first place.
In your race condition example, it would be the difference between an engineer debugging a race condition in mission-critical code, and debugging one that is incidental complexity in a unit test or ancillary tool that could be refactored to be single-threaded, or just omitted altogether. Heck, even in the mission-critical case, it's worth considering how many users it affects, how it affects them, and at every moment considering whether the time spent so far on the problem (and its associated opportunity cost) is still justified. (Taking into account the fixed costs of switching context into the problem again.) It's very easy to get wrapped up in an interesting problem and forget about the eject button.
This is where the "The presumption should be that the engineer used due diligence and the situation was unforeseen" clause comes into play.
I've seen many folks in management just always assume incompetence or rabbit-holing on the part of engineers. It's good to ask clarifying questions. But as a manager you should be aware that adopting a style of second-guessing and interrogating everything your reports do comes off as insulting - after all, you likely hired them, so why don't you trust them?
That being said, there are no hard-and-fast rules here, and being aware of the different histories of different folks is important. But defaulting to a critical attitude is likely to lead to all kinds of efforts to hide things from you, just to avoid questioning, feeling insulted, having their time wasted, etc.
> it could be because they are foolishing running into a wall that they could easily go around if they stepped back for a second and considered what they had set out to do in the first place.
Note to self: remember this whenever you feel stuck, and look at the problem in perspective.
My experience tells me this is one of the most important pieces of guidance a technical/engineering manager must be able to provide to his team. I'm not sure how hard or easy it is to provide, or whether it even makes sense from the manager/lead's perspective, but I have found that whenever my manager asks me to speed up, I run into these problems and lose the ability to judge whether I'm running against a wall or not.
This is not incompatible with treating obstacle discovery as forward progress. Often the outcome of discovering an obstacle is making a decision at a higher level to go around that obstacle, and either changing the product concept or the architecture so that it's a non-issue. You can't make that decision unless you have the information that what you're trying to do is difficult or impossible.
And that's why management needs to be responsible both for instilling this attitude in themselves and in the developer leads who oversee the developers. And it is the developer lead's responsibility to ensure that their developers aren't going down too many rabbit holes unnecessarily, specifically because that's too technical a decision for management to make.
If, by default, you assume the worst in your employees, they will eventually ALL start to behave in the worst ways. Why? Because every time they're honest with you, they get screwed when you assume they're just trying to get the better of you. So they'll eventually either be fired, or learn to lie to you.
When your manager is annoyed and replies "oh man, how much will I have to push back the next release deadline for your bug?!" even though you just discovered it and didn't necessarily cause it, it's much more disheartening than something like
"Good! Fix it! I'm giving you two weeks until the next release (instead of releasing tomorrow) and if it's still an issue we'll get you some backup."
Race conditions can in some cases break the entire product or render some of its results useless (depending on scale and location of the bug, naturally). I definitely think in most cases they shouldn't be taken lightly.
> even though you just discovered it and didn't necessarily cause it
Whether you caused it or not is irrelevant. The only issue on the table when confronted with a defect should be: 1) how do we fix it so the customer does not get shitty product; 2) how do we improve the process so that such defects are not introduced in the first place.
If there are people who do careless work and consistently introduce defects, that's a wholly separate issue to be handled through separate channels. If you mix the engineering issue with the HR issue, you will create a culture where people would rather let defects get to the customer than get in trouble for raising them internally.
The problem with many companies is that they're so focused on profits that they disregard other areas. Their inflexibility can reach ridiculous levels, and they will hear only what they want to hear from development philosophies like "Release Early, Release Often".
But I completely agree that a shitty culture leads to a shitty product.
Angry managers are those who are not doing their job. I find that the effectiveness of a manager is inversely related to just how angry they get. Usually, the very angry manager is incompetent - as in, cannot even do his own juniors' jobs. This is typical. But when a team decides to support its own organizational structure, and thus supports the manager in doing his/her job, then things do work.
It's just that people are oftentimes so bad at communicating with each other. Anger rarely helps there, either.
If our colleague runs into an unexpected issue and takes time to solve it, we pat him on the back for it. However, it's important that the lessons learned be shared; they should be tangible enough to be reapplied if we run into a similar situation in the future.
I've been developing for 30 years. When asked for an estimate I spend time thinking about it and then when I come up with a number I multiply it by 3. Usually this works but sometimes I can be off. My typical process is to score stuff from "totally get it" to "don't have it all in my head". I then guess on hours and in the end multiply by this 3 factor.
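As a toy sketch, the score-then-fudge heuristic above might look like this (the task names, hour guesses, and confidence labels are all invented for illustration; nothing here is from an actual estimating tool):

```python
# Sketch of the "score each task, guess hours, multiply by 3" heuristic.
# All task names, scores, and hours below are made-up example values.

FUDGE_FACTOR = 3  # empirical multiplier covering communication, surprises, rework

def estimate(tasks):
    """Sum the raw hour guesses, then apply the fudge factor."""
    raw = sum(hours for _, hours in tasks)
    return raw * FUDGE_FACTOR

tasks = [
    ("login form", 4),     # "totally get it"
    ("report export", 8),  # somewhere in the middle
    ("sync engine", 16),   # "don't have it all in my head"
]

print(estimate(tasks))  # 28 raw hours -> 84 quoted hours
```

The point of the multiplier is that the raw guess only covers coding time; everything else (meetings, rework, the unknowns) lands in the remaining two-thirds.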
When my clients complain about the estimate I often say we "write software":
1) The way the client thought it should be written
2) Again the way the developer thought it should be written
3) And finally the way it should have been written in the first place
I don't remember where I picked this up; it was eons ago, but the axiom seems to work.
When my teams give me estimates I almost always fudge by this 3x factor when working with my stakeholders and explain how hard it is to really estimate development without wasting tons of time doing massive waterfall charts. Then they ask for the chart and I laugh "but then it'll be wrong two seconds after we publish it."
In all my years of coding there have been "wow that went fast" times on projects and "oh man I'm dying here" and the more room I leave to reduce stress in the "dying" phase the easier it is to break free and get back to "fast" mode.
When I'm writing a contract I usually put this in the milestones I'm paid by. E.g. one week for feature 1 delivery, next week for client requested adjustments to it, next week for feature 2 delivery, week after for client requested adjustments, etc..
Generally I only pick something I could finish in half a week for the 1 week milestone too, so there is plenty of time if it turns out tougher than I thought.
This is also a good way to communicate that you won't tweak a feature every week for months after without adding another milestone (and more pay) to compensate, something every client always asks for in the end, because you don't really learn anything until a feature is put in front of an end user.
I multiply by three. My justification for this is that coders think only of coding time, and that the rest of the time is taken up with communication etc.
I tested this against an expert management estimate once. The difference was less than ten hours over several months.
But you risk your manager (if you have one) telling you you are "sandbagging." And you might see others on your team tell your manager "it's no big deal - an hour or so of work". And they slap some crap in there that whacks the mole down long enough for the manager to make a mental note to give your team mate a bigger raise than you.
I agree. This can be a problem. Even if they don't slap some crap together, they may in the end take just as long (or longer) than your estimate, but by that point nobody remembers who estimated what. Just that you were the pessimistic one who wasn't confident, and they were the eager go-getter who solved the hard problem.
If you can't have a good relationship with your stakeholders, then you should find new stakeholders rather than sweat what someone else is willing to do to get ahead.
In the end, if you're reasonable and can show you've hit the nail on the head often enough to be trusted, then you won't have conversations about sandbagging.
It's exactly the same thing I have been doing, and it's a common project management practice. If you translate this practice into the terms of the original article, it's a kind of doublethink: you know the time it's going to take you, but you also know that you need to multiply it by 3 for it to be correct.
Every project has 'gotchas'. And I don't mean just every coding project, I mean every project anywhere, ever.
Part of the key of navigating those 'gotchas' is to immediately communicate it to your stakeholders when you run into one.
Sit down and watch HGTV for an hour or two, and you'll see that carpenters, plumbers, electricians, and HVAC contractors all run into problems -
'Whoops, this wall is load bearing, we'll need a beam here and it's going to take us another 3 days to add it.'
'Your main drain is made out of clay, and we'll need to spend 25% of your budget to replace it, looks like you won't be getting that powder room after all'
'The main of your HVAC stack is right where you want to put that doorway, we'll have to spend another $5k on this project to re-route it.'
People that typically finance projects are used to hearing these kinds of problems. Don't assume they won't like hearing them and will fire you - that's a huge mistake. Be honest and upfront with them. Problems like these are not your fault, just be clear on what it's going to take to fix them.
The difference between programming and construction projects is that property owners have usually already invested huge amounts of money into the property, so an extra few thousand doesn't seem like a huge deal and the extra work is often financed as part of the mortgage. Construction contractors also don't promise "magic" like programmers do.
No I would rather argue that the difference is that software is not visible / tactile. When a construction project runs into a problem, you show the physical problem to the client and he gets it.
When a software project runs into trouble, the client has to trust what the developer tells him and immediately the client starts to doubt the developers' competence.
One of the biggest mistakes I see developers (and development companies) make is to define the edges of their project too narrowly.
Almost any piece of contract software development is intended to fit into a much larger system that has already seen heavy investment. Yet, often developers treat the software as an independent "thing" that they are creating.
For example, redesigning/rebuilding a corporate website is not (just) a web development project. The corporate site is one (often comparatively small) component of an operation that might include press relations, investor relations, social media, advertising, partner relationships, retail relationships, supplier relationships, etc.
Looking holistically at what a company invests in their corporate identity and marketing does two things. First, it provides very valuable guidance on the web site project itself. The new site is going to have to fit in with all these other activities. That's a set of very useful constraints.
Second, it puts the website project in its proper perspective. The company is not "replacing a building." They're replacing one component of a large multi-million-dollar marketing operation.
80% of the projects I've worked on already had code written for them when I showed up to work on them. There's no difference between the existing investment in legacy code and the existing investment in a property.
New home builds also have gotchas.
Programmers promising "magic" goes back to my original point of communicating with the stakeholders. If you're promising "magic", you're just asking for the project to fail. Be clear and upfront, and your customer will realize you're an engineer and not a miracle worker.
Property investment is a little different because you can usually recoup at least some of the money spent at a later date by selling the property. A building without a wood rot problem will always be worth more than one with one, especially when the problem is likely to get worse over time if it's not dealt with.
A software project, on the other hand, is typically worth close to nothing if it doesn't fulfill its requirements, so the sunk costs are different.
To the second point, the problem with programming projects is that you are often competing against more enthusiastic and less experienced devs who are naive and underestimate.
I spent (I think) about two years walking by a sign in the south loop that said something like "coming in October of 2005, Target" and I thought, well, that's a pretty bold estimate for completing a building. Damn if that isn't exactly when they finished. Maybe they were changing the sign periodically and I failed to notice.
To estimate the time required to design your system you need to estimate the variance of the time required. A well-known result is that the variance of a sum of independent variables associated with independent problems is the sum of the variances of the subproblems.
What does this mean in practice? It means that you should try to decompose your problem into independent and dependent subproblems (perhaps associated with features). For independent subproblems you simply sum their variances; the formula for dependent subproblems depends on the correlation between them.
What I am trying to say is that if you know a little math and have an intuitive knowledge of the structure of the problem divided into subproblems, you could make a much better estimate of the variance (and this means there is much less uncertainty in the delivery time). One could design a program that constructs a graph and a knowledge base from previous programs you have designed, with the graph edges weighted according to the correlation between those components.
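For concreteness, here's a minimal sketch of the variance arithmetic being described (all numbers are invented; `rho` stands for the correlation coefficient between two subtasks):

```python
import math

# For independent subtasks: Var(sum) = sum of variances, so the standard
# deviation of the total grows like sqrt(n), not n.
# For two correlated subtasks: Var(X + Y) = Var(X) + Var(Y) + 2*rho*sd(X)*sd(Y).

def total_sd_independent(sds):
    """Standard deviation of a sum of independent estimates."""
    return math.sqrt(sum(sd ** 2 for sd in sds))

def total_var_correlated(sd_x, sd_y, rho):
    """Variance of X + Y when X and Y have correlation rho."""
    return sd_x ** 2 + sd_y ** 2 + 2 * rho * sd_x * sd_y

# Four independent subtasks, each with a 2-day standard deviation:
print(total_sd_independent([2, 2, 2, 2]))  # 4.0 days, not 8

# Two 2-day-sd subtasks that are perfectly correlated (rho = 1):
print(total_var_correlated(2, 2, 1.0))     # 16.0 -> a 4-day standard deviation
```

This is why decomposition into genuinely independent pieces helps: the uncertainties partially cancel, whereas correlated slips (the same bad assumption hitting several features) add in full.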
Now that I am writing this, it seems very likely that someone has implemented such a scheme to estimate delivery time. Perhaps there are start-ups using this system to estimate delivery time, since all this is basic math.
This is why I've learned to always under promise and (try to) over deliver. Whenever I think a project may take me 4 hours, I tell them 3 days. 1 week = 1 month, and so forth.
Another valuable technique is the Pomodoro one, dividing one's work into intense 25-minute chunks of time. One then starts to think about work in terms of number of Pomodoros, giving oneself a better idea of quantities of work. There are other productivity and health benefits as well.
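The conversion from a raw estimate into Pomodoros is simple arithmetic; a small sketch (the 25-minute chunk length is the standard Pomodoro, the example task duration is invented):

```python
import math

# Express a time estimate in Pomodoros: intense 25-minute focused chunks.
POMODORO_MINUTES = 25

def pomodoros_needed(estimate_minutes):
    """Round up, since a partial chunk still costs a full Pomodoro of focus."""
    return math.ceil(estimate_minutes / POMODORO_MINUTES)

print(pomodoros_needed(4 * 60))  # a "4-hour" task is 10 Pomodoros
```

Counting in discrete chunks also makes it obvious when a task quietly eats twice the Pomodoros you allotted, which is exactly the feedback an estimator needs.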
It's an excellent article; at least three different times I caught myself nodding my head in agreement. I really liked, "Congratulations, you’ve just invented Waterfall."
Why does programming have this problem? No unions.
Really. Compare film scheduling and construction scheduling, which are well developed disciplines. That's because, in those industries, there's paid overtime, and at rates higher than straight time. Thus, "crunches" caused by overoptimistic scheduling add labor costs, which come right out of the company's profit.
This forces scheduling discipline on management. There's a tendency to overestimate, rather than underestimate.
It is possible to become better calibrated for long-term predictions. This takes a longer amount of time, but is still doable. It's an intuitive process, and the way I do it relies on discovering the true probabilities associated with various feelings of confidence I have in a given domain.
The best tool I've yet found to do this is PredictionBook.com. It could use things like tags support (so you could see your accuracy for sports separately from your accuracy for development), but it's still useful overall. I've gotten much better at being accurate, especially for 90% predictions, which is a useful benchmark.
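The calibration idea behind such tools can be sketched in a few lines: record (stated confidence, outcome) pairs, then compare each confidence bucket's stated probability with its actual hit rate. The history data below is invented for illustration:

```python
from collections import defaultdict

def calibration(predictions):
    """Map each stated confidence level to the observed fraction that came true."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[confidence].append(came_true)
    return {c: sum(hits) / len(hits) for c, hits in buckets.items()}

history = [
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),  # 75% hit rate at "90%"
    (0.6, True), (0.6, False),                             # 50% hit rate at "60%"
]

print(calibration(history))  # {0.9: 0.75, 0.6: 0.5}
```

A well-calibrated predictor's buckets match their labels (things you call "90%" come true about 90% of the time); the gap between label and hit rate is what deliberate practice on a tool like this shrinks.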
The old maxim is something like 10 lines per day on average (of fully "developed" and tested code).
90% of time should be spent on studying textbooks, reading other people's code (only the best authors, like Joe Armstrong or Simon Marlow or Rich Hickey, you know), [wishful] thinking, visualizing, drawing diagrams (using a pen and paper, no UML and shit), writing pseudo-code (like they do in AIMA), [unit]tests (before code!) and then just 10% of time for coding, when you know what exactly you are doing and why.
Interestingly, I seem to have the inverse problem. People telling me how easy it is.
I don't do web dev as a matter of course these days (I did nothing but, back in the day), but once in a while I help a friend out with their portfolio of sites. Usually easy updates. But the conversation almost inevitably starts out with, "Do you think you could take care of it? It should take a few hours".
> “We should just be more careful at the specification stage”. But this turns out to fail, badly. Why? The core reason is that, as you can see from the examples above, if you were to write a specification in such detail that it would capture those issues, you’d be writing the software.
Not true; a specification can precisely describe the properties of the solution of a (sub-)problem, rather than the process of solving it.
You must have much smarter users reading your requirements than I do.
But seriously, though, YMMV, and the mileage on this one is all too often very low. If the requirements are detailed enough to be accurate, the users don't understand them, or simply agree to blatant inaccuracies.
Iterative development really is The Way, but The Management simply doesn't want to hear it. "What part of 'Loser' didn't you understand?" said the Sociopath (as long as hiding requirements = free extra labor).
The closer the abstractions in the coding language are to those in the specification language, the closer the spec resembles the code. If the two abstract in different ways, then it doesn't.
I usually give "them" a shocker estimate, and my rationale is pretty simple - unknowns. And for the most part my estimates are usually right. If asked to trim it, I simply state that OK, I'll deliver what may not be the right product but then it will become a matter of maintenance instead of development. Your choice.
One thing I didn't see mentioned is moving goalposts. If you think something is easy, you're more likely to accept adding some bells and whistles. So it will take at least the time you set aside for it. Or more.
I read this as a break from writing a functional spec. And I've already read Thinking, Fast and Slow. And I know that I'm going to be wrong. And I know that no matter what I say to the client, they will say "OK" and then wonder why I missed the estimate. And I also know that this is what normal looks like, and that when I'm in the middle of it and it seems as though it's a spaghetti diagram of dependencies, it will work itself out so long as I keep chipping away at it. This is a strange profession - I wouldn't build a bridge like this...
The inaccuracy of your plan grows with the number of things you planned in the first place. It's a lot more important to focus on a goal and react according to what the journey throws at you.
One unaddressed problem with estimates is: if the developers finish before the project is due according to the estimate, there is no motivation to start doing new work.
While it only happens on a rare occasion, whenever I finish my projects early I'm always excited to show the client/management that we've been able to make better progress. However, I'm extremely cautious not to let their excitement alter their perspective about future estimates being completed early.
Another issue is that often clients/management will jump the gun into the next phase after an early completion without allowing the devs a much needed break. Perhaps this is one of the fears that keeps devs silent when (if ever) they finish early, or has them stretching their work till the end of the original estimate.
Either get better developers, or get better management. Usually the second, because if your management doesn't know how to reward your devs for good work then they are going to cause more problems than just this one.
Asking for an estimate is often a power play or "microaggression". It's more often about showing dominance than any legitimate business need to know this unknowable quantity. It plays a programmer's present self against her future self.
Her future self wants an accurate or even pessimistic estimate to be made, so there are no unpleasant surprises to management and painful conversations resulting from inflated expectations. But her present self wants the guy standing at her desk to go away so she can get back to work. An unreasonably optimistic estimate gives the powerful, annoying people what they want so they go away. A realistic estimate is going to lead to obnoxious follow-on questions. "Why's it going to take so long?" "I don't know, but experience leads me to think..." "But you see no specific reason why this can't be done in 2 weeks?" "Well, no, because there are unknown unknowns..." "Great! Two weeks!"
This degenerates for two reasons. First, managers tend to believe that, even if overly optimistic estimates are more inaccurate, the work is done more quickly with unreasonable estimates (which become "milestones", then "deadlines"). So they take this as an incentive to be more irritating because it "makes people work faster". Second, it leads to poor planning and brittle schedules and much more undesirable variance.
Obviously, the good software managers don't play these games, but they're probably 1 in 10. Because there is so much money in software, and because programmers refuse to organize and thus allow themselves to be underpaid, there is lots of room for mismanagement. Developers are also to blame for some of this; because they've been in mismanaged environments for years, many of them have developed a distrust of management that has led them to miscommunicate and (inappropriately) simplify, hence the "estimates" that evolve into deadlines. Overconfidence and miscommunication are, for sure, substantial components.
Sometimes there are real deadlines, such as for regulatory changes.
But, yeah, often it's just bullying to drive a bad bargain. Most management HATES iterative development, because it's too honest.
One of my favorite Ed Yourdon quotes: "Vote with your feet" (from "Decline and Fall of the American Programmer", which scared the hell out of me - rightly so in some cases; after working with offshore development, that scared the hell out of me too).
When management has no development experience, it's even more impossible. At pretty much every place I've ever worked, features get added and the scope changes many times before the project is finished. When it's not finished on time, management likes to blame the developers and can't figure out why nothing is finished on time.