
Nobody should make estimates. They're always wrong.

Do your best to break tasks up so that all tasks are the same size. Then work on tasks. You'll find a stable average of tasks per amount of time and that will let you forecast how long things will take, how much they'll cost, etc.

That's how you figure out when things will be done.
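
A minimal sketch of that throughput-style forecast, assuming the tasks really are roughly the same size; all numbers here are made up for illustration:

    from datetime import date, timedelta

    # Hypothetical history: same-sized tasks finished in each of the last few weeks.
    tasks_finished_per_week = [4, 6, 5, 5, 4, 6]

    # The "stable average" of tasks per amount of time.
    throughput = sum(tasks_finished_per_week) / len(tasks_finished_per_week)

    # Forecast: remaining tasks divided by throughput gives weeks to completion.
    remaining_tasks = 37
    weeks_remaining = remaining_tasks / throughput
    forecast_date = date.today() + timedelta(weeks=weeks_remaining)

    print(f"~{weeks_remaining:.1f} weeks, i.e. around {forecast_date}")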



That is bull.

Some tasks have a high degree of uncertainty. Others don't.

Back when I was using a Ruby-on-Rails-style framework (in PHP), I could frequently estimate 20 hours of work accurately to within 15 minutes when it came to adding simple features to a web application.

If on the other hand you are trying to figure out the gap between what the documentation says should work and what actually works, that is hard to estimate.


And never mind the technical issues. If you need to work with others, that's where estimating gets hard.


> Do your best to break tasks up so that all tasks are the same size

That is estimation, in the Scrum sense.


That act of breaking up and organizing tasks of the same size, that's what estimation is. This feature has these tasks; historically we complete these tasks in this time, so here's a hard minimum for a completion date (which implies a cost). Slap on an appropriate fudge factor for dealing with other teams, testing and burn-in, and general error bounds as needed.
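
As a rough worked example of that arithmetic (the task count, historical average, fudge factor, and day rate below are all invented):

    # Hypothetical history: this kind of task has averaged 0.75 working days.
    tasks = 20
    avg_days_per_task = 0.75

    hard_minimum_days = tasks * avg_days_per_task    # 15.0 days, best case
    fudge_factor = 1.5                               # other teams, testing, burn-in
    padded_days = hard_minimum_days * fudge_factor   # 22.5 days

    day_rate = 800                                   # a completion date implies a cost
    estimated_cost = padded_days * day_rate          # 18000

    print(padded_days, estimated_cost)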

You've described scrum; what you're doing is scrum.


It's not scrum. There are no sprints or commitments.


Don’t make estimates, make... forecasts?


This kind of word game in agile drives me crazy.

It’s like points aren’t hours, but if I estimate (er, forecast) how many hours it will take, it’s pretty easy to turn that into points, and vice versa.

I suppose it’s like one of those, “There is no word for it in English, but it essentially means...”


Points are a team-internal measure of a task. You can convert points to time after the fact, and once you've run a few sprints (and have a fairly stable average velocity) you can convert points to hours and do a forecast/estimation.

// In my mind you forecast the date of completion, but you estimate the amount of work. It's probably just mindless semantics; after all, saying you can estimate the date of completion sounds just as natural, while saying you can forecast the amount of work sounds a bit unnatural.
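
A small sketch of that points-to-hours conversion, assuming a few sprints of history and a fairly stable velocity (all numbers are invented):

    # Hypothetical sprint history: (points completed, working hours spent) per sprint.
    sprint_history = [(21, 240), (18, 230), (24, 250)]

    total_points = sum(points for points, _ in sprint_history)
    total_hours = sum(hours for _, hours in sprint_history)

    # With a fairly stable velocity, hours per point turns a points
    # estimate into an hours forecast.
    hours_per_point = total_hours / total_points

    backlog_points = 60
    forecast_hours = backlog_points * hours_per_point
    print(f"{forecast_hours:.0f} hours for {backlog_points} points")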


Sure, just throw some marketing style word play at it. Who knows, maybe this time it will stick?


That's making estimates.


This is the perfect recipe for never delivering anything. Without release dates, developers will continue building and gold plating and building and gold plating. Create your tasks, estimate your tasks, put a date out there, and hit the date. If the product has bugs or unfinished features, then so be it. Users understand flaws. They don't understand missed dates.


Not all developers are gold platers. That seems to be a more common behavior among inexperienced developers.


That's interesting, as in my experience it's exactly the other way around. It's the experienced developers who try to gold plate to avoid the issues they had in the last project or two projects ago, whereas the juniors ship code quickly, but it's unfortunately often incomplete and certainly lacking a reasonable amount of test coverage.


> It's the experienced developers that try to gold plate to avoid the issues that they had in the last project

Literally learning from the past and applying it to the present is “gold plating”?

I’m stunned.


I don't think the parent poster was saying that the learning itself was bad.

I think the gold plating they're referring to is a form of over-correcting. The learning itself is good, and correcting prior issues is good. But over-correcting and over-learning can be problematic and can lead to gold plating.

I don't think there's any easy indication of the line between the right amount of correction and over-correction, but I don't think it's unreasonable to state that one can over-correct based on prior experiences.


This is exactly what I was trying to say.


The second-system effect is definitely real and is a trap most developers will fall into. I think gold plating introduces liability, and it would be better to ship early. I am, however, generalizing from my own behavior and anecdotal observations to paint in broad strokes.


Estimates are always wrong only when they are given as single concrete numbers, as if they were certain.

If, instead, they are 90% ranges (I am 90% sure that this will be done between x and y), then it is much easier to estimate accurately. It is also easier to spot bullshit (if the spread doesn't grow as the completion date moves further from now, it is probably a bullshit estimate).


I really love the method from The Clean Coder, where you make best/median/worst estimates for tasks and then combine them into a mean and standard deviation. This helps capture the truth of “we don’t know how long it’s going to take, but it’s likely between x and y”.

Where this still falls apart, for me, is knowing how many hours per day I’ll be able to spend on each project. I have multiple clients, and things come up. This method has gotten me very good at the budget aspect of project estimation, but the scheduling aspect still slips some (it’s rare that more hours in a day become available).
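
For reference, the trivariate (PERT-style) formulas usually associated with that method look roughly like this; the per-task numbers below are made up:

    import math

    # Hypothetical per-task (best, median, worst) estimates in hours.
    tasks = [
        (2, 4, 12),
        (1, 2, 6),
        (4, 8, 20),
    ]

    def trivariate(best, median, worst):
        # PERT-style weighted mean and spread for a single task.
        mu = (best + 4 * median + worst) / 6
        sigma = (worst - best) / 6
        return mu, sigma

    mus, sigmas = zip(*(trivariate(*t) for t in tasks))

    project_mu = sum(mus)
    # Assuming the tasks are independent, their spreads combine as root-sum-of-squares.
    project_sigma = math.sqrt(sum(s * s for s in sigmas))

    print(f"likely between {project_mu - project_sigma:.1f} and "
          f"{project_mu + project_sigma:.1f} hours (mean {project_mu:.1f})")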



