I am convinced that the amount of routine work in software development is – and should be – relatively small, if not negligible, and that this is the fundamental problem of software estimation.
Let me describe how I came to this conclusion; tell me if the argument has any serious flaws:
All that can be estimated with high accuracy is routine work, meaning things that have been done before. All other kinds of work involving research and creativity cannot really be estimated, at least not with an accuracy of, let’s say, +/- 20 percent.
Software development is all about avoiding repetitive tasks. One of its basic principles is DRY (don’t repeat yourself). Whenever a programmer finds himself doing repetitive work, it’s time to find an abstraction that avoids the repetition. These abstractions can be simple things, like extracting the repeated code into a function or putting it in a loop, or more complex, like creating a domain-specific language. In either case, implementing them will involve research (has anyone done this before?) or creativity.
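The simplest level of this – extracting repeated code into a function and a loop – can be sketched in a few lines (a hypothetical example; the field names and formatting are mine):

```python
# Repetitive version: the same formatting logic written out three times.
line1 = "name:    " + "Ada".ljust(20)
line2 = "role:    " + "Engineer".ljust(20)
line3 = "country: " + "UK".ljust(20)

# DRY version: the repetition is extracted into a function and driven by a loop.
def format_field(label: str, value: str) -> str:
    """Format one label/value pair in a fixed-width layout."""
    return f"{label + ':':<9}{value:<20}"

fields = [("name", "Ada"), ("role", "Engineer"), ("country", "UK")]
lines = [format_field(label, value) for label, value in fields]

# Both versions produce identical output.
assert lines == [line1, line2, line3]
```

The point of the argument is that even this trivial refactoring required a small creative decision (choosing the function boundary and the width parameters), and that decision, not the typing, is where the time goes.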
From these two points I draw the above conclusion.
Actually I have been wondering for quite a while why this relationship is not mentioned in every other discussion, blog post or article about software estimation. Is it too theoretical? Are my assumptions wrong? Or is it too trivial – but then, why do most developers I know believe that they can do estimates with an accuracy of +/- 20 percent or better?
On any single given project this may be true. However, if you work on multiple similar projects for different companies over the years, you may find yourself ‘solving’ basically the same problem many times with only slight variations.
For example, I’ve written data access layers so many times that I now prefer to do it ‘long hand’ rather than use the popular ORM of the month. It’s quicker and easier for me to deal with ‘routine problems’ that have known solutions than to find and solve new quirks in third-party components.
Obviously I could write my own ORM to simplify the repetitive code without adding the unknown quirks of someone else’s system, but that code would belong to the company I happened to be working for at the time, and other developers would find it just as quirky as any other third-party ORM.
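A ‘long hand’ data access layer in this spirit might be nothing more than a thin module over the database driver. Here is a hypothetical sketch using Python’s built-in sqlite3 module – the table and function names are my own invention, not from the comment above:

```python
import sqlite3

def get_connection(path: str = ":memory:") -> sqlite3.Connection:
    """Open a connection whose rows can be read like dicts."""
    conn = sqlite3.connect(path)
    conn.row_factory = sqlite3.Row
    return conn

def create_schema(conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS widgets ("
        "id INTEGER PRIMARY KEY, name TEXT NOT NULL, price REAL)"
    )

def insert_widget(conn: sqlite3.Connection, name: str, price: float) -> int:
    cur = conn.execute(
        "INSERT INTO widgets (name, price) VALUES (?, ?)", (name, price)
    )
    conn.commit()
    return cur.lastrowid

def find_widget(conn: sqlite3.Connection, widget_id: int):
    row = conn.execute(
        "SELECT id, name, price FROM widgets WHERE id = ?", (widget_id,)
    ).fetchone()
    return dict(row) if row else None
```

The repetition across such functions – one insert and one select per table, all with the same parameter-binding shape – is exactly the kind of pattern an ORM abstracts away. Writing it out by hand trades that abstraction for SQL you can read and quirks you already know.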
Similarly, in my experience, most programming is the automation of business processes, and although each business likes to think its processes are unique, in reality they are not.
That’s not to say estimation is easy! It is easier, but I find that these days the estimation problem comes from inadequate requirements rather than from the time spent coding.
Requirements tend to fall into three categories:
- Vague, details left to developer.
“Make me a website, it has to be cool and sell my widgets”
These tend to be the easiest to estimate, because when an unexpectedly hard problem occurs you can simply change the requirements to something functionally equivalent and avoid the problem.
- Very Specific
“Make the header background colour #ff1100”
Super quick to do and, again, easy to estimate. But the requirement is bound to change: “Hmm, no, on second thought, try this other red” or “Wait! I meant only on that one page!” So the real time span of “how long until I am happy with the header colour” has nothing to do with the coding estimate.
- Vague, details assumed
“Make me a website, (just like facebook)”
Here the multitude of unstated assumptions – “of course you will want a different logo”, “it should have infinite scrolling”, “must be scalable to 1 billion users!” – effectively controls the estimate. Either the dev thinks of everything and pushes the estimate up beyond expectations (“1 meeelion man hours”), or they assume only the base features are required and give an estimate that is far too low (“Oh, a week or two – I assume you just want to put facebook in an iframe, right?”).
With experience, coding is very fast, but designing requirements is (usually) the hard bit, and this is more and more pushed back onto non-coders, with agile methodologies increasing coding velocity by moving this responsibility to ‘the business’ rather than to developers.
“why do most developers I know believe that they can do estimates with an accuracy of +/- 20 percent or better?”
Because we estimate our patience with the problem far more than the actual problem.
If I’m going to animate a ball bouncing I could spend a day, a week, a month, or a year on it and still just have an animation of a bouncing ball. Hopefully it will look better the more time I spend on it but at a certain point I’m being ridiculous.
How much effort I put into making the ball bounce is a function of the time it is reasonable to spend on it. If my skill level isn’t cutting it, I may end up with a ball that just sits there. But when the deadline comes, should I let it slip, or at least get a ball on the screen? Waterfall insisted the ball bounce, so the schedule slipped. Agile says just get the ball out there – at least we’ll find out how much people care about bouncing. So the quality slipped.
I try to be sure my balls bounce, but when the deadline comes around it’s better to produce a static ball than nothing at all. So I estimate based on what seems a reasonable amount of time to spend on a problem before talking about alternatives. Sometimes the ball just isn’t going to bounce. Sometimes that’s OK. Disappearing for a month is not OK.