
How To Work a Sigmoid

Software Development in Really Big Steps
  1. How To Work a Sigmoid
  2. How To Work a Sigmoid - Part Two

I've written before about my use of FogBugz, driven by its great time tracking and estimation features. Using them, I've noticed a pattern that I suspect is common, and that I think should be a goal for anyone estimating the time a project will take.

There are two estimates of a project. When you start, you can make some wild guess, pulled from the ether, of the weeks or months until you think it will be complete. This is the number that is notoriously and unequivocally wrong. This kind of prediction is simply an invitation to make a smart person look dumb, since so few of us realize that nobody was ever in a position to make that estimate. The larger the project, the more exponentially worse your chances of getting it right. None of this is new to any of us.

The second estimate is the running estimate, compiled from the tasks the project has been broken down into. The pro of this running estimate is that it is bound to be more accurate than the wild guess you started with, especially when computed with some of the fancy number crunching FogBugz does to account for how good different developers actually are at estimating their time. However, to every pro there is a con, and this one has a big one: the running estimate, although more accurate, is incomplete. You can only estimate the tasks you've broken the project into, and that set of tasks is in constant flux. As you develop, you break larger tasks into smaller ones, learn about new things you need to do, change requirements, find bugs in the work you've already done or in the dependencies you use, and continue to iron out the design. This is even more true if you use agile techniques, where you don't do a lot of design up front but design as you go. That isn't to say it's a bad thing, but it is a thing to be aware of.
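To make the idea concrete, here is a minimal sketch of a running estimate: sum the open task estimates, scaled by each developer's historical ratio of actual to estimated time. The task list, developer names, and ratios below are made up, and this is not FogBugz's actual algorithm, just the flavor of it.

# Simplified running estimate: sum each open task's estimate,
# scaled by its owner's historical actual/estimated ratio.
# Hypothetical data; the real Evidence Based Scheduling is more
# sophisticated about each developer's estimation history.

task_estimates = {              # remaining hours, by task
    "parse config": ("alice", 4.0),
    "write importer": ("bob", 16.0),
    "fix auth bug": ("alice", 2.0),
}

velocity = {                    # historical actual / estimated time
    "alice": 1.1,               # slightly underestimates
    "bob": 1.6,                 # chronically underestimates
}

running_estimate = sum(
    hours * velocity.get(owner, 1.0)
    for owner, hours in task_estimates.values()
)
print(f"Running estimate: {running_estimate:.1f} hours")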

The project starts at 0.0 and ends at 1.0. Your initial guess lands somewhere below or above 1.0, but for all practical purposes never on it (because you can't guess that well!). As you pile up tasks, the running estimate climbs toward 1.0 very quickly, until things level out and you complete more tasks than you create. The running estimate traces a sigmoid curve, winding up from nothing and leveling off at the best real estimate that can be given with the real data at hand. I grabbed the sigmoid image from somewhere and didn't add the flat line that represents your initial guess, partly because I didn't have the time and partly because that guess is completely useless.
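If you want to picture the curve without the image, a plain logistic function is the usual stand-in. The sketch below is purely illustrative; the final_hours, midpoint, and steepness parameters are my own made-up numbers, not anything FogBugz computes.

import math

def running_estimate(t, final_hours=400.0, midpoint=0.35, steepness=12.0):
    """Illustrative logistic model of how the running estimate grows.

    t is project progress from 0.0 to 1.0; the parameters are
    invented for the example.
    """
    return final_hours / (1.0 + math.exp(-steepness * (t - midpoint)))

for t in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"progress {t:.1f}: estimate ~{running_estimate(t):.0f} hours")

Early on the estimate is tiny because almost nothing has been broken into tasks yet; it then shoots up as tasks accumulate and flattens as the backlog stops growing.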

Great, so we work a sigmoid. So what?

The world is flooded with useless information and I don't want to contribute to it, so this is the part where I try to make my revelation somewhat useful, at least theoretically. A good estimation system, like Evidence Based Scheduling from Fog Creek, is really great. But what if we included estimation of the estimations? Yes, that sounds recursive. Suppose that, in addition to computing the weighted estimates and the running estimate of a release from all the information that can usefully be taken into account, we also track how the running estimate changes over time. If we graph those values, I suspect they would roughly follow a sigmoid curve. If that, or any other pattern, turns out to hold, we can estimate the estimations: the further along a project gets, the better we can project the rest of the curve and make moderately intelligent guesses about where the estimate will settle. Weighted for how different teams and individual developers estimate, the system can train itself for accuracy.
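As a rough sketch of what "estimating the estimations" could look like: record the running estimate at intervals, fit a logistic curve to the points so far, and read the curve's ceiling as a guess at where the estimate will level off. The history below is invented sample data, and the curve shape is my assumption rather than anything FogBugz provides.

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, ceiling, midpoint, steepness):
    """Logistic curve; the ceiling is where the running estimate should level off."""
    return ceiling / (1.0 + np.exp(-steepness * (t - midpoint)))

# Made-up history: weekly snapshots of the running estimate, in hours.
weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
estimates = np.array([40, 90, 180, 290, 350, 380, 395, 400], dtype=float)

params, _ = curve_fit(logistic, weeks, estimates,
                      p0=[estimates.max(), weeks.mean(), 1.0])
ceiling, midpoint, steepness = params
print(f"Projected final estimate: ~{ceiling:.0f} hours")

The interesting part is doing this continuously: each new snapshot refines the fit, so the projection of the final estimate should get better the further along the project is.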

I'm already too far into my current FogBugz-tracked project, but my next one will be set up to grab the estimate data periodically, and I'm itching to test out my theories. We can't predict exactly when a project is going to be complete, if it ever is, but we can damn sure do better than pulling numbers out of the air.
