Budget and schedule are derived from estimates, so if an estimate is not accurate, the resulting schedules and budgets are likely to be inaccurate also. Given the importance of the estimation task, developers who want to improve their software estimation skills should understand and embrace some basic practices. First, trained, experienced, and skilled people should be assigned to size the software and prepare the estimates. Second, it is critically important that they be given the proper technology and tools. And third, the project manager must define and implement a mature, documented, and repeatable estimation process.
To prepare the baseline estimate there are various approaches that can be used, including guessing (which is not recommended), using existing productivity data exclusively, the bottom-up approach, expert judgment, and cost models.
SOFTWARE PRODUCTIVITY LAWS
These laws of software productivity help explain the dynamics of an engineering development project, and they illustrate some of the reasons that just using productivity to estimate is inadequate.
Law 1 – Smaller teams are more efficient. The smaller the team, the higher the productivity of each individual person.
Law 2 – Some schedule compression can be bought. Adding people to a project, to a point, decreases the time and increases the cost as larger teams work together.
Law 3 – Every project has a minimum time. At some point there is an incremental person who consumes more effort in communication and coordination than he or she contributes. Growing the team beyond this point decreases productivity and increases time. (Law 3 is also known as Brooks’ law.)
Law 4 – Productivity is scalable. Projects of larger software size can use larger teams without violating Law 3.
Law 5 – Complexity limits staffing. As complexity increases, the number of people that can effectively work on the project and the rate at which they can be added decreases.
Law 6 – Staffing can be optimized. There exists an optimal staffing function (shape) that is generally modeled by the Rayleigh function.
Law 7 – Projects that get behind, stay behind. It is extremely difficult to bring a project that is behind schedule back on plan.
Law 8 – Work expands to fill the available volume. It is possible to allow too much time to complete a project. (Law 8 is also known as Parkinson’s law.)
Law 9 – Better technology and stable processes yield higher productivity.
Law 10 – There are no silver bullets. There is no methodology, tool, or process improvement strategy available that yields revolutionary improvements in project efficiency.
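Law 6 refers to the Rayleigh function as the general shape of an optimal staffing profile. The sketch below shows the standard Rayleigh staffing curve, parameterized by total effort and the time of peak staffing; the 200 person-month project peaking at month 10 is a hypothetical example, not data from the text.

```python
import math

def rayleigh_staffing(t, total_effort, t_peak):
    """Staffing level at time t on a Rayleigh curve.

    total_effort: total effort (person-months) under the curve.
    t_peak: time at which staffing peaks.
    """
    a = 1.0 / (2.0 * t_peak ** 2)
    return 2.0 * total_effort * a * t * math.exp(-a * t ** 2)

# Hypothetical example: a 200 person-month project peaking at month 10.
curve = [rayleigh_staffing(t, 200, 10) for t in range(0, 31)]
peak_month = curve.index(max(curve))  # staffing peaks at month 10
```

Note how the curve rises gradually, reflecting Law 5: staff cannot all be added at once, because complexity limits the rate at which people can be brought on board effectively.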
In order to determine the effort that will be required to complete your project, you will need information that describes the personnel who are available – in terms of their qualifications and the optimal composition of the team – and you will need to develop an initial estimate of how long it will take them to fulfill the project requirements. Productivity, in its most basic sense, is a measure of software production expressed as the SLOC or function points one person can produce in an hour, a week, or a month.
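The basic productivity calculation can be stated in one line. The sketch below uses hypothetical figures (50,000 SLOC at 250 SLOC per person-month); as the software productivity laws above warn, dividing size by a productivity factor alone is not an adequate estimate, and this shows only the definitional arithmetic.

```python
def effort_person_months(size_sloc, productivity_sloc_pm):
    """Naive effort estimate: size divided by productivity.

    This ignores team dynamics, complexity, and schedule effects,
    which is why productivity alone should never drive an estimate.
    """
    return size_sloc / productivity_sloc_pm

# Hypothetical figures: 50,000 SLOC at 250 SLOC per person-month.
effort = effort_person_months(50_000, 250)  # 200 person-months
```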
Productivity can improve dramatically if you have a team that jells and continues to produce at a high level, and a jelled team brings other benefits as well. However, even a jelled team can descend into chaos: if you are forced to compress your schedule because you assumed a level of productivity that never materialized, excessive overtime and further attrition will follow, and morale will suffer. The point is that you cannot foresee early in a project how your team dynamics will play out as the project proceeds. Although management often flatters itself and its customers on its ability to form a smoothly functioning team, and assumes a correspondingly high level of productivity in the initial estimate, simple prudence dictates that you estimate using normal productivity factors based on historical performance.
Bottom-up estimating, which is also referred to as “grassroots” or “engineering” estimating, entails decomposing the software to its lowest levels by function or task and then summing the resulting data into work elements. This approach can be very effective for estimating the costs of smaller systems. It breaks down the required effort into traceable components that can be effectively sized, estimated, and tracked; the component estimates can then be rolled up to provide a traceable estimate that is comprised of individual components that are more easily managed. You thus end up with a detailed basis for your overall estimate.
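The decompose-and-roll-up mechanics of bottom-up estimating can be sketched directly. The components, tasks, and person-month figures below are hypothetical placeholders; the point is that each low-level estimate remains traceable while the totals roll up into the overall estimate.

```python
# Hypothetical component/task estimates (person-months) for a small system.
wbs = {
    "user interface": {"design": 2.0, "code": 4.0, "test": 2.5},
    "database layer": {"design": 1.5, "code": 3.0, "test": 2.0},
    "reporting":      {"design": 1.0, "code": 2.5, "test": 1.5},
}

# Roll the task estimates up into component totals, then a project total.
component_totals = {name: sum(tasks.values()) for name, tasks in wbs.items()}
project_total = sum(component_totals.values())  # 20.0 person-months
```

Because every figure in the total traces back to a named task, each component can be separately sized, estimated, and tracked as the project proceeds.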
SOFTWARE COST MODELS
Different cost models have different information requirements. However, any cost model will require the user to provide at least a few – and sometimes many – project attributes or parameters. This information describes the project, its characteristics, the team’s experience and training levels, and various other attributes the model requires to be effective, such as the processes, methods, and tools that will be used.
Parametric cost models provide a means for applying a consistent method for subjecting uncertain situations to rigorous mathematical and statistical analysis. Thus they are more comprehensive than other estimating techniques and help to reduce the amount of bias that goes into estimating software projects. They also provide a means for organizing the information that serves to describe the project, which facilitates the identification and analysis of risk.
Despite their proven benefits, they can have certain disadvantages. For example, they allow unscrupulous estimators to enter inaccurate information to justify an unachievable plan and can give a false sense of security when poor size ranges have been entered.
A cost model uses various algorithms to project the schedule and cost of a product from specific inputs. Those who attempt to merely estimate size and divide it by a productivity factor are sorely missing the mark. The people, the products, and the process are all key components of a successful software project. Cost models typically use a historical database calibrated to the organization to derive the estimates, or, if this information is unavailable, they use typical information that is derived from industry or vendor sources. Cost models range from simple, single formula models to complex models that involve hundreds or even thousands of calculations.
Numerous well-known models exist for estimating software cost and effort, including Boehm’s COCOMO suite of models, Putnam’s SLIM model, and Galorath’s family of SEER models. Generally speaking, these models estimate effort by making it a (predefined) function of one or more variables, e.g., size of product, complexity, or available staff.
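As an illustration of the simpler end of the cost-model spectrum, the sketch below implements the published Basic COCOMO equations for an organic-mode project (effort = 2.4 × KLOC^1.05 person-months, schedule = 2.5 × effort^0.38 months); the 32 KLOC input is a hypothetical example. Commercial models such as SLIM and SEER layer many more parameters and calculations on top of this kind of core relationship.

```python
def basic_cocomo_organic(kloc):
    """Basic COCOMO effort and schedule for an 'organic' project.

    Effort (person-months) = 2.4 * KLOC^1.05
    Schedule (months)      = 2.5 * Effort^0.38
    Coefficients are Boehm's published organic-mode values.
    """
    effort = 2.4 * kloc ** 1.05
    schedule = 2.5 * effort ** 0.38
    return effort, schedule

effort, schedule = basic_cocomo_organic(32)  # hypothetical 32 KLOC project
```

Note that effort grows slightly faster than linearly with size (the 1.05 exponent), while the schedule exponent of 0.38 reflects Law 3: effort cannot be freely traded for calendar time.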
Organizations that want to use more than one technique to arrive at a comparative estimate should develop and embed cost estimation processes. If your organization uses cost models as its primary method of estimating effort and duration, using two different models, a single model with built-in cross checks, or multiple sizing techniques can give better results than a single estimate.
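One simple way to operationalize such a cross check is to compare the outputs of independent estimates and flag the result for review when they diverge too far. The function and the 25% tolerance below are an illustrative assumption, not a prescribed threshold.

```python
def cross_check(estimates, tolerance=0.25):
    """Flag disagreement between independent effort estimates.

    estimates: dict mapping model or technique name to effort
    (person-months). Returns True when the spread between the highest
    and lowest estimates exceeds `tolerance` of the mean, signaling
    that the inputs and assumptions should be revisited.
    """
    values = list(estimates.values())
    mean = sum(values) / len(values)
    spread = (max(values) - min(values)) / mean
    return spread > tolerance

# Hypothetical outputs from two models: wide disagreement triggers review.
needs_review = cross_check({"model_a": 90.0, "model_b": 140.0})
```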
ORGANIZING THE ESTIMATING PROCESS
While a rigorous, repeatable estimation process will most likely result in an accurate range projection of the size and cost of an application, estimator inexperience or bias and varying experience levels among estimators can undermine the potential for a valid and accurate estimate. To counter these effects, you must use a documented, standardized estimation process and apply standardized templates to collect and itemize tasks. Doing so helps ensure that the information you gather is complete and that the subsequent analysis follows a proven process. It also helps you document, for historical purposes, the processes and assumptions used to develop the estimate and record the results of each estimation activity.
DELPHI AND WIDEBAND DELPHI
Following a rigorous process is essential to arriving at a useful estimate that is relatively free of the bias that results from estimators who have predetermined opinions or agendas, who are inexperienced, or who have divergent objectives or hidden agendas. You can further offset the effects of these biases by implementing the Delphi estimation method, in which several expert teams or individuals, each with an equal voice and an understanding up front that there are no correct answers, start with the same description of the task at hand and generate estimates anonymously, repeating the process until consensus is reached.
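The convergence check at the heart of a Delphi round can be sketched as follows. The 10% tolerance and the sample estimates are hypothetical assumptions for illustration; in practice the moderator decides when the anonymous estimates are close enough to declare consensus.

```python
def delphi_consensus(estimates, tolerance=0.10):
    """Check one round of anonymous Delphi estimates for consensus.

    estimates: the round's estimates (e.g., person-months).
    Consensus is declared when every estimate falls within `tolerance`
    of the round's median; otherwise another round is run.
    """
    ordered = sorted(estimates)
    median = ordered[len(ordered) // 2]
    return all(abs(e - median) / median <= tolerance for e in estimates)

round_1 = delphi_consensus([80, 120, 95, 150])   # wide spread: no consensus
round_3 = delphi_consensus([100, 105, 98, 102])  # within 10% of the median
```

Anonymity between rounds is what keeps a dominant voice from anchoring the group, which is why the estimates are collected and compared without attribution.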
Another way to estimate the various elements of a software project is to begin with the requirements of the project and the size of the application, and then, based on this information, define the required tasks, which will serve to identify the overall effort that will be required.
The major cost drivers on a typical project are the non-coding tasks, which must be adequately considered, planned for, and included in any estimate of required effort. Of course, not every project will require all of these tasks; you should tailor the list to the specific requirements of your project, adding and deleting tasks as necessary and modifying task descriptions if required, and then build a task hierarchy – usually in the form of a work breakdown structure (WBS) – that represents how the work will be organized and performed. The resulting WBS is the backbone of the project plan and provides a means to identify the tasks to be implemented on a specific project. It is not a to-do list of every possible activity required for the project; rather, it is a structure of tasks that, when completed, will result in the satisfaction of all project commitments.