COCOMO (COnstructive COst MOdel) was designed in 1981 by Barry Boehm to give an estimate of the number of man-months it will take to develop a software product; this original version is referred to as COCOMO 81.
A new model, COCOMO II, was designed in the mid-1990s. The need for it arose as software development technology moved from mainframe and overnight batch processing to desktop development, code reusability, and the use of off-the-shelf software components.
COCOMO uses statistical regression over historical project data to estimate project cost and duration within a given confidence range. The model sought to provide a tool for predictable cost estimation, and it continues to evolve today under the sponsorship of the University of Southern California. The model has genuine merit in applying statistical analysis to the problem of cost estimation. However, a defining factor in statistics is sample size: the underlying assumption of COCOMO (like Function Point Analysis) is that a statistically significant historical database exists to drive the calibration. This becomes a common theme across many attempts to create estimating models. Software engineering teams are typically very good at collecting lists of bugs, but notoriously bad at gathering statistically significant historical metrics useful for predicting future projects.
COCOMO consists of a hierarchy of 3 increasingly detailed and accurate forms:
- Basic COCOMO - a static, single-valued model that computes software development effort and cost as a function of program size, expressed in estimated lines of code (see the sketch after this list).
- Intermediate COCOMO - computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel, and project attributes.
- Detailed COCOMO - incorporates all characteristics of the intermediate version, with an assessment of the cost drivers' impact on each step of the software development life cycle (SDLC).
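To make the Basic model concrete, here is a minimal Python sketch. It uses the published COCOMO 81 Basic-mode coefficients (effort E = a * KLOC^b person-months, schedule D = c * E^d months) and adds an Intermediate-style adjustment via a single Effort Adjustment Factor (EAF). The function names, the sample project size, and the sample EAF value are illustrative assumptions, and note that Boehm's actual Intermediate model uses slightly different nominal coefficients than Basic.

```python
# Basic COCOMO (1981) coefficients (a, b, c, d) per project mode:
#   effort   E = a * (KLOC ** b)   in person-months
#   schedule D = c * (E ** d)      in months
MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic") -> tuple[float, float]:
    """Return (effort in person-months, schedule in months) for a given size in KLOC."""
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

def intermediate_effort(kloc: float, mode: str, eaf: float) -> float:
    """Intermediate COCOMO scales nominal effort by an Effort Adjustment Factor:
    the product of the 15 cost-driver multipliers. The single `eaf` argument
    stands in for that product here; Basic coefficients are reused for simplicity,
    though the real Intermediate model has its own nominal values."""
    a, b, _, _ = MODES[mode]
    return a * kloc ** b * eaf

if __name__ == "__main__":
    effort, months = basic_cocomo(32, "organic")  # hypothetical 32 KLOC project
    print(f"Basic:        {effort:.1f} person-months over {months:.1f} months")
    print(f"Intermediate: {intermediate_effort(32, 'organic', 1.17):.1f} "
          f"person-months (EAF = 1.17)")
```

Running the sketch on the hypothetical 32 KLOC organic project illustrates the model's shape: effort grows slightly faster than linearly with size (b > 1), while the cost drivers move the nominal estimate up or down without changing that size exponent.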