When you can measure what you are speaking about, and can express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind: It may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the stage of science. -Lord Kelvin
Software metrics are used to measure attributes of a software application or project. The purpose of measuring is self-evident: one measures to improve. In a software development project, one measures attributes that help improve quality, speed up execution, and reduce costs; in other words, metrics should help make software development faster, better, and cheaper. This series of posts discusses some of the software metrics that are particularly important from a testing perspective, along with the benefits and challenges associated with gathering, analyzing, and using them.
What is to be measured?
“The risk with any metric is that people will come to see it as a description of reality, rather than a tool for a conversation about reality… one metric or another can function well only when managers know why they are measuring and for whom… In the world of social value-creation, context is king.” (The Economist survey of wealth and philanthropy, February 25th, 2006 issue)
While talk of software metrics abounds, and a number of companies do have a metrics plan, few implement it, or implement it systematically. One reason could be the significant cost associated with such measurement: one estimate puts it at 4-8% of the total development budget1. Another is that while metrics are useful primarily from a management perspective, the source of the data is the technical people on the project. More often than not, the latter are so deep in the thick of meeting deadlines that data is either collected insufficiently or too late. Metrics, particularly those related to defects, are certainly collected when the application or product is unstable or causes too many issues for end users. However, this is usually part of a damage-control effort rather than a systematic practice of gathering data. At the other end of the spectrum, data gathered as a matter of routine may not relate to, or may even obscure, the intended objective. It is therefore important that, while data gathering and metrics analysis are done as part of the routine, care is taken to gather only a few, but key, metrics.
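To make the idea of "a few, but key, metrics" concrete, here is a minimal sketch (not from the post) of computing defect density, one commonly tracked testing metric. The function name and the sample figures are illustrative assumptions, not data from any real project.

```python
# Illustrative sketch: defect density, a commonly used testing metric.
# All names and numbers here are hypothetical examples.

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Return defects per thousand lines of code (KLOC)."""
    if size_kloc <= 0:
        raise ValueError("size_kloc must be positive")
    return defects_found / size_kloc

# Hypothetical release data: 42 defects found in a 12.5 KLOC module.
print(round(defect_density(42, 12.5), 2))  # 3.36 defects per KLOC
```

A metric like this is cheap to compute from data the team likely already tracks (defect counts, code size), which is exactly the property a "key" metric should have: low collection cost relative to the insight it provides.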
1. N.E. Fenton and S.L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, 2nd edition, Boston: PWS Publishing, p. 28