© Copyright 1999 Cutter Information Corp. All rights reserved. This article originally appeared in the April 1999 issue of Cutter IT Journal, and is reproduced here with permission.
BACK TO THE BASICS: METRICS THAT WORK FOR SOFTWARE PROJECTS
by Dwayne Phillips
Metrics are powerful tools, but they are not popular with people caught in tough projects. Nevertheless, there are basic metrics available [1, 4] that can help in current and future projects. But just as important as the content of the metrics is collecting and using data in a way that pleases everyone on the project.
PAT, MIKE, AND METRICS
Pat, a maturing project manager, is beginning a new project (12 months, 10 people). There are risks, unknowns, and pressures with this and all projects. Pat wants to succeed on this project and gain knowledge that will help her and her colleagues on future projects.
Mike is Pat's manager. He oversees Pat's project and six others. Mike is implementing a metrics program for all the projects. He wants Pat to collect and use data on her project. Pat and her team, however, have never used metrics.
Pat is not eager to have her project fall under "this new metrics program." She's read about the benefits of metrics, but what is really important is delivering a product -- not filling out forms. Besides, her team members have seen these management fads come and go. If she asks them to do useless metrics work, they'll think everything she asks of them is useless.
Mike tries to understand Pat's reluctance. In his days as a project manager, he had management programs forced on him. Some of those programs helped, while many did not.
WHAT TO MEASURE: THE BASICS
Mike's first challenge is to explain to Pat what he wants to measure. Metrics programs choke when they try to measure too much. When burdened with too much data collection, people fake the numbers to save time.
Mike is interested in four items:
1. The size of the product
2. The effort required to make the product
3. The quality of the product
4. The schedule
The product differs in different phases of the project. While gathering requirements, the product is the requirements documentation (this could be a document, a list on a white-board, a video, etc.). The product in the design phase is the design documentation. Source code is the product of the coding phase. Test reports are the product of the test phase. This description follows the waterfall model, but the basic metrics apply to all process models.
The size measure varies as the product varies. The size of the requirements is the count of individual requirements. The size measure in design is a count of design elements. The number of lines of code, subroutines, or class elements is the size of the completed source code.
The effort associated with a product is measured in person-hours. The project's plan estimates the person-hours needed to create a unit of product (i.e., perform a task). The team reports the actual person-hours to perform tasks.
The quality of the product is measured in errors found and fixed. People make and find errors in all phases of a project, not just in testing. Pat's team members will count the errors they find in the peer reviews held throughout the project.
Schedule tracks when tasks are planned and performed. The project's plan states when tasks should start and end. Pat's team will report when the tasks actually start and end.
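The four basic measures can be pictured as one record per task, holding an estimated and an actual value for each. The sketch below is only an illustration -- the field names, the lines-of-code size measure, and the sample values are invented, not prescribed by the metrics program:

```python
# A hypothetical task record covering the four basic metrics:
# size, effort, quality, and schedule. All names and numbers are invented.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TaskRecord:
    task: str
    est_size: int                        # estimated size (here, lines of code)
    est_hours: float                     # estimated person-hours
    planned_start: date
    planned_end: date
    actual_size: Optional[int] = None    # filled in when the task finishes
    actual_hours: Optional[float] = None
    errors_found: int = 0                # counted in peer reviews
    errors_fixed: int = 0
    actual_start: Optional[date] = None
    actual_end: Optional[date] = None

# When a task finishes, the team member fills in the actuals.
record = TaskRecord("parser module", est_size=100, est_hours=40,
                    planned_start=date(1999, 4, 5), planned_end=date(1999, 4, 9))
record.actual_size, record.actual_hours = 80, 46
record.errors_found, record.errors_fixed = 3, 2
```

The estimated fields come straight from the phase plan; the actual fields are what the team reports.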
Mike emphasizes that these basic metrics work only if the project has a plan. A general plan for the entire project must exist when the project starts. A phase of a project cannot start without a detailed, day-by-day, task-by-task plan for that phase. The phases can be the traditional requirements, design, code, and test phases of the waterfall, the individual evolutions of an evolutionary project, etc.
Pat concedes that she and her team can handle these four metrics. She thought she would be in for much more. Her team can swallow these and not waste too much time.
WHY BASIC METRICS HELP
Mike's challenges are not over. Through their conversations on what to measure, Mike sees that Pat accepts the basic metrics. She does not, however, welcome them. She does not see why to use them or why she will benefit from them. Mike must show Pat what the metrics provide: motivation, help in managing the current project, and help in estimating future projects.
Metrics are a powerful, positive motivation tool. (They can also be a negative tool -- more on that later.) As the saying goes, and experience proves, "what gets measured gets done." Metrics motivate people to succeed when you "measure what you want to get done."
Mike's metrics aim to measure progress against the project's plan. People want the metrics to show they are doing well. Therefore, they execute the plan well. The plan must lead to a good product quickly and efficiently. If the plan doesn't lead to this, neither metrics nor anything else will matter.
The metrics Mike is proposing will help Pat manage her project. Pat will see week by week if she is (1) on schedule, (2) on budget, and (3) reaching acceptable levels of quality. Projects don't proceed exactly as planned. Some things are harder than anticipated, while others are easier. The metrics will help Pat see where and when to shift resources.
The metrics will also help Pat, Mike, and the other project managers estimate future projects. Records of projects show the resources consumed to produce products. They indicate what the next project requires. Future projects won't be exactly like this one, but Pat will have some factual basis for making and defending her estimates.
Mike must ensure that Pat does not misuse metrics. Finding and punishing sub-par performers is not one of the three reasons for using metrics. Pat must publicly post the three reasons why she is using metrics (more on public posting later). Everyone must know that the purpose of the metrics program is to help.
Pat cannot attribute metrics to individuals. Metrics apply only to tasks and products. No names and no blame.
Mike's metrics measure how well the team is performing according to the plan. If the team is behind schedule or over cost, the plan was wrong. This is crucial: the plan belongs to the team because the team created it. The team does not measure its performance against a plan forced upon it by outsiders.
People fall behind because the product was harder than anticipated. The team made a mistake when it made the plan. Everyone makes mistakes, so the team needs to work through it and go on.
Mike is not naive about this. He knows that some people underestimate (lie) to obtain work. He also knows that some people fall behind because they waste time (steal company time). Mike's organization already has policies for dealing with employees who lie and steal.
Mike is working toward an organization where people make honest plans and strive to meet them. That is why Mike empowers people to create their own plans versus dictating plans, and why he uses a metrics program that helps people versus blaming people.
HOW TO MEASURE: PUBLICLY
Mike has one more step to go with Pat. Pat now accepts the limited, basic metrics and understands why they can help her. The theory sounds fine, but how will she measure the four items day to day? Will she gather the data herself every day? That could add 20 hours to her week. Will she burden her people with the task? Can Mike lend her a metrics person to do this?
Mike advocates a simple, public data collection method. Figure 1 shows an example of Mike's data collection sheet. This is from a spreadsheet, and it shows the data collected during the coding phase of a project. The data collection sheet allows for the entry of measures on size, effort, quality, and schedule. Notice that there are no names of individuals on the data collection sheet. Pat will tape this to the wall near the refrigerator her team uses.
Figure 1: The data collection sheet
The numbers from the detailed plan for this phase of the project are typed into the "estimated" columns of the sheet. Members of Pat's team will pencil in the "actual" blanks when they finish tasks. Once a week, Pat will transfer that data to her spreadsheet or project management software.
That is all there is to data collection. When team members finish a task, they spend five minutes scribbling in their data. There are no special forms, online databases, accounting systems, or the like.
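The weekly transfer is just a roll-up of the wall sheet. As a sketch, the penciled-in rows become rows in a table, blanks stay blank until a task finishes, and the totals fall out in a few lines. The task names and numbers below are invented:

```python
# An illustrative weekly roll-up of the wall sheet. Each row is
# (task, estimated size, actual size, estimated hours, actual hours);
# None marks a blank that has not been penciled in yet.
sheet = [
    ("module A", 100,  80, 40, 46),
    ("module B", 150, 170, 60, 55),
    ("module C", 120, None, 50, None),  # not finished yet; blanks stay blank
]

done = [row for row in sheet if row[2] is not None]
total_est = sum(row[1] for row in done)
total_act = sum(row[2] for row in done)
print(f"completed tasks: {len(done)}, size so far: {total_act} vs {total_est} estimated")
# -> completed tasks: 2, size so far: 250 vs 250 estimated
```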
Pat will use the data collected to create several charts. Figures 2 through 8 show examples of these charts.1 They show how the team is performing per the plan.
Figure 2: Tracking software size via estimated and actual size
Figure 3: Comparing actual to estimated product size
Figures 2 and 3 show how the team is performing per size of the product. Figure 2 plots the actual size against the estimated size for each product. Each point in the graph represents a unit of product. Points above the 45-degree line mean that the product is larger than expected.
The points in Figure 3 show the result of dividing the actual size by the estimated size. Points above the 1.0 horizontal line mean the product is larger than expected.
The strength of Figure 2 is that it allows Pat to draw a line through the points on the graph. That line helps Pat adjust the estimates of future products. If the slope of Pat's line is less than 45 degrees, products will probably be smaller than estimated. This is predicting future size based on linear regression. Watts Humphrey gives a full explanation of this powerful technique in [1].
The weakness of Figure 2 is that many points can lie on top of one another. For example, there may be many products that are estimated to be 100 lines of code but are only 80 lines of code. (By the way, if you don't like the lines of code measure, use what works for you.) Figure 2 may have only 10 visible points, while 20 products have been completed. That is why Figure 3 is needed. One point for every product completed appears on the graph.
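The regression adjustment can be sketched in a few lines: fit a least-squares line through the origin of Figure 2's points, then scale a new estimate by that slope. The data points here are invented, and this simple through-the-origin fit is only one form of the technique Humphrey describes:

```python
# Sketch of the Figure 2 regression adjustment with invented data.
# Each pair is (estimated size, actual size) for one completed product.
pairs = [(100, 80), (150, 130), (200, 160), (120, 100)]

# Least-squares line through the origin: slope = sum(x*y) / sum(x*x).
slope = sum(e * a for e, a in pairs) / sum(e * e for e, a in pairs)

# A slope below 1.0 means products are coming in smaller than estimated,
# so future estimates can be scaled down accordingly.
next_estimate = 180
predicted = slope * next_estimate
print(f"slope {slope:.2f}: a task estimated at {next_estimate} will likely be ~{predicted:.0f}")
```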
Figures 4 and 5 show information about the effort spent on each task. These use the estimated and actual person-hours for each task, just as Figures 2 and 3 used the estimated and actual size for a product.
Figure 4: Tracking effort via estimated and actual person-hours
Figure 5: Comparing actual to estimated effort
Figure 6 is another view of effort. It shows the estimated and actual number of people working on the project each week. If the project is understaffed, it is normal to be behind schedule. If the project is overstaffed and behind or right on schedule, Pat has problems with her budget.
Figure 6: Tracking effort via planned and actual people on the project
Figure 7 indicates the quality of the product. It shows the errors found and corrected in the product of the current phase. Projects commonly have fewer errors corrected than found. Pat must watch the separation between the two curves. As the line of errors corrected approaches the line of errors found, the team is moving toward acceptable quality. If the two lines are not growing closer, the schedule will stretch.
Figure 7: Tracking quality via found and corrected errors
Figure 8 shows how the team is performing per the schedule. If the tasks completed line is above the tasks scheduled line, the team is ahead of schedule.
Figure 8: Tracking schedule via completion of tasks
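The Figure 8 comparison is just as mechanical: line up cumulative tasks scheduled against cumulative tasks completed, week by week. The numbers below are invented:

```python
# Sketch of the Figure 8 schedule check with invented numbers.
scheduled = [3, 6, 10, 14]   # tasks planned to be finished by end of each week
completed = [3, 7, 11, 13]   # tasks actually finished

statuses = ["ahead" if c > s else "on schedule" if c == s else "behind"
            for s, c in zip(scheduled, completed)]
for week, status in enumerate(statuses, start=1):
    print(f"week {week}: {status}")
```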
Pat tapes these charts on the wall next to the data collection sheet. Everyone on her team can see every day how the team is doing. They all know the situation, and none of them had to do much extra work to gain this knowledge.
MAKING METRICS MEANINGFUL
Metrics are powerful but not popular tools. To use metrics effectively:
1. Measure only four basic items (i.e., size, effort, quality, and schedule)
2. Measure the team's performance against the team's plan
3. Keep data collection simple
4. Keep everything public
5. Measure what you want to get done
Metrics can help as long as you don't choke the project with them. Keep focused on the end result and use metrics to help guide the project.
1. Humphrey, Watts S. A Discipline for Software Engineering. Addison-Wesley, 1995.
2. Phillips, Dwayne. The Software Project Manager's Handbook: Principles that Work at Work. IEEE Computer Society, 1998.
3. Phillips, Dwayne. "How People Drive the Outsourcing Process (Sometimes Off the Road)." Cutter IT Journal, Vol. 11, No. 7 (July 1998), pp. 37-42.
4. Putnam, Lawrence H., and Ware Myers. Measures for Excellence: Reliable Software On Time, Within Budget. Prentice Hall, Yourdon Press, 1992.
1Several of these charts were shown in .
Dwayne Phillips has worked as a software and systems engineer with the US government since 1980. He helps people manage software projects and has found that simple metrics do help project managers know where they are and where to shift resources in projects. He has written The Software Project Manager's Handbook: Principles that Work at Work (IEEE Computer Society, 1998). He has a Ph.D. in electrical and computer engineering from Louisiana State University.
Dr. Phillips can be reached at 2315 Ballycairne Court, Reston, VA 20191-1633, USA. Tel: +1 703 476 1951; E-mail: d.phillips@computer.org.
Cutter IT Journal is published 12 times a year by Cutter Information Corp., 37 Broadway, Suite 1, Arlington, MA 02474-5552. Tel: +1 781 641 5118 or +1 800 964 5118. Fax: +1 781 648 1950 or +1 800 888 1816. E-mail: firstname.lastname@example.org. Please contact Megan Nields for more information or for a free trial subscription.