Program Management Metrics and Reporting


Program Registration

Reference Source: DODI 5000.87 Sections 3.2.b and 3.3.b


Programs using the software acquisition pathway will be identified in component and DoD program lists and databases within 60 calendar days of initiating the planning phase in accordance with DoD’s implementation of Section 913 of Public Law 115-91 on acquisition data analysis.

Reporting Guidance

Reference Source: USD(A&S) Guidance

Upon approval to use the Software Pathway, programs are requested to notify the USD(A&S) Acquisition Enablers office by completing the registration form with basic program information, which provides OSD and Congress with insight into pathway usage, and emailing it to osd.mc-alex.ousd-a-s.mbx.osd-sw-pathway@mail.mil. This will be an interim measure until formal reporting systems are updated to account for the Software Pathway.

Insight Reporting

Reference Source: DODI 5000.87 Section 1.2.l


Programs using the software acquisition pathway will report a set of data to the Office of the USD(A&S) on a semi-annual basis as defined in the AAF Software Acquisition Pathway Guidance located at https://aaf.dau.edu/aaf/software/. Data reported under this pathway will be used to monitor the effectiveness of the pathway and will not be used for program oversight.


Reference Source: DODI 5000.87 Section 2.7


DAs are responsible for providing required program data to the USD(A&S) to support management and continuous improvement of the software acquisition pathway.

Reporting Guidance

Reference Source: USD(A&S) Guidance

Upon entering the SWP, the decision authority will semi-annually provide program reporting data to the OUSD(A&S) to provide insight into the operation of the pathway and to support decisions regarding future pathway improvements. The insight metrics are not intended for program oversight, but to inform DoD leaders and congressional members on the effectiveness of the software pathway. The semi-annual reporting will revisit the registration data with additional programmatic information (budgets, contract types, IP strategy, and development approach) and 11 metrics that represent pathway performance:

  • Avg Lead Time for Authority to Operate (days)
  • Continuous Authority to Operate In-Place
  • Mean Time to Resolve Experienced Cyber Event
  • Mean Time to Experience Cyber Event
  • Avg Deployment Frequency
  • Avg Lead Time
  • Minimum/Maximum Lead Time
  • Avg Cycle Time
  • Change Fail Rate
  • Mean Time to Restore
  • Value Assessment Rating

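Several of these pathway metrics reduce to simple time calculations over deployment and incident records. The sketch below is illustrative only; the record structures and field names (Deployment, Incident, committed, deployed, detected, restored) are assumptions chosen for the example, not a prescribed reporting format or tool.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class Deployment:
    committed: datetime   # when the change was accepted for development
    deployed: datetime    # when the change reached production

@dataclass
class Incident:
    detected: datetime    # when the outage or degradation was detected
    restored: datetime    # when service was restored

def lead_times_days(deployments):
    """Lead time per change, in days, from acceptance to production."""
    return [(d.deployed - d.committed).total_seconds() / 86400 for d in deployments]

def avg_deployment_frequency_days(deployments):
    """Average days elapsed between successive production deployments."""
    times = sorted(d.deployed for d in deployments)
    gaps = [(later - earlier).total_seconds() / 86400
            for earlier, later in zip(times, times[1:])]
    return mean(gaps)

def mean_time_to_restore_hours(incidents):
    """Mean hours from detection of an event to restoration of service."""
    return mean((i.restored - i.detected).total_seconds() / 3600 for i in incidents)

# Invented example data.
deployments = [
    Deployment(datetime(2021, 3, 1), datetime(2021, 3, 8)),
    Deployment(datetime(2021, 3, 5), datetime(2021, 3, 19)),
    Deployment(datetime(2021, 3, 20), datetime(2021, 3, 24)),
]
incidents = [Incident(datetime(2021, 3, 10, 9, 0), datetime(2021, 3, 10, 15, 30))]

lead = lead_times_days(deployments)
print(f"Avg lead time: {mean(lead):.1f} days (min {min(lead):.0f}, max {max(lead):.0f})")
print(f"Avg deployment frequency: every {avg_deployment_frequency_days(deployments):.1f} days")
print(f"Mean time to restore: {mean_time_to_restore_hours(incidents):.1f} hours")
```
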
Programs using the Software Acquisition Pathway will complete the Semi-Annual Reporting form and email it to the SWP mailbox, osd.mc-alex.ousd-a-s.mbx.osd-sw-pathway@mail.mil, by the first Friday in April and October (2 Apr 2021 and 1 Oct 2021). This will be an interim measure until formal reporting systems are updated to account for the Software Pathway.

Program Management Metrics

Reference Source: DODI 5000.87 Section 3.3.b.(11)


Each program will develop and track a set of metrics to assess and manage the performance, progress, speed, cybersecurity, and quality of the software development, its development teams, and ability to meet users’ needs.  Metrics collection will leverage automated tools to the maximum extent practicable. The program will continue to update its cost estimates and cost and software data reporting from the planning phase throughout the execution phase.

Program Management Metrics Guidance

Reference Source: USD(A&S) Guidance

A Metrics Plan identifies metrics to be collected in order to manage the software program. The purpose of metrics is to provide data to PMs and other stakeholders to inform decisions and provide insight into the development effort. Every metric produced on a program should target a specific stakeholder or set of stakeholders, have a defined purpose, and support decision making at some level. Programs should establish and maintain metrics to measure progress in the following areas:

  • Process Efficiency Metrics: These metrics identify where inefficiencies may exist in the software development process. Maintaining process efficiency metrics supports decisions related to how/when/where to change the process, if needed, and enables continual process improvement.
  • Software Quality Metrics: These metrics identify where in the overall system software quality may be degraded and support identification of specific software components or software teams that contribute to degraded quality. Maintaining software quality metrics supports decisions related to software architecture, software team performance, etc.
  • Software Development Progress Metrics: These metrics illustrate the capability developed to date as compared to the overall capability planned, and the speed at which capability is delivered. Maintaining progress metrics allows internal and external stakeholders to maintain visibility into the capability planned vs capability delivered and supports senior leader resourcing decisions or resourcing justification. Progress metrics also support cost estimation and decisions related to number and size of teams.
  • DevSecOps Metrics: These metrics identify where inefficiencies may exist in the DevSecOps pipeline. Maintaining these metrics supports identifying tool or configuration changes that may be necessary to improve the performance of the pipeline.
  • Cost Metrics: Cost metrics provide insight into the program budget and expenditure rate. Maintaining cost metrics supports resource decisions like number of teams required, or technical decisions like how much capability to plan for a given time span.
  • Value Metrics: Value metrics identify the level of significance for each capability and feature from the users’ perspective. Capabilities and features should all have a priority and a value assignment designated by the user to support prioritization and to provide a cursory view of the value (or significance) of the capability developed to date. This metric can be used in concert with more comprehensive value assessments that must be conducted periodically.

The minimum set of metrics to be collected should include at least one metric from each of the above categories. The program should establish a specific minimum set of metrics to provide insight into the status of the project and support technical and programmatic decisions. The program should be able to expand on the minimum set as needed so that the metrics remain appropriate for the size of the project, while also considering the level of effort and cost associated with collecting each metric. The program should automate collection of metrics as much as possible. For those metrics that cannot be automated initially, the program should develop a plan for moving toward automation. Programs should consider migrating from a quarterly software metrics push to providing access to their set of software metrics via an automated, read-only self-service metrics portal for OUSD(A&S), OUSD(R&E), and other approved stakeholders. The following subsections list example metrics for each category.
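
Before turning to those example metrics, the sketch below illustrates one way a program might check that its selected metric set covers every category and flag metrics that still need an automation plan. The category names, metric selections, and data structure are assumptions chosen for illustration, not a mandated schema.

```python
# Hypothetical selected metrics: (metric name, category, collection is automated?)
SELECTED_METRICS = [
    ("Velocity", "Process Efficiency", True),
    ("Change Fail Rate", "Software Quality", True),
    ("Progress against Roadmap", "Software Development Progress", False),
    ("Deployment Frequency", "DevSecOps", True),
    ("Burn Rate", "Cost", False),
    ("Delivered Value Points", "Value", True),
]

# The six categories described above; at least one metric should come from each.
REQUIRED_CATEGORIES = {
    "Process Efficiency", "Software Quality", "Software Development Progress",
    "DevSecOps", "Cost", "Value",
}

covered = {category for _, category, _ in SELECTED_METRICS}
missing = REQUIRED_CATEGORIES - covered
manual = [name for name, _, automated in SELECTED_METRICS if not automated]

print("Categories not yet covered:", sorted(missing) or "none")
print("Metrics still collected manually (need an automation plan):", manual)
```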

Process Efficiency Metrics
  • Feature Points – The project team uses feature points (e.g., story points, use cases) to perform relative sizing of features. The developer assigned to a feature is responsible for identifying how much effort is required to complete the work in each iteration. Based on the duration of each iteration, minus overhead and time off, the team builds an understanding of the number of points the team can complete in each iteration. Over time the team develops efficiencies and estimation tends to improve.
  • Velocity – Velocity measures the amount of work, in feature points, that the team completes in each iteration. It is derived by summing the total points of all the features completed in each iteration (see the sketch following this list).
  • Feature Completion Rate – Feature completion rate describes the number of features completed in each iteration or release.
  • Feature Burndown Chart – Teams use a feature burndown chart to estimate the pace of work accomplished daily. The pace is usually measured in hours of work, although no specific rule prevents the team from measuring in feature points.
  • Release Burnup – Release burnup charts measure the amount of work completed for a given release against the total amount of work planned for the release. Feature points are usually used as the unit of measure to show planned and completed work.
  • Number of Blockers – A blocker is an issue that cannot be resolved by the individual assigned to complete the activity and requires assistance to overcome. Number of blockers describes the number of events that prohibit the completion of an activity.
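
As a worked example, velocity and feature completion rate are simple aggregations over the features a team completes in each iteration. The iteration data and field layout below are hypothetical and shown only for illustration.

```python
# Hypothetical completed features per iteration: {iteration: [(feature, points), ...]}
completed = {
    "Iteration 1": [("Login UI", 5), ("Audit log", 3)],
    "Iteration 2": [("Role management", 8), ("Password reset", 3), ("Report export", 5)],
}

for iteration, features in completed.items():
    velocity = sum(points for _, points in features)  # velocity: total points completed
    print(f"{iteration}: velocity = {velocity} points, "
          f"feature completion rate = {len(features)} features")
```
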
Software Quality Metrics
  • Recidivism Rate – Recidivism describes stories that are returned to the team for various reasons.
  • Defect Count – Defect count measures the number of defects per iteration or release.
  • Change Fail Rate – The percentage of changes to the production system that fail.
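
For illustration, change fail rate and defect count can be computed directly from change and defect records, as in the sketch below; the record formats and figures are assumed, not drawn from any program.

```python
# Hypothetical production change log: True means the change failed in production.
change_failed = [False, False, True, False, False, False, True, False]
change_fail_rate = 100 * sum(change_failed) / len(change_failed)
print(f"Change fail rate: {change_fail_rate:.1f}%")

# Hypothetical defect counts recorded per release.
defects_by_release = {"R1.0": 12, "R1.1": 7, "R1.2": 4}
for release, count in defects_by_release.items():
    print(f"{release}: {count} defects")
```
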
Software Development Progress
  • Deployment Frequency – Deployment frequency provides information on the cadence of deployments in terms of time elapsed between deployments.
  • Progress against Roadmap – Progress measures major capabilities planned versus delivered (see the sketch following this list).
  • MVP / MVCR Achievement Date – The date the minimum viable product (MVP) or minimum viable capability release (MVCR) is delivered.
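
As a minimal illustration, progress against the roadmap compares the capabilities delivered to date with the capabilities planned; the roadmap entries below are hypothetical.

```python
# Hypothetical roadmap: capability -> delivered to date?
roadmap = {
    "User authentication": True,
    "Mission planning module": True,
    "Offline map support": False,
    "Coalition data sharing": False,
}

delivered = sum(roadmap.values())
print(f"Progress against roadmap: {delivered} of {len(roadmap)} capabilities delivered "
      f"({100 * delivered / len(roadmap):.0f}%)")
```
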
Cost Metrics
  • Total Cost Estimate – This metric provides the total estimated cost for the product being developed or the service being acquired. The cost estimation approach can depend on whether the program is seeking services over time (e.g., DevSecOps expert, full stack developer, tester) or product delivery based on a clear set of Agile user requirements (user stories) contained in a product backlog baseline.
  • Burn Rate – Burn rate measures incurred cost over time (e.g., monthly burn rate, iteration burn rate, release burn rate).
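
For example, burn rate is simply incurred cost aggregated over a chosen time span. The sketch below computes an average monthly burn rate and compares cumulative spend against the total cost estimate; all figures are invented for illustration.

```python
# Hypothetical monthly incurred costs and total cost estimate (dollars).
monthly_costs = {"Jan": 410_000, "Feb": 395_000, "Mar": 460_000}
total_cost_estimate = 5_600_000

cumulative = sum(monthly_costs.values())
avg_monthly_burn = cumulative / len(monthly_costs)
print(f"Average monthly burn rate: ${avg_monthly_burn:,.0f}")
print(f"Cumulative spend: ${cumulative:,.0f} of ${total_cost_estimate:,.0f} estimated")
```
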
Capability / Value Delivery Metrics
  • Delivered Features – Count of delivered features measures the business-defined features accepted and delivered.
  • Delivered Value Points – This metric represents the count of value points delivered to users for a given release. Value points are usually defined by the users (or user representatives) to indicate the business value assigned to a given feature or story.
  • Level of User Satisfaction – This metric represents the degree of user satisfaction based on the value delivered by the product or solution.
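
As a brief illustration, delivered value points and user satisfaction are straightforward aggregations of user-assigned values and survey responses; the features, value points, and scores below are hypothetical.

```python
from statistics import mean

# Hypothetical features accepted this release: (feature, user-assigned value points).
accepted = [("Map overlay", 8), ("Unit status feed", 13), ("Export to PDF", 3)]
print(f"Delivered features: {len(accepted)}")
print(f"Delivered value points: {sum(points for _, points in accepted)}")

# Hypothetical user satisfaction survey responses on a 1-5 scale.
survey_scores = [4, 5, 3, 4, 4, 5]
print(f"Average user satisfaction: {mean(survey_scores):.1f} out of 5")
```
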
Metrics Considerations for Programs Implementing Agile Methods

Programs implementing Agile methods should consider taking the following actions to improve implementation success and pace of adoption.

  • Align on metrics to be collected.
  • Identify tools to enable automation of metrics and supporting data to reduce the level of effort required to collect and report on metrics.
  • Document the metrics to be collected in a Metrics Plan (a minimal machine-readable sketch follows this list). The plan should include:
    • The list of metrics to be collected and reported
    • Information on which metrics are automated
    • A plan for automating metrics not yet automated, or justification for why the metric should not be automated
    • Frequency of reporting for each metric
    • Tools used to collect and report metrics.
  • Ensure leadership support of the plan for Agile metrics.
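
One lightweight way to keep a Metrics Plan current, and to ease later automation, is to capture the plan elements listed above in a structured, machine-readable form. The sketch below is a hypothetical example of a single plan entry covering those elements (metric, automation status, automation plan or justification, reporting frequency, and tool); it is not a required format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MetricsPlanEntry:
    """One Metrics Plan entry covering the elements listed above."""
    name: str
    category: str
    automated: bool
    automation_plan: Optional[str]  # plan for automating, or justification for staying manual
    reporting_frequency: str
    tool: str                       # tool used to collect and report the metric

entry = MetricsPlanEntry(
    name="Change Fail Rate",
    category="Software Quality",
    automated=True,
    automation_plan=None,                    # already automated
    reporting_frequency="Per release",
    tool="CI/CD pipeline dashboard",         # hypothetical tool
)
print(entry)
```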