Software Acquisition

Program Management Metrics and Reporting

How To Use This Site

Each page in this pathway presents a wealth of curated knowledge from acquisition policies, guides, templates, training, reports, websites, case studies, and other resources. It also provides a framework for functional experts and practitioners across DoD to contribute to the collective knowledge base. This site aggregates official DoD policies, guides, references, and more.

DoD and Service policy is indicated by a BLUE vertical line.

Directly quoted material is preceded by a link to the Reference Source.

Program Registration

Reference Source: DODI 5000.87 Sections 3.2.b and 3.3.b


Programs using the software acquisition pathway will be identified in component and DoD program lists and databases within 60 calendar days of initiating the planning phase in accordance with DoD’s implementation of Section 913 of Public Law 115-91 on acquisition data analysis.

Program Registration Reporting Guidance

Reference Source: USD(A&S) Guidance

Upon approval to use the Software Acquisition Pathway, programs are requested to notify USD(A&S) by completing the registration form (CAC required) with basic program information, which provides OSD and Congressional insight into pathway usage, and sending it to [email protected]. This will be an interim measure until formal reporting systems are updated to account for the SWP.

Insight Reporting

Reference Source: DODI 5000.87 Section 1.2.l


Programs using the software acquisition pathway will report a set of data to the Office of the USD(A&S) on a semi-annual basis, as defined in the AAF Software Acquisition Pathway Guidance. Data reported under this pathway will be used to monitor the effectiveness of the pathway and will not be used for program oversight.


Reference Source: DODI 5000.87 Section 2.7


DAs are responsible for providing required program data to the USD(A&S) to support management and continuous improvement of the software acquisition pathway.

Semi-Annual Reporting Guidance

Reference Source: USD(A&S) Guidance

Upon entering the SWP, the decision authority will semi-annually provide program reporting data to the OUSD(A&S) to provide insight into the operation of the pathway and to support decisions regarding future pathway improvements. The insight metrics are not intended for program oversight, but to inform DoD leaders and members of Congress on the effectiveness of the software pathway. The semi-annual reporting will revisit the registration data with additional programmatic information (budgets, contract types, IP strategy, and development approach) and metrics that represent pathway performance. The pathway metrics are: Avg Lead Time for Authority to Operate (days), Continuous Authority to Operate In-Place, Mean Time to Resolve Experienced Cyber Incident or CVE, Mean Time to Detect Cyber Incident, Avg Deployment Frequency, Avg Lead Time for Change, Minimum/Maximum Lead Time for Change, Avg Cycle Time, Change Fail Rate, Mean Time to Restore, Value Assessment Rating, and Executive Summary from Last Value Assessment.

Programs using the Software Acquisition Pathway will complete the Semi-Annual Reporting (CAC required) and email it to OUSD(A&S) at [email protected] in April and October. This will be an interim measure until formal reporting systems are updated to account for the Software Pathway.

Program Management Metrics

Reference Source: DODI 5000.87 Section 3.3.b.(11)


Each program will develop and track a set of metrics to assess and manage the performance, progress, speed, cybersecurity, and quality of the software development, its development teams, and ability to meet users’ needs.  Metrics collection will leverage automated tools to the maximum extent practicable. The program will continue to update its cost estimates and cost and software data reporting from the planning phase throughout the execution phase.

Program Management Metrics Guidance

Reference Source: USD(A&S) Guidance

The following sections provide guidance for selecting and leveraging metrics to inform and enable program management. The metrics shared below are notional metrics for programs to consider and tailor to specific program needs. While SWP programs provide a set of insight metrics semi-annually as part of OSD reporting, program offices require additional, more detailed metrics to assess progress, shape decisions, and maximize the program's impact.

As programs establish metrics and collect associated data, they are encouraged to consider:

  • Begin with the end in mind – what key insights do you need to guide decisions at the team, program, and stakeholder levels?
  • Don’t boil the ocean – focus on metrics that provide valuable insight.
  • Always use metrics to problem solve (not punish or blame).
  • Ensure alignment between leadership and team(s) on the key metrics, insights provided, intended audience, and usage.
  • Understand that Agile metrics place heavy emphasis on value delivered and team performance versus cost and schedule.
    • Traditional waterfall projects attempt to fix scope on Day 0 and use that scope to estimate schedule and budget. Adherence to schedule and budget becomes the measure of project success, and metrics are aligned accordingly. Agile efforts estimate scope, shifting the emphasis to metrics for team performance, workflow, quality, and value delivered.
  • Utilize a combination of metrics demonstrating value delivered against program goals/objectives, commitment to Agility, and team performance.
  • Focus on real-time metrics (versus lagging), so decision-makers have actionable information.
  • Automate metrics, data collection and analysis to the greatest extent possible using Agile-based tools (e.g., Jira (Atlassian), PlanView, ServiceNow) built to manage Agile workflow.
    • Note: Target tools that automatically generate data, metrics, and dashboards in real-time that are accessible to the team and decision-makers. This reduces the overhead/administrative burden associated with collecting data/reporting and focuses the team on delivering value.
  • Consider how best to establish a blameless and safe culture to:
    • Allow the team to leverage metrics to maximize opportunities to experiment, fail fast, and continuously improve.
    • Reinforce transparency and candor, providing the ability to share new learning across teams.
    • Ensure metrics/reporting accuracy.
    • Note: Agile focuses on team solutioning over individual success/failure and typically avoids metrics centered on individual performance.
  • Review metrics on a recurring basis to continuously improve.

    Metrics to Consider

    The goal of generating metrics is to provide leadership, the Product Owner, team members, and other key stakeholders with information and insights into the development effort to guide technical/programmatic decision-making, continuous improvement efforts, and remediation of blockers/impediments. Software teams should regularly review metrics as part of their sprint/release retrospectives and leverage metrics both for continuous improvement and to plan future iterations. Programs should have the ability to expand on the minimum set of metrics as needed, considering metrics to measure progress in the following areas:

    Process Efficiency Metrics

    Reference Source: USD(A&S) Guidance

    Metrics related to the efficiency of development and delivery processes. These metrics identify the speed of value delivery, where inefficiencies may exist in the software development process, and support decisions related to how/when/where to change the process. These metrics enable continual process improvement.

    • Story Points – The project team uses story points to estimate the relative size/complexity of each story. The entire team estimates the relative size/complexity of each story in the Product Backlog, or at minimum the stories prioritized by the Product Owner. Story points inform the amount of work the team can complete in each sprint/iteration. Note: Story point estimates are team-specific, and story points completed should not be used to compare one team to another.
    • Velocity – After determining the relative size (points) for each story, the team considers the amount of work (which stories) they can realistically complete in the next iteration/sprint. The sum of the story points for the stories they complete is their velocity. Over multiple iterations/sprints, the team will develop a good understanding of their velocity. The team should work to hold velocity constant or improve it. Teams should factor in time off when estimating velocity for the next iteration. Note: Due to the variation of relative size (story) estimates, velocity is team-specific and should not be used to compare one team against another or for contract progress and incentives.
    • Velocity Variance – The variance in velocity of a specific sprint to the average velocity. Can help the team determine if there are successful activities that they should institutionalize or challenges they should resolve.
    • Story Throughput – If your stories are all relatively the same size, a count of stories completed (throughput) in an iteration can be a simpler or complementary measure.
    • Cycle (Resolution) Time – The time from when a story is in progress (being worked on) until it is completed/delivered. This provides insight into the efficiency of workflow.
    • Cumulative Flow – shows a variety of information: 1) # of stories created over time, which shows amount of work in backlog and if teams are adding to it; 2) # of stories completed over time, which shows if teams are completing work and how much; and 3) amount of work in progress (WIP) over time, which shows if teams are limiting WIP appropriately (typically 1 story per team member at any given point in time).
    • Story Completion Rate – The number of stories completed in each iteration or release.
    • Story Burndown Chart – Shows the amount and pace of work completed/outstanding over time. Paired with velocity, it shows whether the team is on target to complete the work committed to for the iteration/sprint.
    • Sprint Goal Success Rates – shows the #/% of sprints where the Team did and did not achieve the Sprint goal (target velocity).
    • Release Burnup – Release burnup charts measure the amount of work completed for a given release against the total amount of work planned for the release. Usually, story points are used as the unit of measure to show planned and completed work.
    • Number / Percent of Stories Blocked – A blocker is an issue that cannot be resolved by the individual assigned to complete the activity and requires assistance to overcome. The number of blockers describes the number of events that prohibit the completion of an activity. Percent of stories blocked may be more useful: # of stories blocked / total # of stories in the sprint.
    • Time Blocked and Time Blocked Per Story – The time the team was blocked identifies wasted/idle time. Time Blocked Per Story tells how much time the team was blocked per work item, calculated as total time blocked / # of stories in the sprint. High numbers here show that the team has impediments they should address or raise for leadership support.
    • Lead Time – The time from when a story is created/captured from a customer until it is completed/delivered to the customer.
    • Lead Time for Change – The time from code committed to code successfully delivered to the customer.
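
    Several of these process-efficiency metrics reduce to simple arithmetic over story records pulled from an Agile tool. The sketch below is a minimal illustration, not an official calculation; the `Story` structure and its field names are hypothetical stand-ins for whatever your tool exports:

```python
from dataclasses import dataclass

@dataclass
class Story:
    points: int        # relative size/complexity estimated by the team
    cycle_days: float  # time from in-progress to completed/delivered
    blocked: bool      # required outside help to complete

def sprint_metrics(stories: list[Story]) -> dict:
    """Derive basic process-efficiency metrics for one completed sprint."""
    velocity = sum(s.points for s in stories)        # story points completed
    throughput = len(stories)                        # count of stories completed
    avg_cycle = sum(s.cycle_days for s in stories) / throughput
    pct_blocked = 100 * sum(s.blocked for s in stories) / throughput
    return {"velocity": velocity, "throughput": throughput,
            "avg_cycle_days": avg_cycle, "pct_blocked": pct_blocked}

sprint = [Story(3, 2.0, False), Story(5, 4.0, True),
          Story(2, 1.5, False), Story(5, 3.5, False)]
print(sprint_metrics(sprint))  # velocity 15, throughput 4, avg cycle 2.75 days, 25% blocked
```

    In practice, tools such as Jira compute and chart these automatically; a hand-rolled calculation like this is mainly useful for sanity-checking tool output or aggregating across tools.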

    Software Quality Metrics

    Reference Source: USD(A&S) Guidance

    Metrics related to the quality of the work delivered. These metrics identify where in the overall system software quality has been achieved and where it may be degraded. They support identification of specific software architecture/components or team-specific challenges that contribute to degraded quality.

    • Acceptance Rate – # and % of stories delivered vs. accepted.
    • Recidivism Rate – The % of stories returned to the team for various reasons: # of stories returned / total # of stories completed.
    • Defect Count by Story Count – The number of defects per iteration / total number of stories in the iteration.
    • Change Fail Rate – The percentage of changes to the production system that fail.
    • Mean Time to Recover/Restore (MTTR) – the average time it takes to restore a module, component, or system after it fails.
    • Escaped Defects – the number of defects found after they are in production.
    • Code Coverage Rate – Tells the proportion of the lines of code covered by the testing approach.
    • Automated Test Coverage – Tells the % of the system covered by automated testing vs the % manually performed.
    • Release/Deployment Failure Rate – how often deployment results in outages, remediations, or degraded performance.
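
    Most of these quality metrics are simple ratios or averages. As a hedged sketch (the function names and sample numbers are illustrative, not drawn from any DoD system), they might be computed as:

```python
def change_fail_rate(deployments: int, failed: int) -> float:
    """Percent of changes to the production system that fail."""
    return 100 * failed / deployments

def recidivism_rate(returned: int, completed: int) -> float:
    """Percent of stories returned to the team: returned / completed."""
    return 100 * returned / completed

def mean_time_to_restore(outage_hours: list[float]) -> float:
    """Average time to restore a module, component, or system after failure."""
    return sum(outage_hours) / len(outage_hours)

print(change_fail_rate(deployments=40, failed=3))   # → 7.5 (%)
print(recidivism_rate(returned=4, completed=50))    # → 8.0 (%)
print(mean_time_to_restore([1.0, 0.5, 1.5]))        # → 1.0 (hours)
```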

    Software Development Progress Metrics

    Reference Source: USD(A&S) Guidance

    These metrics illustrate the capability delivered versus the capability planned, as well as the cadence of delivering value.  This enables senior leader resourcing decisions and justifications, as well as cost estimation and decisions related to number and size of teams.

    • Release/Deployment Frequency – Cadence of deployments in terms of time elapsed between deployments. Shows how frequently the team delivers value to the customer/end-user.
    • Time Between Releases / Mean Time Between Releases – The actual/average time between releases, showing how often value is pushed out to the customer/end-user.
    • Progress Against Roadmap – Compares targeted versus delivered value to help determine whether appropriate progress is being made. This informs future planning, prioritization, and investment.
    • Achievement Date of MVP/MVCR, Future Release Cadence – Communicates MVP/MVCR release dates and future release cadence.
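
    Mean Time Between Releases can be derived directly from a release history. The sketch below assumes release dates are available as a simple list; it is an illustrative calculation, not a prescribed tool:

```python
from datetime import date

def mean_days_between_releases(release_dates: list[date]) -> float:
    """Average elapsed days between consecutive releases."""
    dates = sorted(release_dates)
    gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
    return sum(gaps) / len(gaps)

releases = [date(2024, 1, 10), date(2024, 1, 24),
            date(2024, 2, 7), date(2024, 2, 28)]
print(mean_days_between_releases(releases))  # gaps of 14, 14, and 21 days
```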

    Cyber Security Metrics

    Reference Source: USD(A&S) Guidance

    Metrics related to protecting software products, infrastructure, and data from unauthorized access and use. These should consider attempts, detection, and remediation.

    • Intrusion Attempts – number of times intrusions were attempted, typically compared against a benchmark and/or # of incidents.
    • Security Incident Rate – #/% of times attackers breached your data, systems, or networks.
    • Mean Time to Detect – Amount of time it takes to discover a security incident/threat.
    • Mean Time to Remediate (MTTR) – the average time it takes to repair/restore a module, component, system to functional use after a security incident.

    Cost Metrics

    Reference Source: USD(A&S) Guidance

    Cost metrics provide insight into the program budget and expenditure rate to guide resource decisions (number of teams required), or technical decisions (how much capability to deliver in a given time span).

    • Total Cost Estimate – The total estimated cost for the product being developed or the service being acquired. The cost estimation approach can depend on whether the program is seeking services over time (e.g., DevSecOps expert, Full Stack Developer, tester) or product delivery based on a clear set of Agile user requirements (user stories) contained in a product backlog baseline.
    • Cost Per Agile Team – The sum of Labor Costs Per Agile Team and Non-Labor Costs Per Agile Team (hardware, software, licensing, and cloud services required by the Agile Team to develop and deliver value to the customer). This is used in conjunction with team performance to estimate the number of teams required to deliver the desired amount of value. Should include day-to-day Agile Team members (e.g., Product Owner, Scrum Master) and exclude Program Management Costs and Customer/User Engagement Costs. May be broken out by the two cost types noted above.
    • Labor Costs Per Agile Team – Only the labor cost based on the Agile Team structure. May be broken out by resource type. Should include day-to-day Agile Team members (e.g., Product Owner, Scrum Master) and exclude Program Management Costs and Customer/User Costs.
    • Non-Labor Costs Per Agile Team – Non-labor costs of hardware, software, licensing, cloud services, storage, networks, and bandwidth required. May be broken out into the categories noted or others as required.
    • Program Management Costs – The size and annual cost of the program management team. This should not include Agile Team roles (e.g., Product Owner, Scrum Master). May be broken out by resource type.
    • Customer/User Costs – The size and annual cost of business engagement (excluding the cost of the Agile Team). This can be estimated leveraging user community representation and required activities captured in the User Agreement (UA). May be broken out by user community.
    • Cost Per Release – The cost per release, trended over time and compared against the value delivered per release.
    • Burn Rate – Incurred cost over time (e.g., monthly burn rate, iteration burn rate, release burn rate).
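
    These cost metrics combine with simple arithmetic: Cost Per Agile Team is the sum of the labor and non-labor components, and burn rate is incurred cost averaged over a period. A minimal sketch (the dollar figures are invented for illustration):

```python
def cost_per_agile_team(labor_costs: float, non_labor_costs: float) -> float:
    """Cost Per Agile Team = labor costs + non-labor costs (HW, SW, licensing, cloud)."""
    return labor_costs + non_labor_costs

def monthly_burn_rate(costs_by_month: list[float]) -> float:
    """Average incurred cost per month over the reported period."""
    return sum(costs_by_month) / len(costs_by_month)

print(cost_per_agile_team(labor_costs=2_400_000, non_labor_costs=350_000))  # → 2750000
print(monthly_burn_rate([200_000.0, 210_000.0, 190_000.0]))                 # → 200000.0
```

    Dividing the total cost estimate by cost per team per period gives a first-order check on how many teams the budget can sustain.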

    Value Metrics

    Reference Source: USD(A&S) Guidance

    Related to evaluating the impact of the work. Value metrics identify the level of significance for each capability and feature/epic from the user and mission perspective. Capabilities and features/epics should each have a priority and a value assignment designated by the user to support prioritization and to provide a cursory view of the value (or significance) of the capability developed to date. This metric can be used in concert with more comprehensive value assessments that must be conducted periodically. The metrics below can be assessed per iteration and/or release.

    • Value Assessment Ratings – ratings from the value assessment to show actual/perceived value of what was delivered to the customer.
    • Delivered Features – Count of delivered features measures the business-defined features accepted and delivered.
    • Delivered Value Points – This metric represents the count of value points delivered to users for a given release. Value points are usually defined by the users (or user representatives) to indicate the business value assigned to a given feature or story.
    • Level of User Satisfaction – This metric represents the degree of user satisfaction based on the value delivered by the product or solution.
    • Net Promoter Score – customer satisfaction score that demonstrates if customers would recommend your product to other users. For example, if you rate on a scale of 1-5, 4-5 would be promoters and 1-3 detractors.
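
    Following the 1-5 scale example given for Net Promoter Score above (the conventional commercial NPS survey uses a 0-10 scale, with 9-10 as promoters and 0-6 as detractors), a sketch of the calculation:

```python
def net_promoter_score(ratings: list[int]) -> float:
    """NPS on a 1-5 scale: 4-5 are promoters, 1-3 are detractors.
    Score = % promoters - % detractors, ranging from -100 to +100."""
    n = len(ratings)
    promoters = sum(r >= 4 for r in ratings)
    detractors = sum(r <= 3 for r in ratings)
    return 100 * (promoters - detractors) / n

print(net_promoter_score([5, 4, 4, 3, 2, 5, 1, 4]))  # 5 promoters, 3 detractors → 25.0
```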

    Earned Value Management (EVM)

    Reference Source: USD(A&S) Guidance

    Current Earned Value Management (EVM) practices often do not align with modern software practices, and SWP programs are encouraged to think outside the box when it comes to programmatic metrics and tracking program execution. DoD seeks to adopt cutting-edge commercial best practices for software development management. In lieu of EVM, SWP programs use program metrics, regular software deliveries, value assessments, roadmaps, and active user engagements to assess the program's progress and health.

     Modern Software Approaches

    • Earlier Warning System – Agile/DevSecOps pipelines are instrumented to continuously monitor development in real time; small batches allow daily status and delivery to the Warfighter.
    • Course Correction – Agile/DevSecOps pipelines generate telemetry that provides insight into all aspects of the development and deployment process. Small-batch development with short sprint cycles allows teams to pivot within a sprint (typically 2 weeks), with daily, on-demand value delivery as needed.
    • Management by Exception – Agile methods provide mechanisms that allow anyone to identify a problem throughout the development process.
    • Communication Tool – Agile toolsets and DevSecOps pipelines are instrumented to provide live dashboards throughout the process. Agile release planning events, demos, and retrospectives provide clear communication of scope, schedule, and functionality delivered.

    The SWP offers a range of program metrics, value assessments, and dynamic roadmaps and backlogs to track progress.

     What software acquisition programs can do:

    While the FAR and DFARS require EVM for cost or incentive contracts >$20M, SWP programs can:

    • Use fixed-price, Time and Materials, or Services contracts (no EVM requirement)
    • Explicitly capture in the Acquisition Strategy the intent to use modern software metrics and practices in lieu of EVM, then issue contracts without EVM clauses following approval from the decision authority.
      • FAR 1.102: “The role of each member of the Acquisition Team is to exercise personal initiative and sound business judgment in providing the best value product or service to meet the customer’s needs…. and minimizing administrative operating costs.”
      • Per DODI 5000.02, “tailor in” the regulatory information requirements that will be used to describe the management of the program. In this context, “tailoring in” means that the PM will identify, and recommend for MDA/DA approval, the regulatory information that will be employed to document program plans and how that information will be formatted and provided for review by the decision authority.
      • Congress, in FY18 NDAA Section 874, directed DoD to establish Agile Software Development Pilot Programs and explicitly exempted these pilot programs from Earned Value Management or EVM-like reporting, along with traditional predefined schedules, plans, and requirements.

    Agile Metrics Guide (pdf)