Software Acquisition


Value Assessment

How to use this site

Each page in this pathway presents curated knowledge drawn from acquisition policies, guides, templates, training, reports, websites, case studies, and other resources, and provides a framework for functional experts and practitioners across DoD to contribute to the collective knowledge base. This site aggregates official DoD policies, guides, references, and more.

Directly quoted material is preceded by a link to its Reference Source.

 

 

Reference Source: DODI 5000.87 Glossary

 

Value Assessment: An outcome-based assessment of mission improvements and efficiencies realized from the delivered software capabilities, and a determination of whether the outcomes have been worth the investment.  The sponsor and user community perform value assessments at least annually, to inform DA and PM decisions.

 

Reference Source: DODI 5000.87 Section 3.1.e

 

Value assessments will be performed at least annually after the software is fielded to determine if the mission improvements or efficiencies realized from the delivered software are timely and worth the current and future investments from the end user perspective.  More frequent value assessments are encouraged if practical.

 

Reference Source: DODI 5000.87 Section 3.3.b.(12)

 

The sponsor and user community will perform a value assessment at least annually on the software delivered.  The sponsor will provide feedback on whether the mission improvements or efficiencies realized from the delivered software capabilities are timely and worth the investment.  The feedback should be informed by test and evaluation results.  The DA, sponsor, and PM will use the value assessments to assess progress on the program, update strategies, designs, and inform resourcing decisions.

Agile has improved delivery of value and shortened the feedback loop from an average of three years to 10 months. This helps ensure that we are not delivering obsolescence (requirements defined at the start of a three-year cycle risk being obsolete by delivery). We can also capture better feedback faster over the life of the product to evolve it in future releases.

Information Screening and Delivery System (ISDS), an 873 Agile Pilot Program

Value Assessment Guidance

Reference Source: USD(A&S) Guidance

A value assessment reveals how much impact the software has on the mission from the end user’s perspective. The value assessment measures the program’s progress toward meeting user needs and enables the program to determine if the cost of the software development effort is commensurate with the value it provides. The assessment is typically a combination of subjective and objective measures.

Guiding Principles of a Value Assessment

Programs should conduct thorough value assessments at least annually but should also consider streamlined value assessments at regular intervals (e.g., at each demonstration or release). The timing of the VA is negotiated between the sponsor and the program office; the User Agreement template has a dedicated section to capture when the sponsor should provide inputs to the VA. In practice, program offices typically initiate the VA (as is common for acquisition documentation) and then gather feedback from the user community. There is no rule against a highly engaged sponsor taking ownership of the VA, since it is fundamentally their assessment; absent a sponsor in that role, the onus is on the program office to initiate the process.

Assessments should be accomplished in a program-appropriate manner that leverages planned activities to the extent possible. For example, information derived from developmental and operational tests can inform a value assessment. Field exercises, laboratory experiments, and operator-led capability demonstrations may also help determine whether value assessment factors have been satisfied, or why the desired level of value was not achieved. Software pipeline and help desk tools may likewise serve as sources of automated data collection that reflect software development performance and timelines for responding to problems and dynamic warfighter needs.

Notional Scenarios of Value Assessment Frequencies


Programs must consider several factors when determining the value or impact that software has on operations. These factors identify specific benefits that result from using the new software. Overall user satisfaction is one aspect of determining value, but programs should also consider other objective measures, such as:

  • Increase in mission effectiveness: Mission effectiveness measures the extent to which the system contributes to mission accomplishment. Ideally, mission effectiveness is evaluated using quantitative or statistical measures, although in some cases it may be evaluated through a qualitative user assessment. Examples of mission-based metrics include lethality, survivability, accuracy, etc.
  • Cost efficiencies: Cost savings apply to the system, related systems, and mission operations. Programs should use quantitative data if possible.
  • User workload reduction: Workload reduction is the reduction in the amount of effort required by an operator to accomplish a mission task. Effort relates to the number of operations required to perform the task or the cognitive ability required to perform the task. This measure can be qualitative or quantitative.
  • User manpower reduction: Manpower reduction is a reduction in the number of operators required to accomplish a mission task. This measure should be quantitative.
  • Equipment footprint reduction: Footprint reduction is a reduction in the amount of equipment required in the field in order to accomplish the mission. Equipment includes computers, racks, servers, vehicles, support equipment, or any other items that deploy with the system. Programs should use quantitative data if possible.
  • Number and significance of field reports (software trouble reports): Field reports document problems that the end users experience with the software. Problems can include software defects, missing features, inadequate performance, etc. Programs should use quantitative data.
  • Software Development Performance: Enhancing responsiveness to warfighter needs can include increasing software deployment frequency to provide new capabilities, add features, or address defects on a faster cycle. Other metrics such as quality (based on change fail rate) can also be used with the intent of demonstrating if the program is meeting its software development potential as part of delivering a high level of value to the user. 
  • Enhanced alignment with “to-be” architectures: Aligning to “to-be” architectures can enable   interoperability, update to commonly-used data standards, make product solutions more adaptable, and better support broader mission architectures and threads.  
  • User adoption and user satisfaction: User adoption is a quantitative measure that considers how many military units use the software in the field. Some software applications are fielded but are not used, so user adoption measures how useful the military units find the software. User satisfaction is a subjective assessment by the end user that considers usability, suitability, reliability, and overall impact that the software has on performing the mission.
  • Net promoter score (NPS): A simple and effective method to determine whether customers/end users are happy with the value generated by your product/service. Customers/end users rate the product/service on a scale of 1-10 by answering the question, “How likely are you to recommend this product/service to a friend or colleague?” Ratings of 1-6 are considered detractors (unhappy customers who may spread negative word of mouth), 7-8 are considered passives (satisfied but unenthusiastic customers), and 9-10 are considered promoters (loyal champions who spread positive word of mouth). The NPS is the number of promoters minus the number of detractors, divided by the total number of respondents. For example, 50 respondents with 25 promoters and 25 detractors yield an NPS of (25-25)/50, or 0%; 50 respondents with 35 promoters and 15 detractors yield (35-15)/50, or 40%. The goal is to deliver enough value that the NPS rises over time.
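The NPS arithmetic described above can be sketched in a few lines. This is a notional helper, not part of any program's tooling; the function name is illustrative, and the 1-6 / 7-8 / 9-10 bands follow the text.

```python
def nps(ratings):
    """Return net promoter score as a percentage, per the bands above:
    <=6 detractor, 7-8 passive, >=9 promoter."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Examples from the text:
print(nps([9] * 25 + [3] * 25))  # 0.0  (25 promoters, 25 detractors, 50 total)
print(nps([9] * 35 + [3] * 15))  # 40.0 (35 promoters, 15 detractors, 50 total)
```

Note that passives count toward the total number of respondents but neither add to nor subtract from the score.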
Overall Rating Conventions and Notional Example of Supporting Metrics and Measures

Reference Source: USD(A&S) Guidance   

 

Overall Rating  

The value assessment should include an overall value rating for the assessment cycle. SWP semi-annual reporting proposes five categories: No Value, Low Value, Moderate Value, High Value, and Exceptional Value. These can be defined in greater detail when establishing the program assessment approach for the cycle. Possible definitions include:

  • Exceptional Value: Exceeded expectations across all measurement areas.
  • High Value: Exceeded expectations across one or more measurement areas.
  • Moderate Value: Met most of the stated expectations in the measurement areas.
  • Low Value: Met some of the stated expectations across the measurement areas.
  • No Value: Met none of the stated expectations across the measurement areas.

Potential Objective Metrics Categories 

  • Software Development Performance: Quantitative improvements in software development trends as determined by the program metrics (such as lead time, deployment frequency, change fail rate, and mean time to restore).
  • Increase in Mission Effectiveness: Quantitative improvements to lethality, survivability, accuracy, etc. 
  • Cost Efficiencies: Quantitative cost savings achieved for the system, related systems or mission operations.  
  • User Workload Reduction: Quantitative efficiencies achieved by reducing the number of operations required to perform a task or the cognitive ability required to perform the task. 
  • Manpower Reduction: Quantitative reductions achieved in the number of operators required to accomplish a mission task.  
  • Equipment Footprint Reduction: Quantitative reductions in the amount of equipment required in the field to accomplish the mission.  
  • User Adoption: Quantitative measurement of the number of military users/organizations that voluntarily adopt a fielded application.  This can be measured by comparing expected adoption rates based on communicated needs and the overall rate of adoption. 

 

Notional examples of objective and subjective assessment measures 

To assess achieved increases to mission effectiveness. 

Measurement | Improvement Goal | Mission Effectiveness With New Features | Assessed Value
ID Range | From 50 km to 70 km | 80 km | Exceeded goal. Can identify targets 10 km farther than planned, which increases engagement opportunities by x%.
Accuracy | From 60% to 70% | 80% | Exceeded goal. 20% more reports accurate; reduced risk of fratricide by x%.
Operating Time | From 100 hours to 150 hours | 150 hours | Met goal. New software improves power utilization and increases operating time.
Value Assessed: High Value

 

To assess software development performance. This measure focuses on evaluating the means by which operational value is generated.

Measurement | Expected Performance | Achieved Performance | Assessed Value
Deployment Frequency | 6x/yr for the highest prioritized features | 4x/yr, mostly highly prioritized features | Did not meet goal. The releases delivered, however, provided important capability.
Change Fail Rate | <6% | 10% | Did not meet goal. The program still achieved reasonable fail rate levels.
Value Assessed: Moderate Value
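As a sketch of how the two metrics in the table above might be tallied from release records. The data and field names are made up for illustration; a real program would pull them from its pipeline or issue-tracking tooling.

```python
# Illustrative tally: deployment frequency (releases per year) and change
# fail rate (share of changes that required remediation). Data are notional.
releases_this_year = 4
changes = [{"id": n, "failed": n in (3, 12)} for n in range(1, 21)]  # 2 of 20 failed

change_fail_rate = sum(c["failed"] for c in changes) / len(changes)
print(releases_this_year)         # 4 deployments vs. a goal of 6
print(f"{change_fail_rate:.0%}")  # 10% vs. a goal of <6%
```

Comparing these computed values against the expected performance column yields the "did not meet goal" entries shown in the table.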

 

To assess value achieved in areas that the program office is less able to directly influence but where the quality of the product plays a major role.

Measurement | Expected Level | Achieved Level | Assessed Value
User Adoption | Gain 100 users/month | Gained 500 users/month | The product, developed for a wide set of users, was deemed extremely useful, and adoption is growing beyond expectations.
Value Assessed: Exceptional Value

 

To assess value across a wide swath of users based on user satisfaction ratings.  

Measurement | Reported Level Last Cycle | Reported Level This Cycle | Assessed Value
User Satisfaction | 63% | 72% | The delivered improvements to the product are increasing user satisfaction.
Value Assessed: Exceptional Value

 

To assess Return on Investment via a ratio of capability gained against resources expended. 

Capability | Resources Expended | Development Team Usage | Assessed Value
Enhanced Targeting | $3.2M / $106.8M | 1/10 Teams | High Value. This improvement was well worth the resources expended.
Decreased Battlefield Assessment Time | $50.7M / $106.8M | 4/10 Teams | Moderate Value. This improvement was helpful but only marginally worth the resources expended.
Improved Data Collection Rate | $52.9M / $106.8M | 5/10 Teams | Moderate Value. This improvement was helpful but only marginally worth the resources expended.
Overall ROI Assessed: Moderate Value. Valuable improvements were made, but overall they were only marginally worth the high cost.
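The ratios in the notional ROI table above can be checked with a short sketch. The figures are copied from the example; the dictionary layout is illustrative.

```python
# Each capability's spend as a share of total program spend ($M),
# using the figures from the notional ROI table above.
spend = {
    "Enhanced Targeting": 3.2,
    "Decreased Battlefield Assessment Time": 50.7,
    "Improved Data Collection Rate": 52.9,
}
total = sum(spend.values())  # 106.8
for capability, cost in spend.items():
    print(f"{capability}: {cost / total:.1%} of total spend")
```

Pairing each cost share against the capability's assessed value makes the roll-up explicit: Enhanced Targeting delivered high value for about 3% of the spend, while the other two capabilities consumed roughly 97% for moderate value.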

 

 

 To assess design improvements that add value and position the program for future capability while not immediately delivering user features in this assessment cycle. 

Value-Added Activities | Resources Expended | Development Team Usage | Assessed Value
Improved System Modularity | $8M / $20M | 1/3 Teams | High Value. The program implemented numerous design changes that improve its ability to add future capabilities more seamlessly and cost-effectively.
Reduced Technical Debt | $7M / $20M | 1/3 Teams | High Value. The program significantly burned down its technical debt by updating inefficient legacy code, refactoring to improved design patterns, documenting key processes, and automating test processes.
Improved Developer Environment | $5M / $20M | 1/3 Teams | Moderate Value. Onboarded new tools, updated data libraries for easier access, optimized workflows, and conducted targeted training to improve productivity.
Overall ROI Assessed: High Value. The system and the developer teams are significantly better postured to deliver future capabilities at a faster rate and with less impact to users.

 
