Software Acquisition


Value Assessment


Reference Source: DODI 5000.87 Glossary


Value Assessment: An outcome-based assessment of mission improvements and efficiencies realized from the delivered software capabilities, and a determination of whether the outcomes have been worth the investment.  The sponsor and user community perform value assessments at least annually, to inform DA and PM decisions.


Reference Source: DODI 5000.87 Section 3.1.e


Value assessments will be performed at least annually after the software is fielded to determine if the mission improvements or efficiencies realized from the delivered software are timely and worth the current and future investments from the end user perspective.  More frequent value assessments are encouraged if practical.


Reference Source: DODI 5000.87 Section 3.3.b.(12)


The sponsor and user community will perform a value assessment at least annually on the software delivered.  The sponsor will provide feedback on whether the mission improvements or efficiencies realized from the delivered software capabilities are timely and worth the investment.  The feedback should be informed by test and evaluation results.  The DA, sponsor, and PM will use the value assessments to assess progress on the program, update strategies, designs, and inform resourcing decisions.

Value Assessment Guidance

Reference Source: USD(A&S) Guidance

A value assessment reveals how much impact the software has on the mission from the end user’s perspective. The value assessment measures the program’s progress toward meeting user needs and enables the program to determine if the cost of the software development effort is commensurate with the value it provides. The assessment is typically a combination of subjective and objective measures.

Guiding Principles of a Value Assessment

Programs should conduct value assessments at least annually, or more frequently as determined by the user community and the Program Manager. When the version of the software to be assessed is delivered, the Program Manager and the user community can set up a field exercise, operator-led demonstration, test event, or laboratory experiment to conduct the assessment. The users will operate the software and identify specific impacts to operations from an end user’s perspective.

Programs must consider several factors when determining the value or impact that software has on operations. These factors identify specific benefits that result from using the new software. Overall user satisfaction is one aspect of determining value, but programs should also consider other objective measures, such as:

  • Increase in mission effectiveness: Mission effectiveness measures the extent to which the system contributes to mission accomplishment. Ideally, mission effectiveness is evaluated using quantitative or statistical measures, although in some cases it may be evaluated through a qualitative user assessment. Examples of mission-based metrics include lethality, survivability, accuracy, etc.
  • Cost efficiencies: Cost savings apply to the system, related systems, and mission operations. Programs should use quantitative data if possible.
  • User workload reduction: Workload reduction is the reduction in the amount of effort required by an operator to accomplish a mission task. Effort relates to the number of operations required to perform the task or the cognitive ability required to perform the task. This measure can be qualitative or quantitative.
  • User manpower reduction: Manpower reduction is a reduction in the number of operators required to accomplish a mission task. This measure should be quantitative.
  • Equipment footprint reduction: Footprint reduction is a reduction in the amount of equipment required in the field in order to accomplish the mission. Equipment includes computers, racks, servers, vehicles, support equipment, or any other items that deploy with the system. Programs should use quantitative data if possible.
  • Number and significance of field reports (software trouble reports): Field reports document problems that the end users experience with the software. Problems can include software defects, missing features, inadequate performance, etc. Programs should use quantitative data.
  • User adoption and user satisfaction: User adoption is a quantitative measure that considers how many military units use the software in the field. Some software applications are fielded but are not used, so user adoption measures how useful the military units find the software. User satisfaction is a subjective assessment by the end user that considers usability, suitability, reliability, and overall impact that the software has on performing the mission.
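As a notional illustration of how these factors might be captured for a single assessment event, the sketch below (Python) records objective before/after measurements alongside subjective end-user ratings. The class and field names, the 1-5 rating scale, and the roll-up method are assumptions made for illustration; they are not prescribed by DoDI 5000.87 or the software acquisition pathway.

    from dataclasses import dataclass, field
    from typing import Optional

    # Hypothetical record of a single value-assessment factor. Objective factors
    # (e.g., manpower reduction) carry measured before/after values; subjective
    # factors (e.g., user satisfaction) carry an end-user rating instead.
    @dataclass
    class FactorResult:
        name: str                         # e.g., "User manpower reduction"
        baseline: Optional[float] = None  # measurement without the new software
        observed: Optional[float] = None  # measurement with the new software
        rating: Optional[int] = None      # subjective 1-5 end-user rating, if used
        notes: str = ""                   # operator comments from the event

    # Hypothetical container for one assessment event (field exercise,
    # operator-led demonstration, test event, or laboratory experiment).
    @dataclass
    class ValueAssessmentEvent:
        software_version: str
        event_type: str
        factors: list[FactorResult] = field(default_factory=list)

        def quantitative_deltas(self) -> dict[str, float]:
            """Measured change (observed - baseline) for the objective factors."""
            return {
                f.name: f.observed - f.baseline
                for f in self.factors
                if f.baseline is not None and f.observed is not None
            }

    # Example use for one event:
    event = ValueAssessmentEvent("2.1", "operator-led demonstration")
    event.factors.append(FactorResult("User manpower reduction", baseline=4, observed=3))
    event.factors.append(FactorResult("User satisfaction", rating=4))
    print(event.quantitative_deltas())  # {'User manpower reduction': -1}

Capturing the objective and subjective inputs in one place makes it easier to roll the event results into the overall value judgment described below.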

The overall value of the software is the culmination of the assessments of all factors evaluated during the event, across all of the new features. The user community, including end users, evaluates the software from these perspectives to determine the extent to which it improves the conduct or outcome of mission operations. Table 1 shows a notional example of software evaluation factors:

Table 1 – Software Evaluation Factors Example

Feature(s): Feature X, Feature Y

| Evaluation Factor | Mission Without New Software | Mission With New Software | Value |
| Mission Effectiveness – ID Range | 50 km | 80 km | Can identify targets 30 km farther; increased engagement opportunities by x% |
| Mission Effectiveness – Accuracy | 60% | 80% | 20% more reports accurate; reduced risk of fratricide by x% |
| Mission Effectiveness – Operating time | 100 hours | 150 hours | New software manages power utilization and efficiency, increasing operating time |
| Associated Cost – parts | 50K | 14K | Power efficiencies reduced the number of times the system must be recycled and reduced the replacement parts needed |
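As a rough arithmetic companion to Table 1, the snippet below uses the notional numbers above to compute the absolute and percentage change for each without/with pair. The percent-of-baseline framing is an assumption, and the operational impacts shown as "x%" in the table still require mission analysis that this simple arithmetic cannot supply.

    # Notional without/with measurements from Table 1 (units noted per entry).
    measurements = {
        "ID Range (km)":               (50, 80),
        "Accuracy (%)":                (60, 80),
        "Operating time (hours)":      (100, 150),
        "Associated cost - parts (K)": (50, 14),
    }

    for name, (without, with_new) in measurements.items():
        delta = with_new - without
        pct_change = 100.0 * delta / without
        print(f"{name}: {delta:+g} ({pct_change:+.0f}% vs. baseline)")
    # e.g., "ID Range (km): +30 (+60% vs. baseline)" corresponds to the
    # 30 km improvement called out in the Value column of Table 1.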

During the planning phase, the Program Manager and user community should collaboratively define the plan to measure value (including factors to consider, metrics to collect, periodicity of the assessments, etc.). During the execution phase, the user will assign a priority and value to each feature in the development backlog. The program can then develop automated value metrics to calculate a value score based on the importance of the feature (priority) and the value assigned to the feature. The automated value score provides a meaningful way to measure progress during software development.
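One minimal sketch of such an automated value score, assuming a simple weighted-sum model in which each backlog feature carries a user-assigned priority and value on a 1-to-5 scale and the score is the share of planned, priority-weighted value already delivered. The scales, the hypothetical features, and the normalization to 100 are illustrative choices, not a mandated formula.

    # Hypothetical backlog entries:
    # (feature name, user-assigned priority 1-5, user-assigned value 1-5, delivered?)
    backlog = [
        ("Feature A", 5, 5, True),
        ("Feature B", 4, 4, True),
        ("Feature C", 2, 3, False),
    ]

    def value_score(items):
        """Share of planned, priority-weighted value delivered so far (0-100)."""
        planned = sum(priority * value for _, priority, value, _ in items)
        delivered = sum(priority * value for _, priority, value, done in items if done)
        return 100.0 * delivered / planned if planned else 0.0

    print(f"Automated value score: {value_score(backlog):.0f} / 100")  # 87 / 100

Tracked release over release, a score of this kind gives the DA, sponsor, and PM a simple trend line on whether delivered work is concentrating on the features users value most.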
