Software Acquisition


How to use this site

Each page in this pathway presents curated knowledge drawn from official DoD acquisition policies, guides, templates, training, reports, websites, case studies, and other resources. It also provides a framework for functional experts and practitioners across DoD to contribute to the collective knowledge base.

DoD and Service policy is indicated by a BLUE vertical line.

Directly quoted material is preceded with a link to the Reference Source.

Capability Needs Statement

Reference Source: Software Acquisition Pathway Interim Policy and Procedures, 3 Jan 2020


A high-level capture of need that provides enough information to define the software solution space, considering the threat environment. The CNS must accompany the UA. The CNS must be periodically updated to reflect the actual baseline as necessary.

Reference Source: Software Acquisition Pathway Guide v1.0

The program sponsor leads the user community in the development of a Capability Needs Statement (CNS), which must be established prior to acquisition of major software capabilities. A CNS identifies the operational gaps and the operational capabilities needed to close the gap, accomplish the mission, and combat threats from adversaries. A CNS should provide enough information to define the software solution space, considering the threat environment. To initiate the planning phase of a software program, the sponsor should establish an initial CNS identifying the operational gaps and briefly describing the operational capability. An approved CNS is required to enter the execution phase of a software program.

The formal CNS is developed when the Service or Component Acquisition Executive determines that a software-based materiel solution will satisfy the capability need. Existing programs with established Joint Capabilities Integration and Development System (JCIDS) requirements documents should transition to a CNS to migrate to the Software Acquisition Pathway. The CNS for programs with JCIDS documents should start with prioritized capabilities drawn from the existing JCIDS document(s). All programs using this pathway should prioritize the capabilities in the CNS and load them into the backlog (i.e., the list of software features and changes to be made, which allows dynamic reprioritization of work as needed, traces the work to planned iterations, and tracks the progress of each) for traceability; a minimal backlog sketch follows the scenarios below. Once the capabilities are defined and prioritized in the backlog, they can be decomposed into more detailed feature sets, which are managed as flexible products. The scenarios below illustrate the use of a CNS:

  • If a completely new capability is required that does not build upon an existing system, the program should develop a CNS and have it approved via an expedited Component, Agency, or Service validation process, with an expedited joint process if there are joint implications.
  • If an existing software program (still in development) has an approved JCIDS document, the CNS can be generated from the JCIDS requirements and used to refine the software requirements. If the system's requirements are sufficiently flexible, the JCIDS document can serve as the CNS; otherwise, the CNS becomes the governing requirements document for the software portion of the system.
  • Existing programs still in development, with validated JCIDS requirements for software embedded in platforms or weapon systems, should generate the CNS to identify and capture specific software requirements. The CNS will serve as the governing document for the software portion of the system. The CNS must identify software capabilities tied to system-wide Key Performance Parameters (KPPs) because those capabilities will be assessed in concert with the rest of the system during test and evaluation.
  • If a program for an existing fielded system identifies a new capability requirement (e.g., integration of a new interface with another system or new subsystem, a new application or set of applications to be added to an existing platform, new modules or services that implement new algorithms or perform new services, etc.), the CNS must capture the new required capability and guide development.
  • If an existing fielded system with validated JCIDS requirements is being modernized, the sponsor should generate a CNS to guide the modernization effort.
  • If a program for an existing fielded system identifies a better way to achieve the objectives of the materiel solution that has been fielded (to improve performance, efficiency, interoperability, etc.), then it can use a CNS to guide the development of the modification/upgrade.
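To make the backlog mechanics described above concrete, the following is a minimal sketch in Python. It is illustrative only; the class names, fields, and priority scheme are assumptions, not structures prescribed by the pathway guidance. It shows a prioritized backlog that traces each item to a CNS capability, supports dynamic reprioritization, and decomposes capabilities into more detailed feature sets:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BacklogItem:
    """One capability or feature traced back to the CNS (names are hypothetical)."""
    title: str
    priority: int                    # lower number = higher priority
    cns_capability: str              # traceability link to the CNS
    iteration: Optional[str] = None  # planned iteration, once scheduled
    status: str = "not started"      # e.g., "not started", "in progress", "done"
    children: List["BacklogItem"] = field(default_factory=list)  # decomposed feature set

class Backlog:
    """Prioritized list of features and changes; supports dynamic reprioritization."""

    def __init__(self) -> None:
        self.items: List[BacklogItem] = []

    def add(self, item: BacklogItem) -> None:
        self.items.append(item)
        self.items.sort(key=lambda i: i.priority)  # keep highest priority first

    def reprioritize(self, title: str, new_priority: int) -> None:
        for item in self.items:
            if item.title == title:
                item.priority = new_priority
        self.items.sort(key=lambda i: i.priority)

# Usage: load prioritized CNS capabilities, then decompose one into a feature set.
backlog = Backlog()
capability = BacklogItem("Target identification", priority=1, cns_capability="CNS-3.1")
capability.children.append(
    BacklogItem("Extend ID range", priority=1, cns_capability="CNS-3.1", iteration="Iteration 4")
)
backlog.add(capability)
```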

Guiding Principles of a Capability Needs Statement

The following are key characteristics of a CNS:

  • A short, high-level document scaled to the size, scope, and complexity of the operational need
  • Focused on operational needs, key features, and interfaces, not system specifications
  • Conveys high-level priorities, timelines, and operational constraints
  • Provides the foundation for dynamic management of capabilities (or features) throughout development
  • Aligns key themes from JCIDS, TechFAR, and modern software development methods

Contents of a Capability Needs Statement

  • Cover Page
  • Signature Page
  • Related Program(s) or Requirements Document(s)
  • Executive Summary
  • Document Body. The document shall be no more than 10 pages long, including any content modified or augmented by a classified Annex, if used.
    • Section 1: Operational Context
    • Section 2: Threat Summary
    • Section 3: Capability Discussion
    • Section 4: Program Summary
    • Section 5: Performance Attributes
    • Section 6: Joint Interoperability

User Agreement

Reference Source: Software Acquisition Pathway Interim Policy and Procedures, 3 Jan 2020


An agreement between the operational and acquisition communities to gain commitment to continuous user involvement and assign decision-making authority in the development and delivery of software capability releases, as well as operational tradeoffs among software features, cadence, and management of the requirements backlog. The UA will ensure proper resourcing of operational user involvement, which should occur as frequently as possible to support the development process.


Reference Source: Software Acquisition Pathway Guide v1.0

A User Agreement is a written and signed agreement developed by the Program Manager and the user community that ensures the user is properly represented and engaged throughout the software development and delivery process. The User Agreement sets expectations for developer/user interaction and cadence; prioritization of features against the overarching CNS, Minimum Viable Product (MVP), and Minimum Viable Capability Release (MVCR); user training; and user acceptance testing and decisions regarding software release and deployment. It must be signed by all relevant stakeholders before a software program enters the execution phase. Because Agile software methodologies require extensive and continuous user engagement, users must make a greater commitment to software programs than to traditional programs. This level of engagement may require separate funding resources to secure the needed commitment.

Establishing Roles and Responsibilities in the User Agreement

The User Agreement captures the roles and responsibilities of the program office and user community. The intent is to establish a clear understanding of who is empowered to make decisions regarding feature identification, prioritization, and timing and scope of software iterations and releases. Although each program will have a unique set of specific functional roles, the list below identifies some of the typical user roles involved in cooperative software development programs.

  • Operational sponsor: represents operational needs at the highest level.
  • Capability sponsor: owns the CNS and is responsible for prioritizing features in the overall product roadmap.
  • Program manager: responsible for contracting for, acquiring, delivering, and deploying the software capability.
  • Requirements manager: responsible for developing and maintaining user requirements, both in the initial CNS and as the program evolves.
  • Operational users: the intended end users of the software system; they work directly with the software development team during the development cycle.

The software acquisition pathway is designed to be tailorable, and the list above is meant to present only a broad set of roles. For smaller software efforts, all user-related roles (operational sponsor, capability sponsor, requirements manager, and operational users) could be combined into a single role. At the most basic level, the key roles are the Program Manager, the software development team, and users.

It is critical that programs have the right balance of users representing the entire user population. If a program is overly focused on a specific user pool (e.g., from a single Military Occupational Specialty (MOS) or a single military unit), the software may be optimized for that pool and suboptimal for the rest of the user population. One established best practice is to designate ‘user representatives’ who have a deep understanding of the mission and are familiar with modern software methods. Another technique is to use test detachments specially designated to work with software programs to prioritize features and evaluate early versions of software; test detachments are populated with operational users who return from deployment and spend a rotation in the detachment. Other techniques include using beta test sites or embedding end users within the software development team. The Sponsor and PM should consider collocating users and developers to facilitate the exchange of information. The User Agreement should identify the technique(s) the program will use to engage end users.

Activities Covered Under a User Agreement

The User Agreement identifies the types of activities that require user engagement and sets clear expectations about decision-making authority.

  • Feature identification and prioritization – The designated users will work with the Program Manager to identify software features and determine prioritization, as well as the expectations surrounding MVP, MVCR, and the product roadmap (capability over time). The users will also provide input to the product roadmap as it evolves.
  • CNS development and refinement – User engagement will be required to develop, refine, and maintain the CNS. The initial CNS will be refined during the planning phase.
  • End user testing – The User Agreement will identify how user testing will be accomplished (test plan, test environment) prior to deployment and rollout of the software.
  • Deployment and rollout decisions – The User Agreement should specify what sites will receive new features/capabilities and the sequencing of rollout locations. It should also document the decision-making authorities for ‘go/no-go’ decisions.
  • Human System Interface (HSI) Assessment – The HSI Assessment is the formal reporting mechanism to ensure usability and operational suitability issues are managed and/or mitigated as the software matures. It also codifies how user testing was accomplished and how the program ensured an optimal user experience, and it reports any outstanding user performance risks that could be mitigated in future software releases or upgrades.

Value Assessment

Reference Source: Software Acquisition Pathway Interim Policy and Procedures, 3 Jan 2020


The sponsor and user community shall perform a value assessment at least annually and provide justification for resources expended to deliver capabilities. End users will determine if the mission improvements and/or efficiencies realized from the delivered software capabilities are worth the investment. The decision authority will use the value assessments to measure progress on the program and inform resourcing decisions. Additional interim value assessments can be performed at any periodicity agreed to by the user and the PM.

Reference Source: Software Acquisition Pathway Guide v1.0

A value assessment reveals how much impact the software has on the mission from the end user’s perspective. The value assessment measures the program’s progress toward meeting user needs and enables the program to determine if the cost of the software development effort is commensurate with the value it provides. The assessment is typically a combination of subjective and objective measures.

Guiding Principles of a Value Assessment

Programs should conduct value assessments at least annually, or more frequently as determined by the user community and the Program Manager. When the version of the software to be assessed is delivered, the Program Manager and the user community can set up a field exercise, operator-led demonstration, test event, or laboratory experiment to conduct the assessment. The users will operate the software and identify specific impacts to operations from an end user’s perspective.

Programs must consider several factors when determining the value or impact that software has on operations. These factors identify specific benefits that result from using the new software. Overall user satisfaction is one aspect of determining value, but programs should also consider other objective measures, such as:

  • Increase in mission effectiveness: Mission effectiveness measures the extent to which the system contributes to mission accomplishment. Ideally, mission effectiveness is evaluated using quantitative or statistical measures, although in some cases it may be evaluated through a qualitative user assessment. Examples of mission-based metrics include lethality, survivability, accuracy, etc.
  • Cost efficiencies: Cost savings apply to the system, related systems, and mission operations. Programs should use quantitative data if possible.
  • User workload reduction: Workload reduction is the reduction in the amount of effort required by an operator to accomplish a mission task. Effort relates to the number of operations required to perform the task or the cognitive demand the task places on the operator. This measure can be qualitative or quantitative.
  • User manpower reduction: Manpower reduction is a reduction in the number of operators required to accomplish a mission task. This measure should be quantitative.
  • Equipment footprint reduction: Footprint reduction is a reduction in the amount of equipment required in the field in order to accomplish the mission. Equipment includes computers, racks, servers, vehicles, support equipment, or any other items that deploy with the system. Programs should use quantitative data if possible.
  • Number and significance of field reports (software trouble reports): Field reports document problems that the end users experience with the software. Problems can include software defects, missing features, inadequate performance, etc. Programs should use quantitative data.
  • User adoption and user satisfaction: User adoption is a quantitative measure that considers how many military units use the software in the field. Some software applications are fielded but are not used, so user adoption measures how useful the military units find the software. User satisfaction is a subjective assessment by the end user that considers usability, suitability, reliability, and overall impact that the software has on performing the mission.

The overall value of the software aggregates the assessments of all factors evaluated during the event across all new features. The user community, including end users, evaluates the software from these perspectives to determine the extent to which it improves the conduct or outcome of mission operations. Table 1 shows a notional example of software evaluation factors:

Table 1 – Software Evaluation Factors Example

Feature(s): Feature X, Feature Y

| Factor | Mission Without New Software | Mission With New Software | Value |
| --- | --- | --- | --- |
| Mission effectiveness – ID range | 50 km | 80 km | Can identify targets 30 km farther; increased engagement opportunities by x% |
| Mission effectiveness – accuracy | 60% | 80% | 20% more reports accurate; reduced risk of fratricide by x% |
| Mission effectiveness – operating time | 100 hours | 150 hours | New software manages power utilization more efficiently, increasing operating time |
| Associated cost – parts | $50k | $14k | Power efficiencies reduced the number of times the system must be recycled and the replacement parts needed |
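As a rough illustration of how the before/after measurements in Table 1 can be turned into the value deltas shown in the last column, the sketch below computes each factor's improvement. The metric names are illustrative; the values are the notional ones from the table:

```python
# Notional before/after measurements for each evaluation factor (values from Table 1).
measurements = {
    "ID range (km)":          {"without": 50,  "with": 80},
    "Accuracy (%)":           {"without": 60,  "with": 80},
    "Operating time (hours)": {"without": 100, "with": 150},
    "Parts cost ($k)":        {"without": 50,  "with": 14},
}

for factor, m in measurements.items():
    delta = m["with"] - m["without"]
    percent = 100 * delta / m["without"]
    print(f"{factor}: {m['without']} -> {m['with']} ({percent:+.0f}%)")
```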


During the planning phase, the Program Manager and user community should collaboratively define the plan to measure value (including factors to consider, metrics to collect, periodicity of the assessments, etc.). During the execution phase, the user will assign a priority and value to each feature in the development backlog. The program can then develop automated value metrics to calculate a value score based on the importance of the feature (priority) and the value assigned to the feature. The automated value score provides a meaningful way to measure progress during software development.
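A minimal sketch of such an automated value score appears below. The weighting scheme (priority 1 is highest, and each feature's user-assessed value is weighted by its inverted priority) is an assumption for illustration, not a formula prescribed by the guide:

```python
# Each tuple: (feature, user-assigned priority 1-5 with 1 highest,
#              user-assessed value 0-10, delivered yet?)
features = [
    ("Extend ID range",  1, 9, True),
    ("Improve accuracy", 2, 7, True),
    ("Power management", 3, 6, False),
]

def value_scores(features):
    """Weight each feature's value by its priority; compare delivered vs. planned."""
    delivered = sum((6 - pri) * val for _, pri, val, done in features if done)
    planned = sum((6 - pri) * val for _, pri, val, _ in features)
    return delivered, planned

delivered, planned = value_scores(features)
print(f"Value delivered: {delivered} of {planned} planned ({100 * delivered / planned:.0f}%)")
```

Tracked over successive releases, the delivered-to-planned ratio gives the decision authority a simple trend line for progress against user-prioritized value.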