Design and Architecture

How to use this site

Each page in this pathway presents a wealth of curated knowledge from acquisition policies, guides, templates, training, reports, websites, case studies, and other resources. It also provides a framework for functional experts and practitioners across DoD to contribute to the collective knowledge base.

Directly quoted material is preceded with a link to the Reference Source.

Minimum Viable Product (MVP) and Minimum Viable Capability Release (MVCR)

Reference Source: Software Acquisition Pathway Interim Policy and Procedures, 3 Jan 2020

An MVP is working software, delivered to the Warfighter/end user, that provides a meaningful first version of the software capability as agreed to by the users. The MVP must be defined early in the execution phase with active user engagement, and may evolve as users interact with the software and user needs are better understood. The delivery of an MVP demonstrates to the users and other acquisition stakeholders that the program is capable of cost-effectively delivering reasonable increments of needed functionality.

An MVCR is the first version of the software that contains sufficient capability to be fielded for operational use. The MVCR contains the minimum set of functions that provide value to the Warfighter/end user and requires the minimum amount of software development effort. MVCRs provide a baseline set of capabilities to validate assumptions and determine if the proposed system will deliver the expected or acceptable business/mission value. The Warfighter/end user shall determine when the MVCR and subsequent software releases are to be delivered operationally and will ensure that each operational release can achieve interoperability certification, if applicable.

The product roadmap is a plan to facilitate the delivery of multiple releases of software over time to reach full capability. It identifies which releases will be evaluated or fielded for operational use and the point at which the software transitions to sustainment. The roadmap can evolve over time to address changing threats and technology: it informs the planned evolution of the solution capabilities and architecture, is updated periodically, communicates the capabilities/feature sets targeted for delivery at discrete times in an iterative fashion, and aligns with Warfighter/end-user needs.

Reference Source: Software Acquisition Pathway Guide v1.0

The MVCR is akin to the Minimum Marketable Product (MMP) or Minimum Marketable Release (MMR) in commercial industry terms. Definition and delivery of the MVP, MVCR, and product roadmap inform a key business decision at the start of the execution phase. Programs must distinguish between an MVP (designed to deliver early functionality to inform the development process) and an MVCR (designed to deliver the minimum set of operational capability that provides enough value to the Warfighter/end user).

Minimum Viable Product

The main purpose of the MVP is to validate the need for the capability and gain user feedback on the new capability. The MVP must be sized as a manageable, demonstrable set of scenario threads through a minimal set of features. The MVP, by definition, should not include all the capabilities identified in the product roadmap. The MVP accelerates the learning process by delivering the minimum functionality necessary to elicit meaningful feedback from users quickly, enabling the product team to learn and iterate continuously. User feedback may also indicate that the product/service is not needed at all, eliminating the need for future iterations.

An MVP is typically defined during project initiation and refined during subsequent planning periods. If external resources (e.g., contractors, system integrators, end users) are part of the team, they take part in the MVP definition process.

Generating an MVP requires that programs put product teams and processes in place and that the teams execute the necessary processes. While the MVP is typically deployed to a production environment, programs may have constraints and/or needs that require a staging environment or other option for MVP deployment. This is acceptable and should be identified at the outset.
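
Where a staging-only MVP deployment is anticipated, recording that choice explicitly helps keep it a deliberate, up-front decision. The sketch below is a minimal, hypothetical illustration of declaring the MVP deployment target and its rationale; the pathway prescribes no such mechanism, and all names are illustrative.

```python
# Hypothetical sketch: declare the MVP deployment target at the outset so a
# constraint that forces staging (rather than production) deployment is a
# recorded decision, not an afterthought. Names and fields are illustrative.
from dataclasses import dataclass
from enum import Enum

class Environment(Enum):
    PRODUCTION = "production"
    STAGING = "staging"

@dataclass(frozen=True)
class MvpDeploymentPlan:
    environment: Environment
    rationale: str  # why this environment was chosen, e.g., accreditation constraints

plan = MvpDeploymentPlan(
    environment=Environment.STAGING,
    rationale="Production network accreditation pending; user feedback gathered on staging.",
)
```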

The Product Owner / Product Manager (collectively referred to as the Product Management team, for larger systems/solutions) and development team(s) need to work together to identify and agree upon the product MVP. The Product Management team is responsible for ensuring that the MVP provides business value, and the development team determines if the scope of the MVP is reasonable to allow delivery within the set period.

Minimum Viable Capability Release

An MVCR is designed to provide minimum capability that a Warfighter/end user can employ operationally. An MVCR has three key attributes:

Minimalistic: An MVCR must contain the minimal set of capabilities required for fielding with acceptable safety, security, and performance. For example, a program cannot field a major software upgrade for an aircraft without an airworthiness designation. MVCR releases of any scope should be minimal (as small as possible) but not to the detriment of the operational mission.

Rapid Validation: The MVP strategy relies on rapid user feedback to shape a product quickly. MVCR capabilities must be evaluated by testable measures of effectiveness. An essential part of the MVCR strategy is to establish early testing and validation that provide actionable feedback for timely updates.

Architecture: MVPs are often built in a software as a service (SaaS) and open source development ecosystem that is inexpensive, well understood, and stable. Where the MVP strategy takes a long-term, stable, and enabling architecture for granted, the MVCR strategy recognizes that architecture must be modular, well defined, and enduring to support the entire software lifecycle.

Product Roadmap

Programs use the product roadmap to communicate when capability is projected to be delivered. A product roadmap provides a rolling calendar-based view of key capabilities/feature sets to be delivered in the near term (10–12 weeks) through the coming 12–18 months for a product/service, and a high-level description of capabilities to be delivered annually. The roadmap is considered a product schedule. It informs the planned evolution of the solution capabilities and should align to the product vision, enable the MVP, and visibly communicate the capabilities/feature sets targeted for delivery and/or deployment at discrete points in time. The capabilities/feature sets identified in the product roadmap provide focus and align the development team(s) with the product management and user teams on those feature sets to be delivered first (representing the features of highest value to the end user community, as assessed by the user and product management teams). The product roadmap is a living document and therefore will change over time, and it should always be an accurate representation of the business product priorities.
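
To make that structure concrete, the sketch below models a roadmap as data under the assumption of a simple in-house representation (the pathway mandates no particular format): near-term feature sets carry target dates, later capability is held at an annual level of abstraction, and priorities can be reordered as the living document evolves. All names are illustrative.

```python
# A minimal, hypothetical roadmap representation: dated feature sets for the
# near term, coarse annual capabilities beyond that, and a reprioritization
# hook because the roadmap is a living document. Not a mandated format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FeatureSet:
    name: str
    target_delivery: date            # discrete point in time communicated to stakeholders
    fielded_for_operational_use: bool = False

@dataclass
class ProductRoadmap:
    product_vision: str
    near_term: list[FeatureSet] = field(default_factory=list)        # next 10-12 weeks in detail
    annual_capabilities: dict[int, list[str]] = field(default_factory=dict)  # year -> capability names

    def reprioritize(self, ordered_names: list[str]) -> None:
        """Reorder near-term feature sets to match current business/mission priorities."""
        order = {name: i for i, name in enumerate(ordered_names)}
        self.near_term.sort(key=lambda fs: order.get(fs.name, len(order)))
```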

The product owner is responsible for defining the product roadmap. The roadmap must be at a high enough level of abstraction to allow teams to elaborate and decompose the high-level capabilities and features into the emerging system design. The product roadmap should not include the lower level tasks.

A product roadmap neither assigns work to specific teams nor dictates explicit design. Instead, it provides a prioritized, value-driven view of work to be performed that aligns to the program vision. It depicts the capabilities or features to be developed over time while avoiding overly prescriptive plans. Because product roadmaps are defined at a high level of abstraction, they allow the teams to collaborate on lower level requirements and design details.

The product roadmap is an input to iteration and increment planning; therefore, the Product Owner must review and edit (if appropriate) the roadmap prior to and in support of these events. The roadmap should include key upcoming external-facing milestones such as legislative events, participation in exercises, or field events that will be a performance priority.

Additionally, the product roadmap provides input to the program user training plan. Users who evaluate early versions of software should be adequately trained to perform the evaluation effectively. For every software release deployed to the field, the receiving military units should receive adequate training that ensures enough proficiency with the new capability to maintain operational effectiveness.

Software Architecture

Reference Source: Software Acquisition Pathway Guide v1.0

The architecture of a software system captures the fundamental structure of how the software is organized. It describes the choices made by the software architect to achieve the quality attributes that are important to produce the software features, utilize infrastructure properly, and support the environment necessary to provide expected overall capability to users. These choices are important, since (among other reasons) a system with a high-quality architecture is usually easier to understand, can be changed to accommodate new functionality more easily, and can be maintained in a cost-effective manner. Almost any architectural decision will involve a trade-off, which requires making sure that the decisions made are appropriate to the context and expected use of a given system. A good example of this is the requirement for an architecture to support a given deployment model. In an Agile context, there is often a tension between the desire to leverage an emergent architecture – that is, allowing the architecture to emerge over time, as detailed user needs are better understood and the capability is implemented iteratively – and the need to have a solid and robust architectural basis that allows changes to be made easily.

Befitting the fundamental nature of the architecture, programs need to maintain focus on appropriately addressing architecture equities throughout the software’s lifecycle in different ways and at different points in the Pathway. In the planning phase, the Program Manager should ensure the creation of a high-level software architecture that will adequately support the continuous engineering of capability, at first to support the sprint and over time to support the total lifetime of the system. Experienced software architects will be aware of tested architectural principles and patterns, such as open systems, modularity, and security patterns, which can be used to structure these initial decisions. However, as described above, there is no straightforward recipe for a “good” architecture, given that choices must be continuously assessed for appropriateness in a specific context. Because of this, when necessary, programs should engage in pathfinding activities during this phase to explore architectural decisions, so that they can begin to understand the tradeoffs associated with various decisions in their specific context. (Note that the focus must be on “understanding” versus “developing” the architecture at this time, since the result of these pathfinding sprints may not be intended to be the first version of the system. If learning occurs that can influence the actual system architecture, these activities provide value.)

The Software Acquisition Pathway itself does not specify a format, notation set, or architectural language to use. Programs should focus on selecting a format and level of detail for any architectural description that meets the needs of users. In the case of legacy systems, the Program Manager should adopt a phased approach to re-architecting for modularity or micro-services, over an appropriate timeframe. While a micro-service approach may be well suited to a DevSecOps infrastructure, it will not be appropriate for all systems. Further, if a system’s architectural quality has not been maintained over time, re-architecting the entire system to adapt to a micro-services approach may be too expensive to execute all at once. However, maintaining a focus on improving modularity over time is a general good practice.
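
One common way to phase such re-architecting, sketched below with hypothetical names, is to place a stable routing facade in front of the legacy code and migrate one capability at a time behind it. This is an illustration of the incremental approach described above, not a prescribed design.

```python
# Hypothetical sketch of phased re-architecting: a routing facade sends
# already-extracted operations to a new service and everything else to the
# legacy code path, so modularity improves release by release rather than
# through one expensive rewrite.
class LegacyMonolith:
    def handle(self, operation: str, payload: dict) -> dict:
        return {"handled_by": "legacy", "operation": operation}

class ExtractedTrackService:
    def handle(self, operation: str, payload: dict) -> dict:
        return {"handled_by": "track-service", "operation": operation}

class RoutingFacade:
    """Callers see one stable interface while capabilities migrate behind it."""
    EXTRACTED = {"create_track", "update_track"}  # grows as capabilities are extracted

    def __init__(self) -> None:
        self.legacy = LegacyMonolith()
        self.track_service = ExtractedTrackService()

    def handle(self, operation: str, payload: dict) -> dict:
        target = self.track_service if operation in self.EXTRACTED else self.legacy
        return target.handle(operation, payload)
```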

By the time the program has entered the execution phase and developed its MVP, it should be able to produce an architectural analysis that demonstrates the architecture will support delivery of capability with appropriate cadence going forward. The MVP is the first major delivery of system capability and will serve as the basis for future work, so it is important to ensure that the program has a well-thought-out approach to its architecture by this time and is implementing software in accordance with the architectural approach. The architecture may change over time as the system evolves and the capabilities needed are better understood. Programs need not have anticipated all of the most appropriate answers to architectural questions at this time, but should have a rationale for defending decisions made to date and have completed enough of an architecture and design to guide and act as an appropriate constraint and support the start of development. In Agile this is referred to as an architectural runway. As the architecture and design emerge the architect must continue to make defensible trade-offs that support quality attributes.

Throughout the execution phase, the Program Manager should have an approach to continuous monitoring of architecture quality in order to promote continuous improvement, which provides evidence that the system architecture meets at least minimum thresholds for architectural characteristics such as modularity and complexity. It is important to note that programs should continuously improve and refactor architecture to manage “technical debt” – instances where short-term solutions were implemented instead of a more robust and long-term approach.

Programs should prefer automated tool scans to ensure that design decisions are made based upon the as-built architecture. Automated tools do not capture all aspects of software development and obviously do not bring human judgment to bear, but they can help programs avoid the worst mistakes and ensure that programs are aware of problematic areas of the software code. It is important to highlight that the monitoring and analysis should focus on the architecture as it is built in the software in order to provide the government with full cognizance of the current state of the system. This is another reason to use automated tools: automation is necessary to generate a representation of the as-built architecture in a timely fashion that permits monitoring at multiple points over time.
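
As one small illustration of such a scan, the sketch below (assuming a pure-Python codebase; thresholds and paths are illustrative) recovers the module-level import graph directly from source. This is one way to derive an as-built picture of coupling that can be monitored against modularity thresholds over time.

```python
# Minimal sketch of an automated "as-built architecture" scan for a Python
# source tree: recover the import graph and flag modules whose fan-out
# exceeds a program-chosen modularity threshold.
import ast
from collections import defaultdict
from pathlib import Path

def import_graph(src_root: Path) -> dict[str, set[str]]:
    """Map each module (by file stem) to the set of modules it imports."""
    graph: dict[str, set[str]] = defaultdict(set)
    for py_file in src_root.rglob("*.py"):
        tree = ast.parse(py_file.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                graph[py_file.stem].update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                graph[py_file.stem].add(node.module)
    return dict(graph)

def flag_high_coupling(graph: dict[str, set[str]], threshold: int = 15) -> list[str]:
    """Flag modules whose dependency count exceeds the chosen threshold."""
    return [module for module, deps in graph.items() if len(deps) > threshold]
```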

Programs should consider including the results of the continuous monitoring as part of their annual value assessments, to demonstrate that short-term capabilities were not delivered at the expense of longer-term targets. The stakeholders who receive the value assessments will likely not be able to determine whether the program made the appropriate architecture decisions. However, using these assessments as an opportunity to review architectural quality gives the program an opportunity to make the case that the architecture is sound, and that the developers have not been pushing out rough-and-ready code to deliver short-term capability at the expense of long-term maintainability.

Interoperability

Reference Source: Software Acquisition Pathway Guide v1.0

IT interoperability includes both the technical exchange of information and the end-to-end operational effectiveness of that exchange of information as required for mission accomplishment. Interoperability extends beyond information exchange: it includes systems, processes, procedures, organizations, and missions over the lifecycle and must be balanced with cybersecurity. (Source: DODI 8330.01). In addition to the DoD definition of IT interoperability, IEEE 610.12 defines software interoperability as the ability of two or more software systems or system components to exchange information or functions and to use the information or functions received. It includes topics such as data mapping, distributed objects, and interface definition languages. (Source: ACM CCS)
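
The "data mapping" aspect of that definition can be illustrated with a small sketch: translating one system's message into the field names and units another system consumes. Both message shapes below are hypothetical stand-ins for formats an Interface Control Document would fix.

```python
# Illustrative data mapping between two hypothetical message formats: the
# producer reports altitude in feet with short field names; the consumer
# expects meters with explicit field names.
def map_track_message(producer_msg: dict) -> dict:
    """Map a producer-format track report to the consumer's expected format."""
    return {
        "track_id": producer_msg["id"],
        "latitude_deg": producer_msg["lat"],
        "longitude_deg": producer_msg["lon"],
        "altitude_m": producer_msg["alt_ft"] * 0.3048,  # consumer expects meters
    }

consumer_msg = map_track_message({"id": "T-0042", "lat": 36.6, "lon": -121.9, "alt_ft": 12000})
```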

Per DODI 8330.01, Interoperability of IT, including National Security Systems, Change 1, 18 Dec 2017, it is DoD policy that (partial list):

  • DoD IT must interoperate, to the maximum extent practicable, with existing and planned systems (including applications) and equipment of joint, combined, and coalition forces, and other U.S. government and non-governmental organizations, as required based on operational context.
  • IT interoperability must be evaluated early and with sufficient frequency throughout a system’s life cycle to capture and assess changes affecting interoperability in a joint, multinational, and interagency environment. Interoperability testing must be comprehensive, cost effective, and completed, and interoperability certification granted, before fielding a new IT capability or upgrading existing IT.
  • IT interoperability includes both the technical exchange of information and the end-to-end operational effectiveness of that exchange of information as required for mission accomplishment. Interoperability testing must replicate the system’s operational network environment to the maximum extent possible.
  • IT must be certified for interoperability or possess an interim certificate to operate (ICTO) or waiver to policy before connection to any DoD network (other than for test purposes).

Programs must integrate interoperability into software needs, strategies, designs, cost estimates, testing, certification, and more. Software is rarely used as a standalone capability; it typically runs on a broadly available platform and interfaces with many other systems. Software can no longer be program centric, but must be designed, developed, and operated as an integrated suite of capabilities.

From an operational perspective, the program must identify users’ interoperability needs. These needs pertain to the other systems the software must interface with, what type of information must be exchanged, and how interfaces feed into the broader mission thread. The captured users’ needs should be system/solution agnostic. A holistic view of the operational mission thread may lead to designs where a new software development enables the core functionality of multiple systems, allowing the retirement of some systems and eliminating the need to design interfaces.

From a technical perspective, the enterprise architecture should outline how the software fits into the broader platform and/or enterprise (current and future state). It should identify common standards, interfaces, and protocols the new software must use or align to. As the Defense Innovation Board highlights in its Software Acquisition and Practices (SWAP) Study: “Standard is better than custom. Standards enable quality, speed, adoption, cost control, sustainability, and interoperability.” DoD software should maximize the use of commercial standards and platforms when applicable to improve quality, interoperability, expandability, reliability, and competition, and to rapidly integrate future capabilities.
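
As a small illustration of “standard is better than custom,” the sketch below publishes an interface payload with the ubiquitous JSON standard (via Python's standard library) rather than a bespoke binary layout, so any interfacing system with a JSON parser can consume it. The payload fields are hypothetical.

```python
# Standard over custom: a self-describing JSON encoding that any
# JSON-capable interfacing system can parse, versus a bespoke binary layout
# that every partner would have to reimplement. Fields are illustrative.
import json

payload = {"msg_type": "STATUS", "unit_id": "A-7", "ready": True}
wire_bytes = json.dumps(payload).encode("utf-8")   # standard, widely supported encoding
received = json.loads(wire_bytes.decode("utf-8"))
assert received == payload
```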

When applicable, the design team should engage with the developers of the interoperating systems to ensure agreement on interoperability standards, timelines, and expectations. If necessary, Interface Control Documents should be developed and signed by key stakeholders. The DevSecOps environment and related Enterprise Services, if implemented correctly, would integrate interoperability considerations to ensure the development team always operates with the broader picture in mind. This includes integrating the latest cybersecurity standards and strategies.

As the software will be iteratively developed and delivered via small, frequent releases, interoperability will likely be achieved iteratively. Initial iterations (e.g., MVCR) may pass some data to one or a small number of priority systems, while subsequent iterations will expand on the information passed and the number of interfacing systems. As other systems evolve and/or cybersecurity threats/risks are identified, the changes may drive interoperability requirements for a future software release. Interoperability, like all software features, will be implemented in priority order and iteratively improved.
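
One way to realize that iterative expansion, sketched below with hypothetical message types, is additive versioned interfaces: the initial release carries only the priority fields, and a later release adds fields with defaults so earlier producers and consumers keep working unchanged. This is an illustration, not a mandated mechanism.

```python
# Hypothetical versioned interface: v1 is the minimal MVCR-era message; v2
# adds fields additively, with defaults, so it remains backward compatible
# with v1 producers.
from dataclasses import dataclass

@dataclass
class StatusReportV1:                      # initial release: minimal data, few interfaces
    unit_id: str
    position: tuple[float, float]          # (latitude_deg, longitude_deg)

@dataclass
class StatusReportV2(StatusReportV1):      # later release: additive and backward compatible
    fuel_percent: float = -1.0             # sentinel default: unknown to v1 producers
    comms_degraded: bool = False
```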

As with most software functions, test and evaluation of software interoperability should be automated to the maximum extent practicable. Interoperability testing covers the full suite of systems in a development and/or test environment. Test results should analyze system and system-of-systems performance and risks. Interoperability test results will help inform decisions as to whether the software is ready to be deployed and will shape the design and functionality of future releases.
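
A minimal automated interface test, sketched below with the standard unittest module, shows the flavor of such automation: produce a message, run it through the hypothetical mapping from the earlier sketch, and assert that the consumer-side fields and units are correct.

```python
# Minimal automated interoperability check using the standard unittest
# module. The mapping function is the same hypothetical one sketched above,
# inlined here so the test is self-contained.
import unittest

def map_track_message(producer_msg: dict) -> dict:
    return {
        "track_id": producer_msg["id"],
        "altitude_m": producer_msg["alt_ft"] * 0.3048,  # consumer expects meters
    }

class TrackMessageInteropTest(unittest.TestCase):
    def test_altitude_converted_to_meters(self):
        consumer_msg = map_track_message({"id": "T-0001", "lat": 0.0, "lon": 0.0, "alt_ft": 1000})
        self.assertEqual(consumer_msg["track_id"], "T-0001")
        self.assertAlmostEqual(consumer_msg["altitude_m"], 304.8)

if __name__ == "__main__":
    unittest.main()
```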

Maximize Use of Prototyping, Experimentation, and Minimum Viable Products

Reference Source: Section 809 Panel

A prototype or MVP in the hands of operators and engineers would accelerate learning and the design of solutions beyond what a team conducting a capabilities-based assessment (CBA) or analysis of alternatives (AoA) could achieve. Portfolios should use the multiple prototyping pathways to the maximum extent before establishing a formal program or follow-on increment, in order to shape scope and requirements. Iterative prototypes and MVPs would improve opportunities to exploit leading technologies and the chances of delivering high-value capabilities to Warfighters.