Architecture and Interoperability
How to use this site
Each page in this pathway presents a wealth of curated knowledge drawn from acquisition policies, guides, templates, training, reports, websites, case studies, and other resources, aggregating official DoD policies, guides, and references in one place. It also provides a framework for functional experts and practitioners across DoD to contribute to the collective knowledge base.
DoD and Service policy is indicated by a BLUE vertical line.
Directly quoted material is preceded with a link to the Reference Source.
Reference Source: DODI 5000.87 Section 3.2.b.(3)
The program should begin developing the software design and architecture, leveraging existing enterprise services as much as possible.
- The program will also consider the development environment, processes, automated tools, designs, architectures, and implications across the broader portfolio, component, and joint force.
- The chosen software development methodology will incorporate continuous testing and evaluation, resiliency, and cybersecurity, with maximum possible automation, as persistent requirements and include a risk-based lifecycle management approach to address software vulnerabilities, supply chain and development environment risk, and intelligence threats throughout the entire lifecycle.
- The program may develop prototypes or initial capabilities to explore possible solutions, architecture options and solicit user and stakeholder feedback.
Reference Source: DODI 5000.87 Section 3.2.d
The Acquisition Strategy includes…Architecture strategies to enable a modular open systems approach that is interoperable with required systems.
Reference Source: DODI 5000.87 Section 3.3.b.(1)
Programs will assemble software architecture, infrastructure, services, pipelines, development and test platforms, and related resources from enterprise services and development contracts. Leveraging existing services from enterprise services and development contracts will be preferred over acquiring new services to the extent consistent with the program acquisition strategy and IP strategy.
Software Architecture Guidance
Reference Source: USD(A&S) Guidance
The architecture of a software system captures the fundamental structure of how the software is organized. It describes the choices made by the software architect to achieve the quality attributes that are important to produce the software features, utilize infrastructure properly, and support the environment necessary to provide the expected overall capability to users. These choices matter because, among other reasons, a system with a high-quality architecture is usually easier to understand, can be changed to accommodate new functionality more easily, and can be maintained cost-effectively. Almost any architectural decision involves a trade-off, so the architect must ensure that each decision is appropriate to the context and expected use of the given system. A good example of this is the requirement for an architecture to support a given deployment model. In an Agile context, there is often a tension between the desire to leverage an emergent architecture – that is, allowing the architecture to emerge over time as detailed user needs are better understood and the capability is implemented iteratively – and the need for a solid and robust architectural basis that allows changes to be made easily.
Befitting the fundamental nature of the architecture, programs need to maintain focus on appropriately addressing architecture equities throughout the software’s lifecycle, in different ways and at different points in the Pathway. In the planning phase, the Program Manager should ensure the creation of a high-level software architecture that will adequately support the continuous engineering of capability, at first to support early sprints and over time to support the total lifetime of the system. Experienced software architects will be aware of tested architectural principles and patterns, such as open systems, modularity, and security patterns, which can be used to structure these initial decisions. However, as described above, there is no straightforward recipe for a “good” architecture, given that choices must be continuously assessed for appropriateness in a specific context. Because of this, when necessary, programs should engage in pathfinding activities during this phase to explore architectural decisions, so that they can begin to understand the trade-offs associated with various decisions in their specific context. (Note that the focus at this time is on “understanding” rather than “developing” the architecture, since the results of these pathfinding sprints may not become the first version of the system; the activities provide value whenever the learning can influence the actual system architecture.)
The Software Acquisition Pathway itself does not specify a format, notation set, or architectural language to use. Programs should focus on selecting a format and level of detail for any architectural description that meets the needs of users. In the case of legacy systems, the Program Manager should adopt a phased approach to re-architecting for modularity or micro-services, over an appropriate timeframe. While a micro-service approach may be well suited to a DevSecOps infrastructure, it will not be appropriate for all systems. Further, if a system’s architectural quality has not been maintained over time, re-architecting the entire system to adapt to a micro-services approach may be too expensive to execute all at once. However, maintaining a focus on improving modularity over time is a general good practice.
By the time the program has entered the execution phase and developed its MVP, it should be able to produce an architectural analysis that demonstrates the architecture will support delivery of capability with appropriate cadence going forward. The MVP is the first major delivery of system capability and will serve as the basis for future work, so it is important to ensure that the program has a well-thought-out approach to its architecture by this time and is implementing software in accordance with the architectural approach. The architecture may change over time as the system evolves and the capabilities needed are better understood. Programs need not have anticipated all of the most appropriate answers to architectural questions at this time, but should have a rationale for defending decisions made to date and have completed enough of an architecture and design to guide and act as an appropriate constraint and support the start of development. In Agile this is referred to as an architectural runway. As the architecture and design emerge the architect must continue to make defensible trade-offs that support quality attributes.
Throughout the execution phase, the Program Manager should have an approach to continuous monitoring of architecture quality in order to promote continuous improvement, which provides evidence that the system architecture meets at least minimum thresholds for architectural characteristics such as modularity and complexity. It is important to note that programs should continuously improve and refactor architecture to manage “technical debt” – instances where short-term solutions were implemented instead of a more robust and long-term approach.
Programs should prefer automated tool scans to ensure that design decisions are made based upon the as-built architecture. Automated tools do not capture all aspects of software development and obviously do not bring human judgment to bear, but they can help programs avoid the worst mistakes and ensure that programs are aware of problematic areas of the software code. It is important to highlight that the monitoring and analysis should focus on the architecture as it is built in the software in order to provide the government with full cognizance of the current state of the system. This is another reason to use automated tools: automation is necessary to generate a representation of the as-built architecture in a timely fashion that permits monitoring at multiple points over time.
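As a concrete illustration of such an automated scan, the sketch below derives an as-built dependency graph from source code and flags modules whose coupling exceeds a threshold. It is a minimal, hypothetical example: the module names, file contents, and coupling threshold are assumptions for illustration, not a prescribed tool or metric.

```python
# Minimal sketch of an automated architecture scan: parse Python sources,
# build the as-built module dependency graph, and flag modules whose fan-out
# (count of internal modules imported) exceeds a coupling threshold.
# Module names, contents, and the threshold are illustrative assumptions.
import ast

COUPLING_THRESHOLD = 3  # assumed limit on internal fan-out per module

def internal_imports(source: str, internal_modules: set[str]) -> set[str]:
    """Return the internal modules imported by a piece of source code."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found |= {alias.name.split(".")[0] for alias in node.names}
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found & internal_modules

def scan(sources: dict[str, str]) -> dict[str, set[str]]:
    """Map each module name to the internal modules it depends on."""
    internal = set(sources)
    return {name: internal_imports(code, internal) - {name}
            for name, code in sources.items()}

if __name__ == "__main__":
    # Stand-in for reading real files from the repository.
    sources = {
        "tracker": "import fusion\nimport comms\nimport ui\nimport geo\n",
        "fusion": "import geo\n",
        "comms": "import json\n",   # external import: not counted
        "geo": "",
        "ui": "import tracker\n",
    }
    for module, deps in sorted(scan(sources).items()):
        flag = "  <-- high coupling" if len(deps) > COUPLING_THRESHOLD else ""
        print(f"{module}: fan-out {len(deps)}{flag}")
```

Run at each pipeline execution, a scan like this yields the repeated, timely measurements of the as-built architecture that manual review cannot provide.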
Programs should consider including the results of the continuous monitoring as part of their annual value assessments, to demonstrate that short-term capabilities were not delivered at the expense of longer-term targets. The stakeholders who receive the value assessments will likely not be able to determine whether the program made the appropriate architecture decisions. However, using these assessments as an opportunity to review architectural quality gives the program an opportunity to make the case that the architecture is sound, and that the developers have not been pushing out rough-and-ready code to deliver short-term capability at the expense of long-term maintainability.
Reference Source: DODI 5000.87 Glossary
Interoperability is the ability of systems, units or forces to provide data, information, materiel, and services to, and accept the same from, other systems, units, or forces and to use the data, information, materiel and services so exchanged to enable them to operate effectively together. Interoperability includes information exchanges, systems, processes, procedures, organizations, and missions over the life cycle and must be balanced with cybersecurity.
Reference Source: USD(A&S) Guidance
IT interoperability includes both the technical exchange of information and the end-to-end operational effectiveness of that exchange of information as required for mission accomplishment. Interoperability extends beyond information exchange: it includes systems, processes, procedures, organizations, and missions over the lifecycle and must be balanced with cybersecurity. (Source: DODI 8330.01). In addition to the DoD definition of IT interoperability, IEEE 610.12 defines software interoperability as the ability of two or more software systems or system components to exchange information or functions and use the information or functions received. It includes topics such as data mapping, distributed objects, and interface definition languages. (Source: ACM CCS)
Per DODI 8330.01, Interoperability of IT, including National Security Systems, Change 1, 18 Dec 2017, it is DoD policy that (partial list):
- DoD IT must interoperate, to the maximum extent practicable, with existing and planned systems (including applications) and equipment of joint, combined, and coalition forces, and other U.S. government and non-governmental organizations, as required based on operational context.
- IT interoperability must be evaluated early and with sufficient frequency throughout a system’s life cycle to capture and assess changes affecting interoperability in a joint, multinational, and interagency environment. Interoperability testing must be comprehensive, cost effective, and completed, and interoperability certification granted, before fielding a new IT capability or upgrading existing IT.
- IT interoperability includes both the technical exchange of information and the end-to-end operational effectiveness of that exchange of information as required for mission accomplishment. Interoperability testing must replicate the system’s operational network environment to the maximum extent possible.
- IT must be certified for interoperability or possess an interim certificate to operate (ICTO) or waiver to policy before connection to any DoD network (other than for test purposes).
Programs must integrate interoperability into software needs, strategies, designs, cost estimates, testing, certification, and more. Software is rarely used as a standalone capability; it typically runs on a broadly available platform and interfaces with many other systems. Software can no longer be program-centric, but must be designed, developed, and operated as an integrated suite of capabilities.
From an operational perspective, the program must identify users’ interoperability needs. These needs pertain to the other systems the software must interface with, what type of information must be exchanged, and how interfaces feed into the broader mission thread(s). The captured users’ needs should be system/solution agnostic. A holistic view of the operational mission thread(s) may lead to designs where a new software development enables the core functionality of multiple systems, allowing the retirement of some systems and eliminating the need to design interfaces.
From a technical perspective, the enterprise architecture should outline how the software fits into the broader platform and/or enterprise (current and future state). It should identify common standards, interfaces, and protocols the new software must use or align to. As the Defense Innovation Board highlights in its Software Acquisition Practices (SWAP) Study: “Standard is better than custom. Standards enable quality, speed, adoption, cost control, sustainability, and interoperability.” DoD software should maximize the use of commercial standards and platforms when applicable to improve quality, interoperability, expandability, reliability, and competition, and to rapidly integrate future capabilities.
When applicable, the design team should engage with the developers of the interoperating systems to ensure agreement on interoperability standards, timelines, and expectations. If necessary, Interface Control Documents should be developed and agreed to by key stakeholders. Digital engineering may offer a capability to expedite (e.g., auto-generate) development and management of Interface Control Documents. The DevSecOps environment and related Enterprise Services, if implemented correctly, would integrate interoperability considerations to ensure the development team always operates with the broader picture in mind. This includes integrating the latest cybersecurity standards and strategies.
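The core content of such an interface agreement can also be captured in machine-readable form so that conformance is checkable inside the pipeline. The sketch below is a deliberately minimal, hypothetical example: the message name, fields, and types stand in for content normally recorded in an Interface Control Document and are assumptions, not any DoD standard.

```python
# Minimal sketch of a machine-readable interface agreement. The field names
# and types below stand in for content normally captured in an Interface
# Control Document; the "track report" shape is an illustrative assumption.
import json

# Agreed interface: field name -> required JSON type
TRACK_REPORT_V1 = {
    "track_id": str,
    "latitude": float,
    "longitude": float,
    "timestamp": str,
}

def violations(message: dict, contract: dict) -> list[str]:
    """List every way a message deviates from the agreed interface."""
    problems = []
    for field, expected in contract.items():
        if field not in message:
            problems.append(f"missing field: {field}")
        elif not isinstance(message[field], expected):
            problems.append(f"{field}: expected {expected.__name__}, "
                            f"got {type(message[field]).__name__}")
    return problems

if __name__ == "__main__":
    incoming = json.loads('{"track_id": "T-042", "latitude": 34.05, '
                          '"longitude": -118.24}')
    print(violations(incoming, TRACK_REPORT_V1))
    # One violation: the timestamp field is missing.
```

In practice a richer schema language (e.g., JSON Schema) would replace the hand-rolled check, but the principle is the same: the agreed interface lives as an artifact both development teams can test against automatically.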
As the software will be iteratively developed and delivered via small, frequent releases, interoperability will likely be achieved iteratively. Initial iterations (e.g., MVCR) may pass some data to one or a small number of priority systems, while subsequent iterations will expand on the information passed and the number of interfacing systems. As other systems evolve and/or cybersecurity threats/risks are identified, the changes may drive interoperability requirements for a future software release. Interoperability, like all software features, will be implemented in priority order and iteratively improved.
As with most software functions, test and evaluation of software interoperability should be automated to the maximum extent practicable. Interoperability testing covers the full suite of systems in a development and/or test environment. Analysis of the test results should address both system-level and system-of-systems performance and risks. Interoperability test results will help inform decisions as to whether the software is ready to be deployed and will shape the design and functionality of future releases.
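An automated interoperability test of this kind can be sketched as follows: one side serializes a message, the other ingests it, and the test asserts that the data is both transferred and usable, which is the end-to-end intent described above. The system names, message fields, and functions are illustrative assumptions, not a real program interface.

```python
# Minimal sketch of an automated interoperability test: a producer publishes
# a status message, a consumer parses and validates it, and the tests assert
# the exchanged data is both transferred and usable end to end.
# Names, fields, and functions are illustrative assumptions.
import json
import unittest

def publish_status(unit: str, ready: bool) -> bytes:
    """Producer side: serialize a status report for transmission."""
    return json.dumps({"unit": unit, "ready": ready}).encode("utf-8")

def ingest_status(payload: bytes) -> dict:
    """Consumer side: decode and validate a received status report."""
    report = json.loads(payload.decode("utf-8"))
    if not isinstance(report.get("ready"), bool):
        raise ValueError("ready flag missing or not boolean")
    return report

class InteroperabilityTest(unittest.TestCase):
    def test_round_trip(self):
        # The exchange succeeds and the consumer can act on the data.
        report = ingest_status(publish_status("Alpha", True))
        self.assertEqual(report["unit"], "Alpha")
        self.assertTrue(report["ready"])

    def test_malformed_message_rejected(self):
        # A message missing the agreed field is caught before use.
        with self.assertRaises(ValueError):
            ingest_status(b'{"unit": "Alpha"}')

if __name__ == "__main__":
    unittest.main(argv=["interop"], exit=False)
```

Tests like these, run automatically on every build against representative interfacing systems or their emulators, provide the recurring evidence of interoperability that DODI 8330.01 requires throughout the lifecycle.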