Software Acquisition

Design and Enterprise Services

How to use this site

Each page in this pathway presents a wealth of curated knowledge from acquisition policies, guides, templates, training, reports, websites, case studies, and other resources. It also provides a framework for functional experts and practitioners across DoD to contribute to the collective knowledge base.

DoD and Service policy is indicated by a BLUE vertical line.

Directly quoted material is preceded with a link to the Reference Source.

Software Architecture

Reference Source: DODI 5000.87 Section 3.2.b.(3)

 

The program should begin developing the software design and architecture, leveraging existing enterprise services as much as possible.

  • The program will also consider the development environment, processes, automated tools, designs, architectures, and implications across the broader portfolio, component, and joint force.
  • The chosen software development methodology will incorporate continuous testing and evaluation, resiliency, and cybersecurity, with maximum possible automation, as persistent requirements and include a risk-based lifecycle management approach to address software vulnerabilities, supply chain and development environment risk, and intelligence threats throughout the entire lifecycle.
  • The program may develop prototypes or initial capabilities to explore possible solutions, architecture options and solicit user and stakeholder feedback.

 

Reference Source: DODI 5000.87 Section 3.2.d

 

The Acquisition Strategy includes…Architecture strategies to enable a modular open systems approach that is interoperable with required systems.

 

Reference Source: DODI 5000.87 Section 3.3.b.(1)

 

Programs will assemble software architecture, infrastructure, services, pipelines, development and test platforms, and related resources from enterprise services and development contracts. Leveraging existing services from enterprise services and development contracts will be preferred over acquiring new services to the extent consistent with the program acquisition strategy and IP strategy.

Software Architecture Guidance

Reference Source: USD(A&S) Guidance

The architecture of a software system captures the fundamental structure of how the software is organized. It describes the choices made by the software architect to achieve the quality attributes that are important to produce the software features, utilize infrastructure properly, and support the environment necessary to provide expected overall capability to users. These choices are important, since (among other reasons) a system with a high-quality architecture is usually easier to understand, can be changed to accommodate new functionality more easily, and can be maintained in a cost-effective manner. Almost any architectural decision will involve a trade-off, which requires making sure that the decisions made are appropriate to the context and expected use of a given system. A good example of this is the requirement for an architecture to support a given deployment model. In an Agile context, there is often a tension between the desire to leverage an emergent architecture – that is, allowing the architecture to emerge over time, as detailed user needs are better understood and the capability is implemented iteratively – and the need to have a solid and robust architectural basis that allows changes to be made easily.

Befitting the fundamental nature of the architecture, programs need to maintain focus on appropriately addressing architecture equities throughout the software’s lifecycle in different ways and at different points in the Pathway. In the planning phase, the Program Manager should ensure the creation of a high-level software architecture that will adequately support the continuous engineering of capability, at first to support early sprints and over time to support the total lifetime of the system. Experienced software architects will be aware of tested architectural principles and patterns, such as open systems, modularity, and security patterns, which can be used to structure these initial decisions. However, as described above, there is no straightforward recipe for a “good” architecture, given that choices must be continuously assessed for appropriateness in a specific context. Because of this, when necessary, programs should engage in pathfinding activities during this phase to explore architectural decisions, so that they can begin to understand the tradeoffs associated with various decisions in their specific context. (Note that the focus must be on “understanding” versus “developing” the architecture at this time, since the result of these pathfinding sprints may not be intended to be the first version of the system. If learning occurs that can influence the actual system architecture, these activities provide value.)

The Software Acquisition Pathway itself does not specify a format, notation set, or architectural language to use. Programs should focus on selecting a format and level of detail for any architectural description that meets the needs of users. In the case of legacy systems, the Program Manager should adopt a phased approach to re-architecting for modularity or micro-services, over an appropriate timeframe. While a micro-service approach may be well suited to a DevSecOps infrastructure, it will not be appropriate for all systems. Further, if a system’s architectural quality has not been maintained over time, re-architecting the entire system to adapt to a micro-services approach may be too expensive to execute all at once. However, maintaining a focus on improving modularity over time is a general good practice.

By the time the program has entered the execution phase and developed its MVP, it should be able to produce an architectural analysis that demonstrates the architecture will support delivery of capability with appropriate cadence going forward. The MVP is the first major delivery of system capability and will serve as the basis for future work, so it is important to ensure that the program has a well-thought-out approach to its architecture by this time and is implementing software in accordance with the architectural approach. The architecture may change over time as the system evolves and the capabilities needed are better understood. Programs need not have anticipated all of the most appropriate answers to architectural questions at this time, but should have a rationale for defending decisions made to date and have completed enough of an architecture and design to guide and act as an appropriate constraint and support the start of development. In Agile this is referred to as an architectural runway. As the architecture and design emerge the architect must continue to make defensible trade-offs that support quality attributes.

Throughout the execution phase, the Program Manager should have an approach to continuous monitoring of architecture quality in order to promote continuous improvement, which provides evidence that the system architecture meets at least minimum thresholds for architectural characteristics such as modularity and complexity. It is important to note that programs should continuously improve and refactor architecture to manage “technical debt” – instances where short-term solutions were implemented instead of a more robust and long-term approach.

Programs should prefer automated tool scans to ensure that design decisions are made based upon the as-built architecture. Automated tools do not capture all aspects of software development and obviously do not bring human judgment to bear, but they can help programs avoid the worst mistakes and ensure that programs are aware of problematic areas of the software code. It is important to highlight that the monitoring and analysis should focus on the architecture as it is built in the software in order to provide the government with full cognizance of the current state of the system. This is another reason to use automated tools: automation is necessary to generate a representation of the as-built architecture in a timely fashion that permits monitoring at multiple points over time.
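
A minimal sketch of the kind of automated scan this guidance describes, assuming the open-source radon package and a Python codebase; the complexity threshold is an illustrative program-defined limit, and a real program would select tools and thresholds appropriate to its own technology stack:

```python
# Sketch: flag functions whose cyclomatic complexity exceeds a program-defined
# threshold, as one input to continuous architecture-quality monitoring.
from pathlib import Path

from radon.complexity import cc_visit  # pip install radon

COMPLEXITY_THRESHOLD = 10  # illustrative threshold, set by the program

def scan_complexity(src_root: str) -> list[str]:
    """Scan all Python files under src_root and report overly complex blocks."""
    findings = []
    for path in Path(src_root).rglob("*.py"):
        code = path.read_text(encoding="utf-8")
        for block in cc_visit(code):
            if block.complexity > COMPLEXITY_THRESHOLD:
                findings.append(f"{path}:{block.name} complexity={block.complexity}")
    return findings

if __name__ == "__main__":
    for finding in scan_complexity("src"):
        print(finding)
```

Run at multiple points over time (e.g., on every build), a scan like this generates the trend data on the as-built system that the annual value assessments can draw on.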

Programs should consider including the results of the continuous monitoring as part of their annual value assessments, to demonstrate that short-term capabilities were not delivered at the expense of longer-term targets. The stakeholders who receive the value assessments will likely not be able to determine whether the program made the appropriate architecture decisions. However, using these assessments as an opportunity to review architectural quality gives the program an opportunity to make the case that the architecture is sound, and that the developers have not been pushing out rough-and-ready code to deliver short-term capability at the expense of long-term maintainability.

Interoperability

Reference Source: DODI 5000.87 Glossary

 

Interoperability is the ability of systems, units or forces to provide data, information, materiel, and services to, and accept the same from, other systems, units, or forces and to use the data, information, materiel and services so exchanged to enable them to operate effectively together. Interoperability includes information exchanges, systems, processes, procedures, organizations, and missions over the life cycle and must be balanced with cybersecurity.

Interoperability Guidance

Reference Source: USD(A&S) Guidance

IT interoperability includes both the technical exchange of information and the end-to-end operational effectiveness of that exchange of information as required for mission accomplishment. Interoperability extends beyond information exchange: it includes systems, processes, procedures, organizations, and missions over the lifecycle and must be balanced with cybersecurity. (Source: DODI 8330.01). In addition to the DoD definition of IT interoperability, IEEE 610.12 considers software interoperability as the ability of two or more software systems or system components to exchange information or functions and use the information or functions received. It includes topics such as data mapping, distributed objects, and interface definition languages. (Source: ACM CCS)

Per DODI 8330.01, Interoperability of IT, including National Security Systems, Change 1, 18 Dec 2017, it is DoD policy that (partial list):

  • DoD IT must interoperate, to the maximum extent practicable, with existing and planned systems (including applications) and equipment of joint, combined, and coalition forces, and other U.S. government and non-governmental organizations, as required based on operational context.
  • IT interoperability must be evaluated early and with sufficient frequency throughout a system’s life cycle to capture and assess changes affecting interoperability in a joint, multinational, and interagency environment. Interoperability testing must be comprehensive, cost effective, and completed, and interoperability certification granted, before fielding a new IT capability or upgrading existing IT.
  • IT interoperability includes both the technical exchange of information and the end-to-end operational effectiveness of that exchange of information as required for mission accomplishment. Interoperability testing must replicate the system’s operational network environment to the maximum extent possible.
  • IT must be certified for interoperability or possess an interim certificate to operate (ICTO) or waiver to policy before connection to any DoD network (other than for test purposes).

Programs must integrate interoperability into software needs, strategies, designs, cost estimates, testing, certification, and more. Software is rarely used as a standalone capability; it typically runs on a broadly available platform and interfaces with many other systems. Software can no longer be program centric, but must be designed, developed, and operated as an integrated suite of capabilities.

From an operational perspective, the program must identify users’ interoperability needs. These needs pertain to the other systems the software must interface with, what type of information must be exchanged, and how interfaces feed into the broader mission thread. The captured users’ needs should be system/solution agnostic. A holistic view of the operational mission thread may lead to designs where a new software development enables the core functionality of multiple systems, allowing the retirement of some systems and eliminating the need to design interfaces.

From a technical perspective, the enterprise architecture should outline how the software fits into the broader platform and/or enterprise (current and future state). It should identify common standards, interfaces, and protocols the new software must use or align to. As the Defense Innovation Board highlights in its Software Acquisition Practices (SWAP) Study: “Standard is better than custom. Standards enable quality, speed, adoption, cost control, sustainability, and interoperability.” DoD software should maximize the use of commercial standards and platforms when applicable to improve quality, interoperability, expandability, reliability, and competition, and to rapidly integrate future capabilities.

When applicable, the design team should engage with the developers of the interoperating systems to ensure agreement on interoperability standards, timelines, and expectations. If necessary, Interface Control Documents should be developed and signed by key stakeholders. The DevSecOps environment and related Enterprise Services, if implemented correctly, would integrate interoperability considerations to ensure the development team always operates with the broader picture in mind. This includes integrating the latest cybersecurity standards and strategies.

As the software will be iteratively developed and delivered via small, frequent releases, interoperability will likely be achieved iteratively. Initial iterations (e.g., MVCR) may pass some data to one or a small number of priority systems, while subsequent iterations will expand on the information passed and the number of interfacing systems. As other systems evolve and/or cybersecurity threats/risks are identified, the changes may drive interoperability requirements for a future software release. Interoperability, like all software features, will be implemented in priority order and iteratively improved.

As with most software functions, test and evaluation of software interoperability should be automated to the maximum extent practicable. Interoperability testing covers the full suite of systems in a development and/or test environment. Test results should analyze system and system-of-systems performance and risks. Interoperability test results will help inform decisions as to whether the software is ready to be deployed and will shape the design and functionality of future releases.
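
One way to automate interface-level checks is to validate every outbound message against the agreed interface schema on each build. The sketch below assumes the jsonschema package; the track-message schema, field names, and producer function are hypothetical stand-ins for a program's real interface agreement:

```python
import jsonschema  # pip install jsonschema

# Hypothetical interface schema agreed with the interoperating system.
TRACK_MESSAGE_SCHEMA = {
    "type": "object",
    "required": ["track_id", "latitude", "longitude", "timestamp"],
    "properties": {
        "track_id": {"type": "string"},
        "latitude": {"type": "number", "minimum": -90, "maximum": 90},
        "longitude": {"type": "number", "minimum": -180, "maximum": 180},
        "timestamp": {"type": "string"},
    },
}

def build_track_message() -> dict:
    """Stand-in for the system's real outbound message producer."""
    return {"track_id": "T-001", "latitude": 34.05, "longitude": -117.18,
            "timestamp": "2024-01-01T00:00:00Z"}

def test_outbound_message_conforms_to_interface():
    # Raises jsonschema.ValidationError if the message violates the contract.
    jsonschema.validate(instance=build_track_message(), schema=TRACK_MESSAGE_SCHEMA)
```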

Enterprise Services and DevSecOps Pipeline

Reference Source: DODI 5000.87 Glossary

 

Enterprise Services are services that have the proper scope to play a productive role in automating business processes in enterprise computing, networking, and data services.  Enterprise services include technical services such as cloud infrastructure, software development pipeline platforms, common containers, virtual machines, monitoring tools, and test automation tools.  Responsibility for these functions is generally above the program manager.

 

Reference Source: DODI 5000.87 Section 1.2.h

 

Leveraging existing enterprise services, if available, is preferred over creating unique software services for individual programs.  These may be procured from the DoD, the DoD components, other government agencies, or commercial providers, and leverage category management solutions and enterprise software agreements.

 

Reference Source: DODI 5000.87 Section 3.3.b.(1)

 

Programs will assemble software architecture, infrastructure, services, pipelines, development and test platforms, and related resources from enterprise services and development contracts.  Leveraging existing services from enterprise services and development contracts will be preferred over acquiring new services to the extent consistent with the program acquisition strategy and IP strategy.

Enterprise Services and DevSecOps Guidance

Reference Source: USD(A&S) Guidance

DevSecOps platforms should ideally be provided as an enterprise service with tailoring options as opposed to having each program independently develop separate instances of DevSecOps pipelines. An enterprise service is an enabling capability that can be leveraged by several software programs. Use of enterprise services provides some key benefits such as optimizing cost of platforms across the enterprise, leveraging skills across organizations, and minimizing complexity between toolsets. Considerations that a program should use in selecting an appropriate DevSecOps platform include:

  • Who (Service or Agency) is providing the DevSecOps platform?
  • Does the use of the proposed DevSecOps platform help the program achieve its desired software development value stream map?
  • What type of agreement (e.g., service level agreement) is in place to govern the use of the DevSecOps solution?
  • Does the DevSecOps capability support collection and reporting of a minimum essential list of metrics?
  • Does the DevSecOps capability support the program’s roadmap?
  • What is the maturity level of the DevSecOps capability (i.e., is this capability being developed, already in use, widely used, etc.)?
  • Does the DevSecOps capability support the delivery model required by the application developer?
  • Are all the required tools and components already integrated into the DevSecOps pipeline?
  • Is the DevSecOps platform fully resourced?
  • Does this capability support the security requirements?
  • Are any key components missing from the DevSecOps service?

Security and Testing

Security and testing should be incorporated throughout the software development and delivery lifecycle. The selected DevSecOps pipeline should provide tools that support and integrate automated testing, security scans, logging, and monitoring.

Source Code Management Tools

Source Code Management (SCM) is an important feature of a DevSecOps pipeline. Tools that support source code management allow developers to check in code, enable automated builds, and support key security activities such as static code checks.

Programs should create a designated Continuous Integration (CI) user account within the SCM tool, whose sole purpose is to interact with the CI server. This enables traceability from an access control perspective between SCM and CI services and avoids the need to enter individual user account credentials into the CI service.
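
A minimal sketch of how a CI job might use such a designated account, with the service token provisioned into the job's environment by the CI platform; the environment variable name and repository URL are hypothetical:

```python
import os
import subprocess

def clone_as_ci_user(repo: str = "scm.example.mil/program/app.git") -> None:
    # The CI platform injects the dedicated CI account's token at runtime;
    # no individual developer credentials are stored in the CI service.
    token = os.environ["CI_USER_TOKEN"]
    subprocess.run(["git", "clone", f"https://ci-user:{token}@{repo}"], check=True)
```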

Programs should have methods and tools to store source code that is delivered by a contractor. The delivery of the source code should be accompanied by the necessary tools required to manage the source code.

The complete software baseline should be interpreted as all configuration artifacts used throughout the development and delivery of software, beginning with the project MVP. This is to include all configuration management criteria such as backlog tasks, user stories, epics, and task and time estimates. Baselines may also be split up into various categories along the SDLC, such as:

  • Functional Baseline: initial specifications established; contract, etc.
  • Allocated Baseline: state of work products after requirements are approved
  • Developmental Baseline: state of work products amid development
  • Product Baseline: contains the releasable contents of the project
  • Operational Baseline: state of deliveries/deployments and upon delivery/deployment of MVP into production

Continuous Integration

CI is one of the key objectives of DevSecOps. A best practice in many contexts is to trigger a build of the application every time code is checked in. The build process should be automated, and while not all tools are equal in this regard, programs should make efforts to automate builds for their given technology set. When crafting a CI build plan, the goal should be to remain as tool-agnostic as possible: relying too heavily on built-in features of specific CI servers can lead to replication issues if the CI tool must be changed. The manual activity required to build a software package from code should be as minimal as possible; any necessary manual processes should be thoroughly documented and then replicated in a CI server.
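
A minimal sketch of a CI-agnostic build entry point, under the assumption of a Python project that uses pytest and the pypa build package; because every step lives in one script, any CI server (or a developer workstation) can reproduce the build identically:

```python
import subprocess
import sys

# Illustrative build steps for a Python project; the CI server only needs to
# invoke this one script, keeping the pipeline independent of any CI tool.
BUILD_STEPS = [
    ["python", "-m", "pip", "install", "-r", "requirements.txt"],
    ["python", "-m", "pytest", "tests/"],   # unit tests gate the build
    ["python", "-m", "build"],              # produce the distributable package
]

def main() -> int:
    for step in BUILD_STEPS:
        result = subprocess.run(step)
        if result.returncode != 0:
            print(f"Build failed at step: {' '.join(step)}", file=sys.stderr)
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(main())
```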

Continuous Delivery / Continuous Deployment

Another key objective of DevSecOps is the ability to deliver a version of the software, once compiled, to an environment where it can be operated, outside the development environment. In commercial environments, the goal may be Continuous Deployment: once a new working version of the software has been compiled and shown to pass the appropriate quality checks it can be deployed directly into the operational environment for users. In the DoD context, it is often appropriate to focus on Continuous Delivery, in which each new working version of the software is delivered to another environment (e.g., an operationally relevant test environment) but not to operational users. This distinction is entirely appropriate to the DoD context. For example, in the case of weapon systems, the Warfighter may not be able to adopt frequent updates to operational systems given the need for similarly updated training and tactics. (Because of this distinction, this guide uses the more general term “Continuous Delivery” throughout.) Programs should ensure that they are making the correct choice concerning where to deliver operational software.

Even though Continuous Deployment may not be appropriate for many of the DoD’s use cases, the goal of automating as much as possible throughout the pipeline remains the same. A DevSecOps pipeline practicing Continuous Delivery should have enough automated testing in place that, even though software will not be deployed directly to production, such deployment would still be feasible given the rigor of the in-place automated testing. A common practice that makes this possible is to use containerization to create testing environments on the fly, run automated tests, and then destroy the environment immediately after the tests have concluded. This can be accomplished as part of the Continuous Delivery or Deployment process. Because this short-lived test environment is created with every build, it provides continuous insight into whether Continuous Deployment could be a possibility, even if Continuous Delivery has been the chosen method of moving artifacts into production.
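
A hedged sketch of that ephemeral-environment pattern using the Docker CLI; the image name and test command are illustrative, and the image is assumed to run a long-lived entrypoint:

```python
import subprocess
import uuid

def run_tests_in_ephemeral_env(image: str = "app-under-test:latest") -> int:
    """Create a per-build container, run the test suite inside it, then destroy it."""
    name = f"test-env-{uuid.uuid4().hex[:8]}"
    subprocess.run(["docker", "run", "-d", "--name", name, image], check=True)
    try:
        result = subprocess.run(
            ["docker", "exec", name, "python", "-m", "pytest", "tests/"])
        return result.returncode
    finally:
        # Tear the environment down with every build, pass or fail.
        subprocess.run(["docker", "rm", "-f", name], check=False)
```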

Additional decisions that programs should consider include:

  • Delivery Model – What delivery model will the program use? Examples include:
    • Model A: Continuous Delivery – automates delivery to persistent environments (either cloud or on-premises)
    • Model B: Cloud-Enabled Delivery – deploys applications automatically after provisioning a new environment from the cloud or data center. The primary difference from Model A is that Model B uses Infrastructure as Code.
    • Model C: Container-Enabled Delivery – deploys applications as a set of containers into one or more hosts that are dynamically created. The primary difference from Model B is that Model C involves full support of modular application architectures (e.g., micro-services).
  • Release Orchestration Tools – What release orchestration tools does the platform provide to support continuous delivery?
  • Container Tools – What container tools will the program use to support continuous deployment?
  • Release Cadence – How often will releases be deployed to various environments in accordance with the delivery model?
  • Release Scope – Will delivery take the form of full or incremental releases?
  • Transition – What are the DevSecOps implementation plan and plan to transition to the pipeline?

Cybersecurity

Reference Source: DODI 5000.87 Section 1.2.i

 

Cybersecurity and program protection will be addressed from program inception throughout the program’s lifecycle in accordance with applicable cybersecurity policies and issuances.  A risk-based management approach will be an integral part of the program’s strategies, processes, designs, infrastructure, development, test, integration, delivery, and operations.  Software assurance, cyber security, test and evaluation are integral parts of this approach to continually assess and measure cybersecurity preparedness and responsiveness, identify and address risks and execute mitigation actions.

 

Reference Source: DODI 5000.87 Section 3.2

 

 

The chosen software development methodology will incorporate continuous testing and evaluation, resiliency, and cybersecurity, with maximum possible automation, as persistent requirements and include a risk-based lifecycle management approach to address software vulnerabilities, supply chain and development environment risk, and intelligence threats throughout the entire lifecycle.

 

Cybersecurity strategies in accordance with the applicable cybersecurity policies and issuances which include recurring assessment of the supply chain, development environment, processes and tools, continuous automated cybersecurity test and operational evaluation to provide a system resilient to offensive cyber operations.

 

Reference Source: DODI 5000.87 Section 3.3.b

 

Subsequent capability releases will be delivered at least annually. Software updates to address cybersecurity vulnerabilities will be released in a timely manner, potentially including out of release cycle as needed, per the program’s risk based lifecycle management approach.

 

Automated cyber testing and continuous monitoring of operational software will be designed and implemented to support a cATO or an accelerated accreditation process to the maximum extent practicable; and will be augmented with additional testing where appropriate in accordance with cybersecurity policies, and in coordination with the assigned authorizing official.  All safety critical software standards and guidance apply for programs using the software acquisition pathway.  Programs will implement recurring cybersecurity assessments of the development environment, processes and tools.

 

Cybersecurity and software assurance will be integral to strategies, designs, development environment, processes, supply chain, architectures, enterprise services, tests, and operations.  Continuous and automated cybersecurity and cyber threat testing will identify vulnerabilities to help ensure software resilience throughout the lifecycle.  PMs will work with stakeholders to provide sufficient controls to enable a cATO where appropriate.  Ensuring software security includes:

  • Secure development (e.g., development environment, vetted personnel, coding, test, identity and access management, and supply chain risk management).
  • Cybersecurity and assurance capabilities (e.g., software updates and patching, encryption, runtime monitoring, and logging).
  • Secure lifecycle management (e.g., vulnerability management, rigorous and persistent cybersecurity testing, and configuration control).

 

Each program will develop and track a set of metrics to assess and manage the performance, progress, speed, cybersecurity, and quality of the software development, its development teams, and ability to meet users’ needs.

Cybersecurity Guidance

Reference Source: USD(A&S) Guidance

Software security is a fundamental consideration and extends beyond the Approval to Operate (ATO). Resources must be committed to building secure software from the beginning of software development throughout the whole lifecycle. The software program should protect the software and associated critical data from vulnerabilities, internal and external threats, and critical errors that can affect performance or expose sensitive data. Software security spans three broad areas:

  • Secure Development: includes secure coding practices, testing and verification, supply chain considerations, tool chain configuration, and identity/access management.
  • Secure Capabilities: includes identity management/authentication, secure software patching/updates, encryption, authorization/access controls, logging, and error/exception handling.
  • Secure Lifecycle: includes vulnerability management, configuration, vulnerability notification/patching, and end-of-life considerations.

Guiding Principles

While the Risk Management Framework (RMF) and the National Institute of Standards and Technology (NIST) 800-53 controls form a starting point, programs should adopt a framework to aid in analyzing cybersecurity practices throughout the lifecycle and identifying opportunities for continued improvement, such as the BSA (the Software Alliance) Framework for Secure Software.

Secure Development
Secure Coding

Secure coding is based on sound, recognizable, and enforceable coding standards and uses standard data formats. Additionally, the software should be secured against known vulnerabilities, unsafe functions, and unsafe libraries. Software should validate input and output to mitigate potential vulnerabilities. The software architecture and design must include software assurance measures. Programs should employ segmentation practices through sandboxing, containerization, or similar methodologies. Software should also implement fault isolation mechanisms.
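
As a small illustration of the input validation described above, the sketch below rejects rather than repairs malformed input at a trust boundary; the field names, pattern, and range limits are illustrative:

```python
import re

# Illustrative whitelist pattern for a structured text field.
CALLSIGN_PATTERN = re.compile(r"^[A-Z0-9]{2,12}$")

def parse_altitude(raw: str) -> int:
    """Validate that altitude input is an integer within illustrative limits."""
    value = int(raw)  # raises ValueError on non-numeric input
    if not 0 <= value <= 60_000:
        raise ValueError(f"altitude out of range: {value}")
    return value

def parse_callsign(raw: str) -> str:
    """Reject input containing anything outside the expected character set."""
    if not CALLSIGN_PATTERN.fullmatch(raw):
        raise ValueError("callsign contains unexpected characters")
    return raw
```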

Testing and Verification

Using threat model(s) and risk analysis, software developers should identify and map the software attack surface. Based on threat-informed risk, testing should occur at multiple stages in the software development process. Programs should develop a test plan that evaluates the security of the software in conjunction with the functionality. At the basic level, developers should perform code reviews, ideally via automation. Software should be subjected to adversarial testing techniques, including penetration testing.

Supply Chain

Rarely is software developed in isolation by a single organization. Instead, modern software is ‘assembled’ and relies on third-party software components that are outside of the immediate control of the software program. Programs should make efforts to ensure the visibility, traceability, and security of third-party components, including open source components. Using manual and automated technologies, programs should document all software components and trace their lineage and dependencies. Additionally, third-party software security policies should be incorporated into contracts, policies, and standards for vendors providing software components to software programs.
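
As a minimal, standard-library illustration of documenting components, the sketch below inventories every installed Python package and its version; a fielded program would feed a dedicated software bill of materials (SBOM) tool, but the traceability principle is the same:

```python
import json
from importlib.metadata import distributions

def inventory_components() -> list[dict]:
    """Enumerate every third-party package in the current environment."""
    return sorted(
        ({"name": d.metadata["Name"], "version": d.version} for d in distributions()),
        key=lambda c: (c["name"] or "").lower(),
    )

if __name__ == "__main__":
    # Emit the inventory as JSON for archiving with the build record.
    print(json.dumps(inventory_components(), indent=2))
```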

Tool Chain

The software development environment should use up-to-date versions of all tools and platform elements. Compilers should be configured to address security vulnerabilities and prohibit unintentional removal or modification of security-critical code. Containers and other virtualization tools should use secure configurations.

Identity and Access Management

The software development environment should implement strong authentication methods for robust access control. User and operator credentials should be stored securely and revoked/disabled when no longer needed. Programs should develop and implement a policy for granting access control according to specific roles. Any changes or deletions to code in the development environment should be logged and flagged if unauthorized.

Secure Capabilities
Identity Management and Authentication

The software architecture should address weaknesses that would create risk of authentication failure. This includes avoiding the use of hard-coded passwords, implementing authentication mechanisms that avoid common security weaknesses, and properly storing authentication information (credentials and keys).
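
A minimal sketch of proper credential storage using only Python's standard library: the password itself is never persisted, only a salt and a key-derivation digest. The iteration count is an illustrative work factor:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative PBKDF2 work factor

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted digest; store (salt, digest), never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)
```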

Patchability

Software should be able to incorporate secure updates and security patches, notify users of patches and updates, and revert to last ‘known good’ states in case of failed installation of a patch or update.

Encryption

Developers should employ strong encryption throughout the software to protect sensitive data from unauthorized disclosure. Encryption also protects the software itself from tampering. Strong encryption entails the use of authenticated encryption and strong algorithms and key lengths. Encryption keys should be stored securely and managed separately from encrypted data. Developers should implement mechanisms to manage keys and certificates and validate the certificates.
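
A hedged sketch of authenticated encryption, assuming the third-party cryptography package (AES-256-GCM); in practice the key would be held in a key management service, separate from the encrypted data:

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=256)  # store in a KMS, never beside the data
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # must be unique per encryption operation
ciphertext = aesgcm.encrypt(nonce, b"sensitive payload", b"context-aad")

# Decryption authenticates as well as decrypts: any tampering with the
# ciphertext or associated data raises an exception instead of returning data.
plaintext = aesgcm.decrypt(nonce, ciphertext, b"context-aad")
assert plaintext == b"sensitive payload"
```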

Authorization and Access Controls

The software architecture should support authorization and access controls based on the principle of least privilege. In the case of authorization failure, the software should not grant access to unauthorized or unauthenticated users.
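
A minimal sketch of deny-by-default, role-based authorization consistent with least privilege; the roles, permissions, and user assignments are illustrative:

```python
# Illustrative role-to-permission mapping; each role gets only what it needs.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "operator": {"read", "execute"},
    "admin": {"read", "execute", "configure"},
}

USER_ROLES = {"alice": "operator", "bob": "viewer"}  # illustrative assignments

def is_authorized(user: str, permission: str) -> bool:
    """Unknown users and missing permissions are denied by default."""
    role = USER_ROLES.get(user)
    allowed = ROLE_PERMISSIONS.get(role, set())
    return permission in allowed

assert is_authorized("alice", "execute")
assert not is_authorized("bob", "configure")     # insufficient role
assert not is_authorized("mallory", "read")      # unknown user -> deny
```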

Logging

The software should log critical security events and incidents and should be able to distinguish between monitoring logs and auditing logs. Logged events should include identifying information and associated timestamps but should avoid capturing sensitive information such as system details, passwords, or session identifiers. Access to logs should be restricted to authorized users.
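
A small sketch of security-event logging with Python's standard logging module: events carry timestamps and identifying information, while a filter redacts sensitive values before they reach the log. The redaction pattern and log sink are illustrative:

```python
import logging
import re

# Illustrative pattern for sensitive key=value pairs to redact.
SENSITIVE = re.compile(r"(password|session_id)=\S+", re.IGNORECASE)

class RedactingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SENSITIVE.sub(r"\1=[REDACTED]", str(record.msg))
        return True  # keep the (now redacted) record

audit_log = logging.getLogger("security.audit")
handler = logging.StreamHandler()  # a real system would use a protected sink
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(message)s"))
audit_log.addHandler(handler)
audit_log.addFilter(RedactingFilter())
audit_log.setLevel(logging.INFO)

audit_log.info("login_failure user=jdoe source=10.1.2.3 password=hunter2")
# Logged as: ... login_failure user=jdoe source=10.1.2.3 password=[REDACTED]
```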

Secure Lifecycle
Vulnerability Management

Vulnerability management relies on the development, implementation, and maintenance of a vulnerability management plan that includes identification of a vulnerability when it occurs, verification of the vulnerability, remediation and mitigation, release of a solution, and any post-release activities.

Configuration

Software deployment should include configurations and configuration guidance that facilitate secure installation and operation. The documentation that specifies configuration parameters should be as restrictive as possible so as not to expose software to attacks and exploits.

Special Considerations – Continuous Approval to Operate (cATO)

The key benefit of cATO is to promote speed in delivery. The cATO shifts the focus of an ATO from the software product to the software factory by which that product is developed. The cATO process examines the tools, technologies, and processes that produce the software. It is important to note that a cATO does not represent an entirely new approach to risk. Rather, it moves the checks into automation so that risks can be identified and dealt with continuously throughout the acquisition process. Programs may find the Kessel Run (KR) “Continuous Authorization Risk Management Playbook” to be a useful reference. This document represents reflection from one of the best sources of DoD experience to date in achieving an effective cATO. It gives programs an overview of necessary considerations for achieving a pipeline that can be given a continuous authorization, based on activities in five areas, which overlap with other areas of this guidance:

  • Continuous deployment
  • Architecture
  • Product and process
  • Lean management and monitoring
  • Organizational culture.

The decision whether to grant a cATO falls to the cognizant Authorizing Official (AO), and is made based on the acceptance of risk for a given system in a given environment. As a result, program teams should work closely with their AO to tailor these guidelines appropriately. The program’s DevSecOps software factory instance must implement the controls defined in NIST SP 800-53 as a minimum baseline. AOs should consider continuous authorization only with stipulations attached that establish guideposts indicating when the authorization must be re-established. Although the final determination will reflect the risk assessment made in the specific context, examples can help AOs think through the relevant factors. For instance, AOs should consider establishing some performance measures or criteria for the following indicators of the risk of continued operation:

  • Iterative penetration testing
  • Involvement of qualified security personnel, as mutually agreed by the program and the AO
  • Real-time access by designated cybersecurity personnel to results of testing, scanning, monitoring, and performance metrics
  • Critical vulnerabilities mitigated within 24 hours of discovery
  • Moderate vulnerabilities mitigated within a timeframe acceptable to the AO
  • Disciplined compliance with appropriate processes / procedures
  • Appropriate steps taken to ensure security of the development environment (including physical, information, and operational security measures)
  • Team members who are up to speed on software engineering and cybersecurity best practices.

AOs should also rely on periodic, independent outside assessments of the security pipeline.

Integrated Security Practices Using Automated Testing

Security automation takes place at multiple phases throughout the DevSecOps pipeline. Since the software pushed through the pipeline and the pipeline itself are both living entities that mature over time, security must be treated as a state: a software application is only as secure as the most recent security tests/assessments show it to be. With every new deployment of code, the attack surface of a given software application grows, and the code must be reassessed appropriately. This concept applies to the application code itself, to third-party dependencies/libraries, and to the dependencies used by the underlying infrastructure. The security of all three categories must be assessed in an automated fashion as part of the DevSecOps pipeline using automated static code analysis, automated penetration testing, and automated container vulnerability scanning.

In an ideal DevSecOps pipeline, three major forms of automated testing should occur: automated security testing; automated functional, unit, and integration testing; and automated application performance testing. The first two forms take place during the CI (automated build) phase of the DevSecOps pipeline. The last takes place in a production-like (staging/testing) environment after an initial deployment, and will ultimately only occur if the software has met the quality criteria for the first two forms. To take full advantage of the power of automated testing, programs must determine the quality criteria for testing pass/fail as early in the software development lifecycle (SDLC) as possible. The specific criteria will vary on a project-by-project basis, and establishing them requires collaboration among development, security, and operations teams, as well as the designated project AO, for automated testing to be as effective as possible.
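
A minimal sketch of such a pass/fail quality gate evaluated on every build; the metric names and thresholds are illustrative placeholders for criteria the program's teams and AO would agree on:

```python
# Illustrative gate criteria agreed up front by dev, security, and ops teams.
QUALITY_GATE = {
    "critical_vulnerabilities_max": 0,
    "high_vulnerabilities_max": 0,
    "unit_test_pass_rate_min": 1.0,
    "line_coverage_min": 0.80,
}

def gate_passes(results: dict) -> bool:
    """Return True only if every agreed threshold is satisfied."""
    return (
        results["critical_vulnerabilities"] <= QUALITY_GATE["critical_vulnerabilities_max"]
        and results["high_vulnerabilities"] <= QUALITY_GATE["high_vulnerabilities_max"]
        and results["unit_test_pass_rate"] >= QUALITY_GATE["unit_test_pass_rate_min"]
        and results["line_coverage"] >= QUALITY_GATE["line_coverage_min"]
    )

build_results = {"critical_vulnerabilities": 0, "high_vulnerabilities": 1,
                 "unit_test_pass_rate": 1.0, "line_coverage": 0.85}
assert not gate_passes(build_results)  # one high-severity finding fails the build
```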

Automated Security Testing

Automated security testing can be broken down into three subcategories: automated static code analysis, automated penetration testing, and automated Docker image vulnerability scanning. Development teams must always perform testing in at least the first category; conducting an automated penetration test or Docker image vulnerability scan as part of the CI process may not be necessary for software projects that are not developing web applications or using containers.

Information security teams must have the proper personnel (preferably with a software engineering or computer science background) to review the reports generated by static code analysis tools. This allows security engineers and developers to have the conversations necessary to mitigate any vulnerabilities discovered by the tools. It is imperative that the correct information security personnel be delegated the responsibility of reviewing these reports; for example, an information security professional whose primary skillset is network/perimeter security should not be tasked with reviewing a static code analysis report. Assigning the evaluation of static code analysis reports to individuals who have taken coding classes but never practiced the discipline as a profession creates additional risk and could complicate the iterative feedback conversations between security and development teams.

Automated static code analysis is readily available via third-party plugins for standard CI services. Static code analysis can be used for web applications as well as embedded systems software, and implementing it as early as possible in the SDLC enables developers to mitigate vulnerabilities as they occur in the initial development of a project’s MVP. However, implementation of automated static code analysis cannot be considered “successful” merely because it has been integrated into the CI process.

Automated penetration testing of web applications also takes place during the CI phase of the pipeline. Tool suites that provide penetration testing should be strategically placed throughout the CI build plan to provide continuous automated reports that can be assessed by information security teams. Lastly, if applicable, automated scanning of container images is more important than scanning the application code itself. Automated vulnerability scanning of images requires security and operations Information Technology (IT) teams to work in unison throughout the composition and sustainment of custom containers, and teams must maintain vigilant and constant security of Infrastructure as Code scripts to prevent exploitable infrastructure from being deployed into production.
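
As one concrete illustration of static analysis wired into the CI build, the sketch below assumes the open-source Bandit scanner for Python code and an illustrative fail-on-high-severity policy; other technology stacks would substitute their own analyzers:

```python
import json
import subprocess
import sys

def run_static_analysis(src: str = "src") -> int:
    """Run Bandit over the source tree and fail the build on high-severity findings."""
    proc = subprocess.run(
        ["bandit", "-r", src, "-f", "json"], capture_output=True, text=True)
    report = json.loads(proc.stdout)
    high = [r for r in report.get("results", [])
            if r.get("issue_severity") == "HIGH"]
    for finding in high:
        # The full JSON report is what the designated security personnel review.
        print(f"{finding['filename']}:{finding['line_number']} {finding['issue_text']}")
    return 1 if high else 0  # nonzero exit fails the CI build

if __name__ == "__main__":
    sys.exit(run_static_analysis())
```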

Automated Functional, Unit, and Performance Testing

Automated functional and unit testing is the second form of testing incorporated in the CI process. Once an initial form of automated security testing has been integrated into a DevSecOps pipeline, developers should implement automated functional testing. It is critical to ensure that an appropriate logging mechanism is in place to capture test results; a logging solution is crucial for pinpointing exactly where test automation failed. For web applications, a web browser automation tool is necessary to simulate the functionality that users exercise in a browser. In combination with test automation frameworks, automated functional/unit testing can be integrated into the CI process and can use predetermined quality criteria as pass/fail thresholds for CI builds.

The amount of testing that can be automated will vary among projects, and the forms of automated testing that occur will ultimately depend on the type of software pushed through the pipeline. Automated functional testing of web applications is relatively feasible in comparison to testing of embedded systems software due to the existence of extensive testing frameworks. For example, web applications undergo a tremendous amount of automated front-end/graphical user interface (GUI) testing, while embedded systems software does not. Automated integration testing, however, must take place regardless of the type of software that is built and deployed.

Finally, automated application performance testing takes place outside the CI phase of the DevSecOps pipeline; this type of rigorous testing may be the only way to detect certain performance-related bugs and issues. Application performance testing occurs after an initial deployment to a staging/test/production-like environment, which will expose issues in non-functional requirements that could not be replicated in a development environment. It is important to conduct application performance testing as soon as possible and with every deployment (e.g., on the MVP once it has initially been deployed to staging), so that performance/architectural defects can be detected and addressed in the earliest sprints possible. Availability issues discovered by application performance testing are themselves security issues, since they violate the availability leg of the confidentiality, integrity, and availability (CIA) triad.
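
A hedged sketch of browser-based functional testing, assuming the Selenium package; the URL, element IDs, and expected title are hypothetical stand-ins for the application under test:

```python
from selenium import webdriver  # pip install selenium
from selenium.webdriver.common.by import By

def test_login_page_functional_flow():
    """Simulate the login flow a user would exercise in a browser."""
    driver = webdriver.Chrome()  # CI would typically run a headless browser
    try:
        driver.get("https://app.example.mil/login")          # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("test-user")
        driver.find_element(By.ID, "password").send_keys("test-credential")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title                   # expected post-login state
    finally:
        driver.quit()
```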

Safety Critical Software

Continuous deployment of safety critical software must be approached much differently than normal commercial CI/CD practices. All safety critical software standards and guidance, such as MIL-STD-882E (DoD Standard Practice: System Safety), DO-178C (Software Considerations in Airborne Systems and Equipment Certification), the Joint Software Systems Safety Engineering Handbook (JSSSEH), and AOP-52 (Guidance on Software Safety Design and Assessment of Munition-Related Computing Systems), apply when executing this software pathway.

Reference: Authorizing Official for Cyberspace Innovation, SAF/CIO A6. “Continuous ATO Playbook: Constructing a Secure Software Factory to Achieve Ongoing Authority to Operate.” (undated)