
Test and Infrastructure

Test Strategy

Reference Source: Software Acquisition Pathway Interim Policy and Procedures, 3 Jan 2020

 

The test strategy defines the process by which capabilities, user features, user stories, use cases, etc., will be tested and evaluated to satisfy DT criteria and demonstrate operational effectiveness for OT to the maximum extent possible. The strategy shall:

  • Include system-level performance requirements, non-functional requirements, and the metrics that will be used to verify that the system will meet user needs.
  • Identify key independent test organizations and their roles and responsibilities, and establish agreements on how they will be integrated early into the development activities and throughout the software lifecycle.
  • Identify the tools and resources necessary to assist in data collection and transparency to support DT and OT.
  • For embedded software, include a risk assessment of any safety critical implications as well as a companion mitigation strategy, and a strategy, including resources, to describe testing and evaluation as the software transitions from the development environment to the test environment to the operational environment.
  • Assess the findings and recommendations of the PM's Software System Safety assessment accomplished in accordance with MIL-STD-882E, "Standard Practice for System Safety".

 

Automated testing should be used at the unit level, for Application Programming Interface (API) and integration tests, and to the maximum extent possible for user acceptance and to evaluate mission effectiveness. Automated testing tools and automated security tools should be accredited by an Operational Test Authority as “fit for purpose”. Continuous runtime monitoring of operational software will provide additional data collection opportunities to support test and continuous OT.
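As a minimal illustration of unit- and API-level test automation, the Python sketch below uses pytest; the bearing() function, BASE_URL, and /health endpoint are hypothetical stand-ins for a program's actual code and interfaces.

    # test_example.py -- minimal sketch of automated unit and API tests.
    # The bearing() function, BASE_URL, and /health endpoint are
    # hypothetical; substitute the program's real code and interfaces.
    import math

    import pytest
    import requests

    BASE_URL = "http://localhost:8080"  # assumed test endpoint


    def bearing(dx: float, dy: float) -> float:
        """Example unit under test: compass bearing in degrees."""
        return math.degrees(math.atan2(dx, dy)) % 360


    def test_bearing_unit():
        # Unit-level check: exercises one function in isolation.
        assert bearing(1, 0) == pytest.approx(90.0)
        assert bearing(0, -1) == pytest.approx(180.0)


    def test_health_api():
        # API-level check: verifies a deployed service contract.
        resp = requests.get(f"{BASE_URL}/health", timeout=5)
        assert resp.status_code == 200
        assert resp.json().get("status") == "ok"

Tests like these can run on every CI build, with their pass/fail results feeding the quality criteria discussed below.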

Reference Source: Software Acquisition Pathway Guide v1.0

Security automation takes place at multiple phases throughout the DevSecOps pipeline. Since the software pushed through the pipeline and the pipeline itself are both living entities that mature over time, security must be treated as a state. For example, a software application is only as secure as the most recent security tests/assessments show it is. With every new deployment of code, the attack surface of a given software application grows and the code must be reassessed appropriately. This concept applies to the application code itself, third-party dependencies/libraries, and the dependencies used by underlying infrastructure. The security of all three categories must be assessed in an automated fashion as part of the DevSecOps pipeline using automated static code analysis, automated penetration testing, and automated container vulnerability scanning.

In a perfect DevSecOps pipeline, three major forms of automated testing should occur: automated security testing; automated functional, unit, and integration testing; and automated application performance testing. The first two forms of testing take place during the CI (automated build) phase of the DevSecOps pipeline. The last form takes place in a production-like (staging/testing) environment after an initial deployment and will ultimately only occur if the software has met the quality criteria for the first two forms of testing.

To take full advantage of the power of automated testing, programs must determine the quality criteria for testing pass/fail as early in the software development lifecycle (SDLC) as possible. The specific criteria will vary on a project-by-project basis, and establishing the criteria requires collaboration among development, security, and operations teams, as well as the designated project AO, for automated testing to be as effective as possible.

Automated Security Testing

Automated security testing can be broken down into three subcategories: automated static code analysis, automated penetration testing, and automated Docker image vulnerability scanning. Development teams must always perform testing in at least the first category. Conducting an automated penetration test or Docker image vulnerability scan as part of the CI process may not be necessary for software projects that are not developing web applications or using containers. Information security teams must have the proper personnel (preferably with a software engineering or computer science background) to review the reports generated by static code analysis tools. This will allow security engineers and developers to have the conversations necessary to mitigate any vulnerabilities discovered by the tools.

Automated static code analysis is readily available via third-party plugins for standard CI services. Static code analysis can be used for web applications as well as embedded systems software, and implementing it as early as possible in the SDLC enables developers to mitigate vulnerabilities as they arise during initial development of a project's MVP. However, automated static code analysis cannot be considered successfully implemented merely because it has been integrated into the CI process. It is imperative that the correct information security personnel be delegated the responsibility of reviewing the reports generated by static code analysis tools; for example, an information security professional whose primary skillset is network/perimeter security should not be tasked with reviewing a static code analysis report. Assigning the evaluation of static code analysis reports to individuals who have "taken coding classes but have never practiced the discipline as a profession" creates additional risk and could complicate the iterative feedback conversations between security and development teams.
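For illustration only, the sketch below shows one way to wire a static analysis tool into a CI step and gate the build on its findings; Bandit is used as an example Python analyzer, and the source path and severity threshold are assumptions to adapt per program.

    # ci_static_analysis.py -- sketch of gating a CI build on static
    # analysis findings. Bandit is one example analyzer; the source
    # path and severity gate are assumptions to tailor per program.
    import json
    import subprocess
    import sys

    result = subprocess.run(
        ["bandit", "-r", "src/", "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout or "{}")
    high = [i for i in report.get("results", [])
            if i.get("issue_severity") == "HIGH"]

    # Fail the CI build if any high-severity finding is present; the
    # full report still goes to the security team for review.
    print(f"{len(high)} high-severity finding(s)")
    sys.exit(1 if high else 0)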

Automated penetration testing of web applications also takes place during the CI phase of the pipeline. Tool suites that provide penetration testing should be strategically placed throughout the CI build plan to provide continuous automated reports that can be assessed by information security teams.
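As one hedged example of scripting such a scan, the sketch below invokes OWASP ZAP's containerized baseline scan from a CI step; the target URL and container image tag are assumptions, and the exact options should be confirmed against current ZAP documentation.

    # ci_pentest.py -- sketch of launching an automated penetration
    # scan from a CI step using OWASP ZAP's baseline scan. The target
    # URL and image tag are assumptions.
    import subprocess
    import sys

    TARGET = "https://staging.example.mil"  # assumed test environment

    # zap-baseline.py spiders the target and reports passive-scan
    # alerts; a nonzero exit code signals findings to the CI server.
    proc = subprocess.run([
        "docker", "run", "--rm", "-t",
        "ghcr.io/zaproxy/zaproxy:stable",
        "zap-baseline.py", "-t", TARGET,
    ])
    sys.exit(proc.returncode)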

Lastly, if applicable, automated scanning of container images is more important than scanning the application code itself. Automated vulnerability scanning of images requires security and operations Information Technology (IT) teams to work in unison throughout the composition and sustainment of custom containers. Teams must maintain constant vigilance over the security of Infrastructure as Code scripts to prevent exploitable infrastructure from being deployed into production.
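The sketch below illustrates gating a build on a container image scan, using Trivy as an example scanner; the image name and severity threshold are assumptions to tailor to program policy.

    # ci_image_scan.py -- sketch of gating a CI build on a container
    # image vulnerability scan. Trivy is one example scanner; the
    # image name and severity threshold are assumptions.
    import subprocess
    import sys

    IMAGE = "registry.example.mil/myapp:candidate"  # assumed image tag

    # --exit-code 1 makes trivy fail the step when findings at or
    # above the listed severities are present.
    proc = subprocess.run([
        "trivy", "image",
        "--severity", "HIGH,CRITICAL",
        "--exit-code", "1",
        IMAGE,
    ])
    sys.exit(proc.returncode)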

Automated Functional, Unit, and Performance Testing

Automated functional and unit testing is the second form of testing incorporated in the CI process. Once an initial form of automated security testing has been integrated into a DevSecOps pipeline, developers should implement automated functional testing. It is critical to ensure that an appropriate logging mechanism is in place to capture test results; a logging solution is crucial for pinpointing exactly where a test automation failure occurred. For web applications, a web browser automation tool is necessary to simulate the functionality that users exercise in a browser. In combination with test automation frameworks, automated functional/unit testing can also be integrated into the CI process and can use predetermined quality criteria as pass/fail thresholds for CI builds.
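For illustration, the sketch below drives a browser with Selenium WebDriver to simulate a user interaction; the URL, element IDs, and page title are hypothetical.

    # test_login_page.py -- sketch of a browser-automation functional
    # test with Selenium WebDriver. The URL, element IDs, and expected
    # title are hypothetical placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    BASE_URL = "http://localhost:8080"  # assumed staging endpoint


    def test_login_form_renders():
        driver = webdriver.Chrome()  # requires a local Chrome install
        try:
            driver.get(f"{BASE_URL}/login")
            # Simulate what a user exercises in the browser.
            driver.find_element(By.ID, "username").send_keys("test-user")
            driver.find_element(By.ID, "submit").click()
            assert "Dashboard" in driver.title
        finally:
            driver.quit()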

The amount of testing that can be automated will vary among projects. The forms of automated testing that will occur will ultimately depend on the type of software pushed through the pipeline. Automated functional testing of web applications is relatively feasible in comparison to testing of embedded systems software due to the existence of extensive testing frameworks. For example, web applications undergo a tremendous amount of automated front-end/graphical user interface (GUI) testing, while embedded systems software does not. Automated integration testing, however, must take place regardless of the type of software that is built and deployed.

Finally, automated application performance testing takes place outside the CI phase of the DevSecOps pipeline. Certain performance-related bugs and issues can be detected only by this type of rigorous testing. Application performance testing occurs after an initial deployment to a staging/test/production-like environment and will expose issues in non-functional requirements that could not be replicated in a development environment. It is important to conduct application performance testing as soon as possible and with every deployment (e.g., on the MVP once it has initially been deployed to staging) so that performance/architectural defects can be detected and addressed in the earliest sprints possible. Availability issues discovered by application performance testing pose security issues in themselves, as a violation of the availability leg of the confidentiality, integrity, and availability (CIA) triad.
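A minimal sketch of such a post-deployment performance check appears below; the endpoint, sample count, and latency budget are assumptions, and real programs would typically use a dedicated load-testing tool.

    # perf_smoke.py -- sketch of a minimal performance check run
    # against a staging deployment after delivery. The URL, sample
    # count, and latency budget are assumptions.
    import statistics
    import time

    import requests

    URL = "https://staging.example.mil/api/status"  # assumed endpoint
    SAMPLES, P95_BUDGET_MS = 50, 500.0

    latencies = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        requests.get(URL, timeout=10)
        latencies.append((time.perf_counter() - start) * 1000)

    p95 = statistics.quantiles(latencies, n=20)[18]  # 95th percentile
    print(f"p95 latency: {p95:.1f} ms (budget {P95_BUDGET_MS} ms)")
    assert p95 <= P95_BUDGET_MS, "non-functional requirement violated"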

Enterprise Services and DevSecOps Pipeline

Reference Source: Software Acquisition Pathway Interim Policy and Procedures, 3 Jan 2020

 

The program should develop a plan to leverage enterprise software development services, if available, and use modern tools and technology to develop and deliver software. This includes the architectural tradeoff decision to use an enterprise-level platform focused on continuous integration and continuous delivery that can provide integrated software tools, services, and standards that enable partners and programs to develop, deploy, and operate applications in a secure, flexible, and interoperable fashion. Enterprise Services will span multiple programs and will be scaled and resourced to support demand. The program should establish enterprise agreements as appropriate, and make enterprise services discoverable to assist in the evaluation and use of the software by other programs if the capability to discover services exists. If a PM determines that the program requires an enterprise service that is not available in the enterprise, the PM should add the solution to the enterprise repository. Use of enterprise services will enable rapid start-up, real-time delivery, and scalability.

Reference Source: Software Acquisition Pathway Guide v1.0

DevSecOps platforms should ideally be provided as an enterprise service with tailoring options as opposed to having each program independently develop separate instances of DevSecOps pipelines. An enterprise service is an enabling capability that can be leveraged by several software programs. Use of enterprise services provides some key benefits such as optimizing cost of platforms across the enterprise, leveraging skills across organizations, and minimizing complexity between toolsets. Considerations that a program should use in selecting an appropriate DevSecOps platform include:

  • Who (Service or Agency) is providing the DevSecOps platform?
  • Does the use of the proposed DevSecOps platform help the program achieve its desired software development value stream map?
  • What type of agreement (e.g., service level agreement) is in place to govern the use of the DevSecOps solution?
  • Does the DevSecOps capability support collection and reporting of a minimum essential list of metrics (see Metrics Plan)?
  • Does the DevSecOps capability support the program’s roadmap?
  • What is the maturity level of the DevSecOps capability (i.e., is this capability being developed, already in use, widely used, etc.)?
  • Does the DevSecOps capability support the delivery model required by the application developer?
  • Are all the required tools and components already integrated into the DevSecOps pipeline?
  • Is the DevSecOps platform fully resourced?
  • Does this capability support the security requirements?
  • Are any key components missing from the DevSecOps service?

Security and Testing

Security and testing should be incorporated throughout the software development and delivery lifecycle (see Software Security). The selected DevSecOps pipeline should provide tools that support and integrate automated testing, security scans, logging, and monitoring.

Source Code Management Tools

Source Code Management (SCM) is an important feature of a DevSecOps pipeline. The use of tools that support source code management allows developers to check in code, enables automated builds, and supports key security activities such as static checks.

Programs should create a designated Continuous Integration (CI) user account within the SCM tool, whose sole purpose is to interact with the CI server. This enables traceability from an access control perspective between SCM and CI services and avoids the need to enter individual user account credentials into the CI service.

Programs should have methods and tools to store source code that is delivered by a contractor. The delivery of the source code should be accompanied by the necessary tools required to manage the source code (see Contracting for Agile Software Development).

The complete software baseline should be interpreted as all configuration artifacts used throughout the development and delivery of software, beginning with the project MVP. This includes all configuration management criteria such as backlog tasks, user stories, epics, and task and time estimates. Baselines may also be split into various categories along the SDLC, such as:

  • Functional Baseline: initial specifications established; contract, etc.
  • Allocated Baseline: state of work products after requirements are approved
  • Developmental Baseline: state of work products amid development
  • Product Baseline: contains the releasable contents of the project
  • Operational Baseline: state of work products upon delivery/deployment of the MVP into production

Continuous Integration

CI is one of the key objectives of DevSecOps. A best practice in many contexts is to run a CI build every time code is checked in. The build process should be automated; while not all tools are equal in this regard, programs should make efforts to automate builds for their given technology set. When crafting a CI build plan, the goal should be to remain as tool-agnostic as possible: relying too heavily on built-in features of specific CI servers can lead to replication issues if the CI tool must be changed. The manual activity required to build a software package from code should be as minimal as possible; any necessary manual processes should be thoroughly documented and then replicated in a CI server.
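One way to keep the build agnostic is to place the build steps in a plain script that any CI server can invoke. The Python sketch below illustrates the idea; the individual step commands are placeholders for a program's real tool chain.

    # build.py -- sketch of a CI-agnostic build entry point. Keeping
    # build steps in a script the CI server merely invokes (rather
    # than in server-specific plugins) eases migration between CI
    # tools. Step commands are placeholders.
    import subprocess
    import sys

    STEPS = [
        ["python", "-m", "pip", "install", "-r", "requirements.txt"],
        ["python", "-m", "pytest", "--maxfail=1"],  # unit/functional gate
        ["python", "-m", "build"],                  # package artifact
    ]

    for step in STEPS:
        print("::", " ".join(step))
        if subprocess.run(step).returncode != 0:
            sys.exit(f"build failed at: {' '.join(step)}")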

Continuous Delivery / Continuous Deployment

Another key objective of DevSecOps is the ability to deliver a version of the software, once compiled, to an environment outside the development environment where it can be operated. In commercial environments, the goal may be Continuous Deployment: once a new working version of the software has been compiled and shown to pass the appropriate quality checks, it can be deployed directly into the operational environment for users. In the DoD context, it is often appropriate to focus instead on Continuous Delivery, in which each new working version of the software is delivered to another environment (e.g., an operationally relevant test environment) but not to operational users. For example, in the case of weapon systems, the Warfighter may not be able to adopt frequent updates to operational systems given the need for similarly updated training and tactics. (Because of this distinction, this guide uses the more general term "Continuous Delivery" throughout.) Programs should ensure that they are making the correct choice concerning where to deliver operational software.

Even though Continuous Deployment may not be appropriate for many of the DoD's use cases, the goal of automating as much as possible throughout the pipeline remains the same. A DevSecOps pipeline practicing Continuous Delivery should have enough automated testing in place that, even though software will not be deployed directly to production, such deployment would still be feasible given the rigor of the in-place automated testing. A common practice that makes this possible is to use containerization to create testing environments on the fly, run automated tests, and then destroy the environment immediately after the tests have concluded. This can be accomplished as part of the Continuous Delivery or Deployment process. Because this short-lived test environment is created with every build, it provides continuous insight into whether Continuous Deployment could be a possibility, even if Continuous Delivery has been the chosen method of moving artifacts into production.
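The sketch below illustrates that pattern using the Docker SDK for Python: create a container from the build artifact, test against it, and destroy it. The image name, port, and health endpoint are assumptions.

    # ephemeral_env.py -- sketch of creating a short-lived test
    # environment per build, testing against it, and destroying it
    # afterward. Image name, port, and endpoint are assumptions.
    import time

    import docker
    import requests

    client = docker.from_env()
    container = client.containers.run(
        "registry.example.mil/myapp:candidate",  # assumed artifact
        detach=True,
        ports={"8080/tcp": 18080},
    )
    try:
        # A health probe stands in for the full automated test suite.
        for _ in range(30):  # wait for the service to come up
            try:
                resp = requests.get("http://localhost:18080/health",
                                    timeout=2)
                if resp.status_code == 200:
                    break
            except requests.ConnectionError:
                time.sleep(1)
        else:
            raise RuntimeError("service never became healthy")
    finally:
        # Destroy the environment immediately after tests conclude.
        container.stop()
        container.remove()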

Additional decisions that programs should consider include:

  • Delivery Model – What delivery model will the program use? Examples include:
    • Model A: Continuous Delivery – automates delivery to persistent environments (either cloud or on-premises)
    • Model B: Cloud-Enabled Delivery – deploys applications automatically after provisioning a new environment from the cloud or data center. The primary difference from Model A is that Model B uses Infrastructure as Code.
    • Model C: Container-Enabled Delivery – deploys applications as a set of containers into one or more hosts that are dynamically created. The primary difference from Model B is that Model C involves full support of modular application architectures (e.g., micro-services).
  • Release Orchestration Tools – What release orchestration tools does the platform provide to support continuous delivery?
  • Container Tools – What container tools will the program use to support continuous deployment?
  • Release Cadence – How often will releases be deployed to various environments in accordance with the delivery model?
  • Release Scope – Will delivery take the form of full or incremental releases?
  • Transition – What are the DevSecOps implementation plan and plan to transition to the pipeline?

Secure Software and Cyber Security Plan

Reference Source: Software Acquisition Pathway Interim Policy and Procedures, 3 Jan 2020

 

PMs shall establish and/or leverage a secure software development pipeline and a security lifecycle plan. Software tests shall be run automatically where possible, and at a predetermined cadence sufficient to ensure that cybersecurity controls and other considerations are addressed early and throughout the acquisition process. PMs should establish the conditions to enable a continuous Authority to Operate (ATO) where appropriate. Ensuring software security includes secure development (coding, test, identity/access management, supply chain risk management), secure capabilities (software patching, encryption, runtime monitoring, and logging), and secure lifecycle management (vulnerability management and configuration control). Automated build scripts and test results shall be available to the test community so that critical verification functions (e.g., performance, reliability) and validation functions (e.g., effectiveness, suitability, and survivability) can be assessed iteratively and incrementally. The automated cyber testing shall be designed to support a continuous ATO if possible, or an aggressive accreditation process otherwise, and shall be augmented with additional testing where appropriate in accordance with the DoD Cybersecurity Guidelines.

 

Reference Source: Software Acquisition Pathway Guide v1.0

Software security is a fundamental consideration and extends beyond the Authority to Operate (ATO). Resources must be committed to building secure software from the beginning of software development and throughout the whole lifecycle. The software program should protect the software and associated critical data from vulnerabilities, internal and external threats, and critical errors that can affect performance or expose sensitive data. Software security spans three broad areas:

  • Secure Development: includes secure coding practices, testing and verification, supply chain considerations, tool chain configuration, and identity/access management.
  • Secure Capabilities: includes identity management/authentication, secure software patching/updates, encryption, authorization/access controls, logging, and error/exception handling.
  • Secure Lifecycle: includes vulnerability management, configuration, vulnerability notification/patching, and end-of-life considerations.

Guiding Principles

While the Risk Management Framework (RMF) and the National Institute of Standards and Technology (NIST) 800-53 controls form a starting point, programs should adopt a framework to aid in analyzing cybersecurity practices throughout the lifecycle and identify opportunities for continued improvement, such as the BSA (the Software Alliance) Framework for Secure Software.

Secure Development
Secure Coding

Secure coding is based on sound, recognizable, and enforceable coding standards and uses standard data formats. Additionally, the software should be secured against known vulnerabilities, unsafe functions, and unsafe libraries. Software should validate input and output to mitigate potential vulnerabilities. The software architecture and design must include software assurance measures. Programs should employ segmentation practices through sandboxing, containerization, or similar methodologies. Software should also implement fault isolation mechanisms.
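As a small illustration of one of these practices, the sketch below validates input against an allowlisted pattern and fails closed; the field format is hypothetical.

    # input_validation.py -- sketch of validating input against an
    # allowlist before use, one element of secure coding practice.
    # The callsign format is illustrative, not a complete scheme.
    import re

    CALLSIGN_RE = re.compile(r"^[A-Z]{2,8}[0-9]{0,3}$")  # assumed format


    def parse_callsign(raw: str) -> str:
        """Reject anything outside the expected shape instead of
        trying to sanitize it; fail closed on bad input."""
        value = raw.strip().upper()
        if not CALLSIGN_RE.fullmatch(value):
            raise ValueError("invalid callsign")
        return value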

Testing and Verification

Using threat model(s) and risk analysis, software developers should identify and map the software attack surface. Based on threat-informed risk, testing should occur at multiple stages in the software development process. Programs should develop a test plan that evaluates the security of the software in conjunction with the functionality. At the basic level, developers should perform code reviews, ideally via automation. Software should be subjected to adversarial testing techniques, including penetration testing.

Supply Chain

Rarely is software developed in isolation by a single organization. Instead, modern software is ‘assembled’ and relies on third-party software components that are outside of the immediate control of the software program. Programs should make efforts to ensure the visibility, traceability, and security of third-party components, including open source components. Using manual and automated technologies, programs should document all software components and trace their lineage and dependencies. Additionally, third-party software security policies should be incorporated into contracts, policies, and standards for vendors providing software components to software programs.
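As a toy illustration of component visibility, the sketch below enumerates installed Python dependencies and their versions; real programs would generate a standard SBOM (e.g., SPDX or CycloneDX) with dedicated tooling.

    # dependency_inventory.py -- sketch of enumerating third-party
    # Python components for supply chain visibility. This only shows
    # the idea of documenting components and versions.
    from importlib.metadata import distributions

    inventory = sorted(
        (dist.metadata["Name"], dist.version) for dist in distributions()
    )
    for name, version in inventory:
        print(f"{name}=={version}")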

Tool Chain

The software development environment should use up-to-date versions of all tools and platform elements. Compilers should be configured to address security vulnerabilities and prohibit unintentional removal or modification of security-critical code. Containers and other virtualization tools should use secure configurations.

Identity and Access Management

The software development environment should implement strong authentication methods for robust access control. User and operator credentials should be stored securely and revoked/disabled when no longer needed. Programs should develop and implement a policy for granting access control according to specific roles. Any changes or deletions to code in the development environment should be logged and flagged if unauthorized.

Secure Capabilities
Identity Management and Authentication

The software architecture should address weaknesses that would create risk of authentication failure. This includes avoiding the use of hard-coded passwords, implementing authentication mechanisms that avoid common security weaknesses, and properly storing authentication information (credentials and keys).
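The sketch below illustrates two of these points in Python: credentials drawn from the environment rather than hard-coded, and passwords stored only as salted, stretched hashes. The variable name and iteration count are assumptions.

    # credential_store.py -- sketch of avoiding hard-coded passwords
    # and storing authentication material properly: secrets come from
    # the environment, stored passwords are salted and stretched.
    import hashlib
    import os
    import secrets

    # Never hard-code credentials; pull them from a managed source.
    API_TOKEN = os.environ["APP_API_TOKEN"]  # assumed variable name


    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest) for storage; never the password."""
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), salt, 600_000
        )
        return salt, digest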

Patchability

Software should be able to incorporate secure updates and security patches, notify users of patches and updates, and revert to last ‘known good’ states in case of failed installation of a patch or update.
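A minimal sketch of the revert-to-known-good idea follows; the paths and self-check command are placeholders, and a real update mechanism would be considerably more involved.

    # update_rollback.py -- sketch of applying an update while
    # retaining a last-known-good version to revert to on failure.
    # Paths and the health check are placeholders.
    import shutil
    import subprocess

    APP = "/opt/myapp/current"            # assumed install location
    BACKUP = "/opt/myapp/last_known_good"
    CANDIDATE = "/tmp/myapp_update"       # assumed staged update


    def healthy() -> bool:
        # Stand-in for a real post-install verification step.
        return subprocess.run([f"{APP}/bin/selfcheck"]).returncode == 0


    shutil.copytree(APP, BACKUP, dirs_exist_ok=True)      # keep known good
    shutil.copytree(CANDIDATE, APP, dirs_exist_ok=True)   # apply update
    if not healthy():
        shutil.copytree(BACKUP, APP, dirs_exist_ok=True)  # revert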

Encryption

Developers should employ strong encryption throughout the software to protect sensitive data from unauthorized disclosure. Encryption also protects the software itself from tampering. Strong encryption entails the use of authenticated encryption and strong algorithms and key lengths. Encryption keys should be stored securely and managed separately from encrypted data. Developers should implement mechanisms to manage keys and certificates and validate the certificates.
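For illustration, the sketch below applies authenticated encryption (AES-GCM) via the cryptography package; key storage and certificate management are out of scope here and belong to the program's key management solution.

    # encrypt_example.py -- sketch of authenticated encryption using
    # AES-GCM from the `cryptography` package. Key management (storage
    # separate from data, rotation, certificates) is out of scope.
    import os

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # store via a KMS, not code
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)  # unique per message
    ciphertext = aesgcm.encrypt(nonce, b"sensitive payload", b"header-aad")

    # Decryption verifies integrity; tampering raises InvalidTag.
    plaintext = aesgcm.decrypt(nonce, ciphertext, b"header-aad")
    assert plaintext == b"sensitive payload"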

Authorization and Access Controls

The software architecture should support authorization and access controls based on the principle of least privilege. In the case of authorization failure, the software should not grant access to unauthorized or unauthenticated users.
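The sketch below illustrates least-privilege, default-deny authorization with a simple role check; roles and actions are placeholders.

    # authz_example.py -- sketch of role-based checks that default to
    # deny, reflecting least privilege. Roles/actions are placeholders.
    import functools

    ROLE_PERMISSIONS = {
        "viewer": {"read"},
        "operator": {"read", "execute"},
    }


    def requires(action):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(user_role, *args, **kwargs):
                # Unknown roles get no permissions: fail closed.
                if action not in ROLE_PERMISSIONS.get(user_role, set()):
                    raise PermissionError(f"{user_role} may not {action}")
                return fn(user_role, *args, **kwargs)
            return wrapper
        return decorator


    @requires("execute")
    def run_job(user_role, job_id):
        return f"job {job_id} started"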

Logging

The software should log critical security events and incidents and should be able to distinguish between monitoring logs and auditing logs. Logged events should include identifying information and associated timestamps but should avoid capturing sensitive information such as system details, passwords, or session identifiers. Access to logs should be restricted to authorized users.
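As a small illustration, the sketch below logs a security event with identifying information and a timestamp while deliberately omitting sensitive values; the format and fields are illustrative only.

    # security_logging.py -- sketch of logging a security event with
    # an identifier and timestamp while keeping sensitive values
    # (passwords, session IDs) out of the record.
    import logging

    logging.basicConfig(
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
        level=logging.INFO,
    )
    audit = logging.getLogger("audit")


    def log_failed_login(username: str, source_ip: str) -> None:
        # Log who/where/when -- never the attempted password.
        audit.warning("failed login user=%s src=%s", username, source_ip)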

Secure Lifecycle
Vulnerability Management

Vulnerability management relies on the development, implementation, and maintenance of a vulnerability management plan that includes identification of a vulnerability occurrence, verification of the vulnerability, remediation and mitigation, release of a solution, and any post-release activities.

Configuration

Software deployment should include configurations and configuration guidance that facilitate secure installation and operation. The documentation that specifies configuration parameters should be as restrictive as possible so as not to expose the software to attacks and exploits.

Special Considerations – Continuous Approval to Operate (cATO)

The key benefit of cATO is speed of delivery. The cATO shifts the focus of an ATO from the software product to the software factory by which that product is developed: the cATO process examines the tools, technologies, and processes that produce the software. It is important to note that a cATO does not represent an entirely new approach to risk. Rather, it moves the checks into automation so that risks can be identified and dealt with continuously throughout the acquisition process. Programs may find the Kessel Run (KR) "Continuous Authorization Risk Management Playbook" to be a useful reference. That document distills lessons from one of the best sources of DoD experience to date in achieving an effective cATO. It gives programs an overview of the considerations necessary for achieving a pipeline that can be given a continuous authorization, based on activities in five areas, which overlap with other areas of this guidance:

  • Continuous deployment
  • Architecture
  • Product and process
  • Lean management and monitoring
  • Organizational culture.

The decision whether to grant a cATO falls to the cognizant Authorizing Official (AO), and is made based on the acceptance of risk for a given system in a given environment. As a result, program teams should work closely with their AO to tailor these guidelines appropriately. The program’s DevSecOps software factory instance must implement the controls defined in NIST SP 800-53 as a minimum baseline.[1] AOs should consider continuous authorization only with stipulations attached that establish guideposts indicating when the authorization must be re-established. Although the final determination will reflect the risk assessment made in the specific context, examples can help AOs think through the relevant factors. For instance, AOs should consider establishing some performance measures or criteria for the following indicators of the risk of continued operation:

  • Iterative penetration testing
  • Involvement of qualified security personnel, as mutually agreed by the program and the AO
  • Real-time access by designated cybersecurity personnel to results of testing, scanning, monitoring, and performance metrics
  • Critical vulnerabilities mitigated within 24 hours of discovery
  • Moderate vulnerabilities mitigated within a timeframe acceptable to the AO
  • Disciplined compliance with appropriate processes / procedures
  • Appropriate steps taken to ensure security of the development environment (including physical, information, and operational security measures)
  • Team members who are up to speed on software engineering and cybersecurity best practices.

AOs should also rely on periodic, independent outside assessments of the security pipeline.

Safety Critical Software

Continuous deployment of safety critical software must be approached much differently than normal commercial CI/CD practice. All safety critical software standards and guidance, such as MIL-STD-882E, DoD Standard Practice: System Safety; DO-178C, Software Considerations in Airborne Systems and Equipment Certification; the Joint Software Systems Safety Engineering Handbook (JSSSEH); and AOP-52, Guidance on Software Safety Design and Assessment of Munition-related Computing Systems, apply when executing this software pathway.

[1] Authorizing Official for Cyberspace Innovation, SAF/CIO A6. "Continuous ATO Playbook: Constructing a Secure Software Factory to Achieve Ongoing Authority to Operate." (undated)