
Cybersecurity

How To Use This Site

Each page in this pathway presents curated knowledge drawn from acquisition policies, guides, templates, training, reports, websites, case studies, and other resources, and provides a framework for functional experts and practitioners across DoD to contribute to the collective knowledge base.

DoD and Service policy is indicated by a BLUE vertical line.

Directly quoted material is preceded by a link to the Reference Source.

Reference Source: DODI 5000.87 Section 1.2.i

 

Cybersecurity and program protection will be addressed from program inception throughout the program’s lifecycle in accordance with applicable cybersecurity policies and issuances.  A risk-based management approach will be an integral part of the program’s strategies, processes, designs, infrastructure, development, test, integration, delivery, and operations.  Software assurance, cybersecurity, and test and evaluation are integral parts of this approach to continually assess and measure cybersecurity preparedness and responsiveness, identify and address risks, and execute mitigation actions.

 

Reference Source: DODI 5000.87 Section 3.2

 

 

The chosen software development methodology will incorporate continuous testing and evaluation, resiliency, and cybersecurity, with maximum possible automation, as persistent requirements and include a risk-based lifecycle management approach to address software vulnerabilities, supply chain and development environment risk, and intelligence threats throughout the entire lifecycle.

 

Cybersecurity strategies, in accordance with applicable cybersecurity policies and issuances, include recurring assessments of the supply chain, development environment, processes, and tools, as well as continuous automated cybersecurity testing and operational evaluation, to provide a system resilient to offensive cyber operations.

 

Reference Source: DODI 5000.87 Section 3.3.b

 

Subsequent capability releases will be delivered at least annually. Software updates to address cybersecurity vulnerabilities will be released in a timely manner, including out-of-cycle releases as needed, per the program’s risk-based lifecycle management approach.

 

Automated cyber testing and continuous monitoring of operational software will be designed and implemented to support a continuous authorization to operate (cATO) or an accelerated accreditation process to the maximum extent practicable, and will be augmented with additional testing where appropriate in accordance with cybersecurity policies and in coordination with the assigned authorizing official.  All safety-critical software standards and guidance apply for programs using the software acquisition pathway.  Programs will implement recurring cybersecurity assessments of the development environment, processes, and tools.

 

Cybersecurity and software assurance will be integral to strategies, designs, development environment, processes, supply chain, architectures, enterprise services, tests, and operations.  Continuous and automated cybersecurity and cyber threat testing will identify vulnerabilities to help ensure software resilience throughout the lifecycle.  Program managers (PMs) will work with stakeholders to provide sufficient controls to enable a cATO where appropriate.  Ensuring software security includes:

  • Secure development (e.g., development environment, vetted personnel, coding, test, identity and access management, and supply chain risk management).
  • Cybersecurity and assurance capabilities (e.g., software updates and patching, encryption, runtime monitoring, and logging).
  • Secure lifecycle management (e.g., vulnerability management, rigorous and persistent cybersecurity testing, and configuration control).

 

Each program will develop and track a set of metrics to assess and manage the performance, progress, speed, cybersecurity, and quality of the software development, its development teams, and its ability to meet users’ needs.

Cybersecurity Guidance

Reference Source: USD(A&S) Guidance

Software security is a fundamental consideration and extends beyond the Authorization to Operate (ATO). Resources must be committed to building secure software from the beginning of software development and throughout the whole lifecycle. The software program should protect the software and associated critical data from vulnerabilities, internal and external threats, and critical errors that can affect performance or expose sensitive data. Software security spans three broad areas:

  • Secure Development: includes secure coding practices, testing and verification, supply chain considerations, tool chain configuration, and identity/access management.
  • Secure Capabilities: includes identity management/authentication, secure software patching/updates, encryption, authorization/access controls, logging, and error/exception handling.
  • Secure Lifecycle: includes vulnerability management, configuration, vulnerability notification/patching, and end-of-life considerations.

Guiding Principles

While the Risk Management Framework (RMF) and the National Institute of Standards and Technology (NIST) SP 800-53 controls form a starting point, programs should adopt a framework to aid in analyzing cybersecurity practices throughout the lifecycle and identifying opportunities for continued improvement, such as the BSA (the Software Alliance) Framework for Secure Software.

Secure Development

Reference Source: USD(A&S) Guidance

Secure Coding

Secure coding is based on sound, recognizable, and enforceable coding standards and uses standard data formats. Additionally, the software should be secured against known vulnerabilities, unsafe functions, and unsafe libraries. Software should validate input and output to mitigate potential vulnerabilities. The software architecture and design must include software assurance measures. Programs should employ segmentation practices through sandboxing, containerization, or similar methodologies. Software should also implement fault isolation mechanisms.
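
For illustration, a minimal Python sketch of allow-list input validation; the field name and pattern are hypothetical examples, not requirements from this guidance:

    import re

    # Allow-list pattern: accept only the characters this field legitimately needs (illustrative).
    CALLSIGN_PATTERN = re.compile(r"^[A-Z0-9-]{1,16}$")

    def validate_callsign(raw: str) -> str:
        """Reject any input that does not match the expected format before it reaches program logic."""
        value = raw.strip().upper()
        if not CALLSIGN_PATTERN.fullmatch(value):
            raise ValueError("callsign failed input validation")
        return value

Validating against an explicit allow-list, rather than attempting to enumerate bad input, is the design choice this sketch illustrates.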

Testing and Verification

Using threat model(s) and risk analysis, software developers should identify and map the software attack surface. Based on threat-informed risk, testing should occur at multiple stages in the software development process. Programs should develop a test plan that evaluates the security of the software in conjunction with the functionality. At the basic level, developers should perform code reviews, ideally via automation. Software should be subjected to adversarial testing techniques, including penetration testing.
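
A hedged sketch of one way to make attack-surface mapping actionable: record entry points with the threat categories the test plan must exercise (STRIDE is used here only as an example taxonomy) and flag threats with no test coverage. All names and entries are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class EntryPoint:
        name: str           # e.g., an API route, message queue, or file parser
        interface: str      # network, file, user input, etc.
        threats: list[str]  # threat categories the test plan must exercise

    # Hypothetical attack-surface inventory used to drive threat-informed test cases.
    ATTACK_SURFACE = [
        EntryPoint("POST /missions", "network", ["Tampering", "Elevation of Privilege"]),
        EntryPoint("telemetry file import", "file", ["Tampering", "Denial of Service"]),
    ]

    def untested_threats(covered: set[str]) -> set[str]:
        """Report threat categories in the model that no current test exercises."""
        required = {threat for entry in ATTACK_SURFACE for threat in entry.threats}
        return required - covered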

Supply Chain

Rarely is software developed in isolation by a single organization. Instead, modern software is ‘assembled’ and relies on third-party software components that are outside of the immediate control of the software program. Programs should make efforts to ensure the visibility, traceability, and security of third-party components, including open source components. Using manual and automated technologies, programs should document all software components and trace their lineage and dependencies. Additionally, third-party software security policies should be incorporated into contracts, policies, and standards for vendors providing software components to software programs.
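
As a starting point (not a complete software bill of materials such as SPDX or CycloneDX would provide), the following Python sketch shows automated documentation of third-party components in a Python environment; the license field depends on what each package publishes:

    from importlib import metadata

    def dependency_inventory() -> list[dict]:
        """List installed third-party Python distributions with name, version, and declared license."""
        inventory = []
        for dist in metadata.distributions():
            inventory.append({
                "name": dist.metadata["Name"],
                "version": dist.version,
                "license": dist.metadata.get("License", "UNKNOWN"),
            })
        return sorted(inventory, key=lambda item: (item["name"] or "").lower())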

Tool Chain

The software development environment should use up-to-date versions of all tools and platform elements. Compilers should be configured to address security vulnerabilities and prohibit unintentional removal or modification of security-critical code. Containers and other virtualization tools should use secure configurations.
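
As one illustration of enforcing compiler configuration, the sketch below checks a C/C++ build's options for common GCC/Clang hardening flags; the flag set is an example, not a mandated baseline:

    # Example hardening flags for GCC/Clang toolchains (illustrative, not an approved baseline).
    REQUIRED_FLAGS = {
        "-fstack-protector-strong",   # stack-smashing protection
        "-D_FORTIFY_SOURCE=2",        # hardened C library functions
        "-Wl,-z,relro",               # read-only relocations at link time
    }

    def missing_hardening_flags(cflags: str) -> set[str]:
        """Return any required hardening flags absent from the build's compiler options."""
        return REQUIRED_FLAGS - set(cflags.split())

    # Example: missing_hardening_flags("-O2 -fstack-protector-strong") returns the two flags still to add.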

Identity and Access Management

The software development environment should implement strong authentication methods for robust access control. User and operator credentials should be stored securely and revoked/disabled when no longer needed. Programs should develop and implement a policy for granting access control according to specific roles. Any changes or deletions to code in the development environment should be logged and flagged if unauthorized.
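
A minimal sketch, assuming a hypothetical role-to-permission mapping, of how a development environment might enforce role-based access and flag unauthorized change attempts in an audit log:

    import logging

    audit_log = logging.getLogger("dev_env.audit")

    # Hypothetical role-to-permission mapping for the development environment.
    ROLE_PERMISSIONS = {
        "developer": {"commit"},
        "release_engineer": {"commit", "tag", "delete_branch"},
    }

    def authorize_change(user: str, role: str, action: str) -> bool:
        """Allow only actions granted to the user's role; log and flag everything else."""
        if action in ROLE_PERMISSIONS.get(role, set()):
            audit_log.info("change allowed: user=%s role=%s action=%s", user, role, action)
            return True
        audit_log.warning("UNAUTHORIZED change attempt: user=%s role=%s action=%s", user, role, action)
        return False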

Secure Capabilities

Reference Source: USD(A&S) Guidance

Identity Management and Authentication

The software architecture should address weaknesses that would create risk of authentication failure. This includes avoiding the use of hard-coded passwords, implementing authentication mechanisms that avoid common security weaknesses, and properly storing authentication information (credentials and keys).
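
A minimal sketch of proper credential storage using the Python standard library: passwords are salted and hashed rather than hard-coded or stored in plaintext. The scrypt parameters shown are illustrative; programs should follow approved cryptographic guidance.

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Derive a salted hash; the plaintext password is never stored or hard-coded."""
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        """Recompute the hash and compare in constant time."""
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)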

Patchability

Software should be able to incorporate secure updates and security patches, notify users of patches and updates, and revert to last ‘known good’ states in case of failed installation of a patch or update.
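
A hedged sketch of the revert-to-known-good behavior described above: the update is checked against a published hash before installation, and the prior version is restored if installation fails. In practice, updates should also be digitally signed; the file paths and hash check here are illustrative only.

    import hashlib
    import shutil
    from pathlib import Path

    def apply_update(current: Path, update: Path, expected_sha256: str) -> bool:
        """Verify the update, keep a known-good copy, and roll back if installation fails."""
        if hashlib.sha256(update.read_bytes()).hexdigest() != expected_sha256:
            return False                          # reject tampered or corrupted updates
        backup = current.with_name(current.name + ".known-good")
        shutil.copy2(current, backup)             # preserve the last known-good version
        try:
            shutil.copy2(update, current)         # install the update
            return True
        except OSError:
            shutil.copy2(backup, current)         # revert to the known-good state
            return False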

Encryption

Developers should employ strong encryption throughout the software to protect sensitive data from unauthorized disclosure. Encryption also protects the software itself from tampering. Strong encryption entails the use of authenticated encryption and strong algorithms and key lengths. Encryption keys should be stored securely and managed separately from encrypted data. Developers should implement mechanisms to manage keys and certificates and validate the certificates.
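
For illustration, a minimal sketch of authenticated encryption using AES-256-GCM from the widely used third-party cryptography package (an example library, not a mandated one). Keys are assumed to come from an approved key management service and are managed separately from the encrypted data.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_record(key: bytes, plaintext: bytes, context: bytes) -> bytes:
        """Authenticated encryption; 'context' is bound to the ciphertext as associated data."""
        nonce = os.urandom(12)                              # unique nonce per message
        return nonce + AESGCM(key).encrypt(nonce, plaintext, context)

    def decrypt_record(key: bytes, blob: bytes, context: bytes) -> bytes:
        """Raises an InvalidTag error if the ciphertext or associated data was tampered with."""
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, context)

    # Key generation shown for completeness; production keys belong in an approved key management system.
    # key = AESGCM.generate_key(bit_length=256)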

Authorization and Access Controls

The software architecture should support authorization and access controls based on the principle of least privilege. In the case of authorization failure, the software should not grant access to unauthorized or unauthenticated users.
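
A minimal deny-by-default sketch of least-privilege authorization; the roles and permissions are hypothetical placeholders:

    # Hypothetical role-to-permission map; anything not explicitly granted is denied.
    PERMISSIONS = {
        "analyst": {"read_report"},
        "admin": {"read_report", "publish_report", "manage_users"},
    }

    class AuthorizationError(Exception):
        pass

    def require_permission(role: str, permission: str) -> None:
        """Deny by default: raise unless the role explicitly holds the permission."""
        if permission not in PERMISSIONS.get(role, set()):
            raise AuthorizationError(f"role '{role}' is not authorized for '{permission}'")

    # Usage: require_permission("analyst", "publish_report") raises AuthorizationError.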

Logging

The software should log critical security events and incidents and should be able to distinguish between monitoring logs and auditing logs. Logged events should include identifying information and associated timestamps but should avoid capturing sensitive information such as system details, passwords, or session identifiers. Access to logs should be restricted to authorized users.
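
A minimal sketch of security event logging with Python's standard logging module: each event carries a timestamp and identifying information, and the message deliberately excludes passwords, tokens, and session identifiers. The logger name and event fields are illustrative.

    import logging

    security_log = logging.getLogger("app.security")
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    security_log.addHandler(handler)
    security_log.setLevel(logging.INFO)

    def log_failed_login(user_id: str, source_ip: str) -> None:
        """Record who, where, and when; never log credentials or session identifiers."""
        security_log.warning("authentication failure user_id=%s source_ip=%s", user_id, source_ip)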

Secure Lifecycle

Reference Source: USD(A&S) Guidance

Vulnerability Management

Vulnerability management relies on the development, implementation, and maintenance of a vulnerability management plan that covers identification of a vulnerability occurrence, verification of the vulnerability, remediation and mitigation, release of a solution, and any post-release activities.

Configuration

Software deployment should include configurations and configuration guidance that facilitate secure installation and operation. The documentation that specifies configuration parameters should be as restrictive as possible so as not to expose the software to attacks and exploits.

Special Considerations – The Team and Achieving Full Speed

In addition to leveraging opportunities to automate pipeline-related infrastructure and approval processes, programs can achieve timeline efficiencies by keeping in mind a lesson learned from the Agile Pilots with respect to the scope of agile teams.  In Agile, the team designs, builds, and tests smaller batches of work and readies them for release on very short time frames. In many cases, automated tests should be built as part of the build cycle.

Testing organizations and control boards that exist outside the team and expect to test/evaluate large amounts of work long after that software has been built, or a program that does not ensure evaluation/acceptance criteria are clearly defined during the team’s design phase, will negatively impact work completion and become a bottleneck for delivery (when the benefits of the work are realized). Therefore, testing/evaluation organizations and control boards need to integrate their team members into the Agile team or step out of the process. Further, all parties involved need to be experts in automated test generation and review to avoid falling back into outdated waterfall test and evaluation approaches. The same is true of cybersecurity testing, evaluation, and control boards. Organizations need to re-imagine the types and the continuum of test community efforts that can enable continuous iterative development and at-pace deployment, to ensure “at the speed of relevance” mission capability (results) for the warfighter:

  • Organizations and teams should be pushing everything test-related “to the left”, and test-driven development (TDD) should be the standard. If teams believe TDD can’t be performed, then they need to get an exception (so the organization can fix the problem/remove the blocker).
  • Automate wherever possible.
  • Embed test and cyber representation within the teams, and bring any control boards to the table during design to ensure complete transparency of needs.
  • Evaluate existing test and cyber organizations, as well as control boards to determine if they are adding significant value beyond the team’s efforts or if they are blockers/overhead.

Integrated Security Practices Using Automated Testing

Security automation takes place at multiple phases throughout the DevSecOps pipeline. Since the software pushed through the pipeline and the pipeline itself are both living entities that mature over time, security must be treated as a state: a software application is only as secure as its most recent security tests and assessments show it to be. With every new deployment of code, the attack surface of a given software application grows, and the code must be reassessed appropriately. This concept applies to the application code itself, to third-party dependencies/libraries, and to the dependencies used by the underlying infrastructure. The security of all three categories must be assessed in an automated fashion as part of the DevSecOps pipeline using automated static code analysis, automated penetration testing, and automated container vulnerability scanning.

In a well-built DevSecOps pipeline, three major forms of automated testing should occur: automated security testing; automated functional, unit, and integration testing; and automated application performance testing. The first two forms of testing take place during the CI (automated build) phase of the DevSecOps pipeline. The last form takes place in a production-like (staging/testing) environment after an initial deployment and only occurs once the software has met the quality criteria for the first two forms of testing.

To take full advantage of the power of automated testing, programs must determine the quality criteria for testing pass/fail as early in the software development lifecycle (SDLC) as possible. The specific criteria will vary on a project-by-project basis, and establishing them requires collaboration among the development, security, and operations teams, as well as the designated project AO, for automated testing to be as effective as possible.
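
As a sketch of how pre-agreed quality criteria can gate a pipeline stage, the example below aggregates results from the automated test forms against placeholder thresholds; the actual criteria and metric names would be set jointly by the development, security, and operations teams and the AO.

    import sys

    # Placeholder thresholds; real criteria are agreed by development, security, operations, and the AO.
    QUALITY_CRITERIA = {
        "critical_vulnerabilities": 0,   # from automated security testing
        "failed_tests": 0,               # from automated functional/unit/integration testing
        "p95_response_ms": 500,          # from automated performance testing in staging
    }

    def quality_gate(results: dict) -> bool:
        """Fail the pipeline stage if any measured result exceeds its agreed threshold."""
        return all(results[name] <= limit for name, limit in QUALITY_CRITERIA.items())

    if __name__ == "__main__":
        measured = {"critical_vulnerabilities": 0, "failed_tests": 0, "p95_response_ms": 420}
        sys.exit(0 if quality_gate(measured) else 1)   # a non-zero exit code fails the CI job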

Automated Security Testing

Automated security testing can be broken down into three subcategories: automated static code analysis, automated penetration testing, and automated Docker image vulnerability scanning. Development teams must always perform testing in at least the first category; conducting an automated penetration test or Docker image vulnerability scan as part of the CI process may not be necessary for software projects that are not developing web applications or using containers.

Information security teams must have the proper personnel (preferably with a software engineering or computer science background) to review the reports generated by static code analysis tools. This allows security engineers and developers to have the conversations necessary to mitigate any vulnerabilities discovered by the tools. Automated static code analysis is readily available via third-party plugins for standard CI services. Static code analysis can be used for web applications as well as embedded systems software, and implementing it as early as possible in the SDLC enables developers to mitigate vulnerabilities as they occur during initial development of a project’s MVP. However, implementation of automated static code analysis cannot be considered “successful” merely because it has been integrated into the CI process. It is imperative that the correct information security personnel be delegated the responsibility of reviewing the reports generated by static code analysis tools; for example, an information security professional whose primary skillset is network/perimeter security should not be tasked with reviewing a static code analysis report. Assigning the evaluation of static code analysis reports to individuals who have “taken coding classes but have never practiced the discipline as a profession” creates additional risk and could complicate the iterative feedback conversations between security and development teams.

Automated penetration testing of web applications also takes place during the CI phase of the pipeline. Tool suites that provide penetration testing should be strategically placed throughout the CI build plan to provide continuous automated reports that can be assessed by information security teams.

Lastly, if applicable, automated scanning of images is more important than scanning the application code itself. Automated vulnerability scanning of images requires security and operations information technology (IT) teams to work in unison throughout the composition and sustainment of custom containers. Teams must maintain constant vigilance over the security of Infrastructure as Code scripts to prevent exploitable infrastructure from being deployed into production.
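
One way to wire static code analysis into a CI stage is sketched below using Bandit, an open-source Python analyzer, purely as an example tool; the command-line options and JSON field names reflect that tool's report format and are not a program requirement.

    import json
    import subprocess
    import sys

    def count_high_severity_findings(source_dir: str = "src") -> int:
        """Run the analyzer and count high-severity findings in its JSON report."""
        # The analyzer exits non-zero when it finds issues, so its return code is not treated as a crash.
        proc = subprocess.run(
            ["bandit", "-r", source_dir, "-f", "json", "-q"],
            capture_output=True, text=True,
        )
        report = json.loads(proc.stdout or "{}")
        return sum(1 for issue in report.get("results", []) if issue.get("issue_severity") == "HIGH")

    if __name__ == "__main__":
        findings = count_high_severity_findings()
        print(f"high-severity findings: {findings}")
        sys.exit(1 if findings else 0)   # fail the CI stage until findings are reviewed and mitigated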

Automated Functional, Unit, and Performance Testing

Automated functional and unit testing is the second form of testing incorporated in the CI process. Once an initial form of automated security testing has been integrated into a DevSecOps pipeline, developers should implement automated functional testing. It is critical to ensure that an appropriate logging mechanism is in place to capture test results; a logging solution is crucial to pinpointing exactly where a test automation failure occurred. For web applications, a web browser automation tool is necessary to simulate the functionality that users exercise in a browser. In combination with test automation frameworks, automated functional/unit testing can be integrated into the CI process and can use predetermined quality criteria as pass/fail thresholds for CI builds.

The amount of testing that can be automated will vary among projects, and the forms of automated testing that occur will ultimately depend on the type of software pushed through the pipeline. Automated functional testing of web applications is relatively feasible in comparison to testing of embedded systems software because of the existence of extensive testing frameworks; for example, web applications undergo a tremendous amount of automated front end/graphical user interface (GUI) testing, while embedded systems software does not. Automated integration testing, however, must take place regardless of the type of software that is built and deployed.

Finally, automated application performance testing takes place outside the CI phase of the DevSecOps pipeline. Certain performance-related bugs and issues may only be detected by this type of rigorous testing. Application performance testing occurs after an initial deployment to a staging/test/production-like environment, which will expose issues in non-functional requirements that could not be replicated in a development environment. It is important to conduct application performance testing as soon as possible and with every deployment (e.g., on the MVP once it has initially been deployed to staging), so that performance/architectural defects can be detected and addressed in the earliest sprints possible. Availability issues discovered by application performance testing are themselves security issues because they violate the availability leg of the confidentiality, integrity, and availability (CIA) triad.
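
For illustration, a small automated unit/functional test written with pytest (an example framework); the function under test and its inputs are hypothetical, and the negative cases double as lightweight security tests in the CI build:

    import pytest

    def normalize_track_id(raw: str) -> str:
        """Hypothetical function under test: trims and upper-cases an alphanumeric track identifier."""
        value = raw.strip().upper()
        if not value.isalnum():
            raise ValueError("invalid track identifier")
        return value

    def test_normalize_track_id_happy_path():
        assert normalize_track_id("  ab123 ") == "AB123"

    @pytest.mark.parametrize("bad_input", ["", "ab 123", "ab;DROP TABLE tracks"])
    def test_normalize_track_id_rejects_malformed_input(bad_input):
        # Malformed and hostile inputs must be rejected, not silently accepted.
        with pytest.raises(ValueError):
            normalize_track_id(bad_input)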

Safety Critical Software

Continuous deployment of safety-critical software must be approached much differently than normal commercial CI/CD practices.  All safety-critical software standards and guidance apply when executing this software pathway, such as MIL-STD-882E (DoD Standard Practice: System Safety), DO-178C (Software Considerations in Airborne Systems and Equipment Certification), the Joint Software Systems Safety Engineering Handbook (JSSSEH), and AOP-52 (Guidance on Software Safety Design and Assessment of Munition-Related Computing Systems).

See also: Authorizing Official for Cyberspace Innovation, SAF/CIO A6, “Continuous ATO Playbook: Constructing a Secure Software Factory to Achieve Ongoing Authority to Operate” (undated).

Continuous Authorization to Operate (cATO)

Reference Source: USD(A&S) Guidance

The key benefit of cATO is to promote speed in delivery. The cATO shifts the focus of an ATO from the software product to the software factory by which that product is developed: the cATO process examines the tools, technologies, and processes that produce the software. It is important to note that a cATO does not represent an entirely new approach to risk; rather, it moves the checks into automation so that risks can be identified and dealt with continuously throughout the development process. Programs may find the Kessel Run (KR) “Continuous Authorization Risk Management Playbook” to be a useful reference. This document reflects lessons from one of the best sources of DoD experience to date in achieving an effective cATO. It gives programs an overview of the considerations necessary for achieving a pipeline that can be granted a continuous authorization, based on activities in five areas, which overlap with other areas of this guidance:

  • Continuous deployment
  • Architecture
  • Product and process
  • Lean management and monitoring
  • Organizational culture.

The decision whether to grant a cATO will result from a two-step approach. First, the cognizant Authorizing Official (AO) and technical staff will establish the required controls for the system based on the risk of that system in a given environment. As a result, program teams should work closely with their AO to tailor these guidelines appropriately. The program’s DevSecOps software factory must comply with a DoD DevSecOps reference design and must implement the controls defined in NIST SP 800-53 as a minimum baseline. When considering continuous authorization, AOs must attach stipulations that establish guideposts indicating when the authorization must be re-established. Once the AO grants the system’s initial ATO, the AO, together with the program, can request a cATO through the DoD CISO.

Although the final determination will reflect the risk assessment made in the specific context, examples can help AOs think through the relevant factors. For instance, AOs should consider establishing some performance measures or criteria for the following indicators of the risk of continued operation:

  • Iterative penetration testing
  • Involvement of qualified security personnel, as mutually agreed by the program and the AO
  • Real-time access by designated cybersecurity personnel to results of testing, scanning, monitoring, and performance metrics
  • Critical vulnerabilities mitigated within 24 hours of discovery
  • Moderate vulnerabilities mitigated within a timeframe acceptable to the AO
  • Disciplined compliance with appropriate processes / procedures
  • Appropriate steps taken to ensure security of the development environment (including physical, information, and operational security measures)
  • Team members who are up to speed on software engineering and cybersecurity best practices
  • Where practical, pipelines should be protected by defensive cyber operations (whether within the program, by the Service, or at the enterprise level)

Systems with cATOs should undergo periodic, independent outside assessments of the security pipeline.

Resources: