
Test Strategy


Reference Source: DODI 5000.87 Section 1.2.k

Software development testing, government developmental testing, system safety assessment, security certification, and operational test and evaluation will be integrated, streamlined, and automated to the maximum extent practicable to accelerate delivery timelines based on early and iterative risk assessments. Maximum sharing, reciprocity, availability, and reuse of results and artifacts between the various testing and certification organizations is encouraged.

Reference Source: DODI 5000.87 Section 3.2.f


The test strategy defines the streamlined processes by which capabilities, features, user stories, use cases, etc., will be tested and evaluated to satisfy developmental test and evaluation criteria and to demonstrate operational effectiveness, suitability, interoperability, and survivability, including cyber survivability for operational test and evaluation. The strategy will:

  • Identify key independent test organizations and their roles and responsibilities and establish agreements on how they will be integrated early into the planning and development activities throughout the software lifecycle.
  • Encourage and identify test artifacts that can and will be shared across the testing and certification communities (e.g., developmental test and evaluation, operational test and evaluation, system safety assessments, and security certification).
  • Identify the tools and resources necessary to assist in data collection and transparency to support developmental test and evaluation and operational test and evaluation objectives.
  • For programs executing the embedded software path, include a safety critical risk assessment and mitigation strategy, including any safety critical implications.
  • Include a strategy to assess software performance, reliability, suitability, interoperability, survivability, operational resilience, and operational effectiveness.
  • Programs using the embedded software path will align test and integration with the testing and delivery schedules of the overarching system in which the software is embedded, including aligning resources and criteria for transitioning from development to test and operational environments.


Automated testing and operational monitoring should be used as much as practicable for user acceptance and to evaluate software suitability, interoperability, survivability, operational resilience, and operational effectiveness. The decision authority (DA) will approve the test strategy, and DOT&E will be the final approver of test strategies for programs on the DOT&E Oversight List. The test strategy should include information on the verification, validation, and accreditation authority, approach, and schedule for models and simulations in accordance with applicable modeling and simulation policies. Continuous runtime monitoring of operational software will provide health-related reporting (e.g., performance, security, anomalies), as well as additional data collection opportunities to support test and continuous operational test.
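For illustration only, the sketch below shows one minimal way continuous runtime monitoring might be wired up, assuming a hypothetical HTTP health endpoint; the URL, polling interval, and logging choices are placeholders rather than anything prescribed by policy:

```python
# Minimal runtime-monitoring sketch. HEALTH_URL and POLL_INTERVAL_SECONDS are
# hypothetical; a real system would feed these logs into its telemetry pipeline.
import logging
import time

import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("runtime-monitor")

HEALTH_URL = "https://example.mil/app/health"  # hypothetical endpoint
POLL_INTERVAL_SECONDS = 60

def poll_health() -> None:
    """Poll the application's health endpoint and log performance/anomaly data."""
    while True:
        started = time.monotonic()
        try:
            response = requests.get(HEALTH_URL, timeout=10)
            latency_ms = (time.monotonic() - started) * 1000
            # Health-related reporting: status and latency support both operations
            # dashboards and continuous operational test data collection.
            log.info("status=%s latency_ms=%.1f", response.status_code, latency_ms)
            if response.status_code != 200:
                log.warning("anomaly: non-200 health response")
        except requests.RequestException as exc:
            log.error("health check failed: %s", exc)
        time.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    poll_health()
```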

T&E for the Software Acquisition Pathway

Reference Source: DODI 5000.89 Section 4.5


The software pathway focuses on modern iterative software development techniques such as agile, lean, and development security operations (DevSecOps), which promise faster delivery of working code to the user. The goal of this software acquisition pathway is to achieve continuous integration and continuous delivery to the maximum extent possible. Integrated testing, to include contractor testing, is a critical component needed to reach this goal. Identifying integrated T&E and interoperability requirements early in test strategy development will enable streamlined integration, developmental and operational T&E, interoperability certification, and faster delivery to the field. The program acquisition strategy must clearly identify T&E requirements that have been fully coordinated with the test community.


The software pathway policy includes a requirement to create a test strategy. The program CDT or T&E lead, in collaboration with the other T&E stakeholders, should develop the test strategy and discuss the approach to developing measurable criteria derived from requirements (e.g., user features, user stories, use cases). The software pathway policy additionally requires that the identification of test platforms and infrastructure be included in the acquisition strategy; that the estimated T&E costs be included in the cost estimate; and that the technical baseline be delivered periodically, to include scripts, tools, libraries, and other software executables necessary to test the software. Taken as a whole, the test strategy for software-intensive systems should include:


  • Characterization of proposed test platforms and infrastructure, including automated testing tools and plans to accredit their use.
  • Estimated T&E costs (DT&E, OT&E, and LFT&E).
  • Description of the necessary contractor-developed artifacts (e.g., source code, test scripts), along with any relevant scheduling information, to support their efficient reuse in streamlining T&E.
  • System-level performance requirements, non-functional performance requirements, and the metrics to be used to verify that the system will meet both functional and non-functional performance requirements.
  • Key independent organizations, roles and responsibilities, and established agreements on how they will be integrated early into the developmental activities and throughout the system life cycle.
  • How automated testing, test tools, and system telemetry will support the product throughout its life cycle.

The software acquisition pathway may also be applied to embedded systems. In the case of embedded software systems, the T&E strategy requires the same six features described in Paragraph 4.5.b., plus additional features, including:

  • Approach, including resources, for testing the software in the context of the hardware with which it will eventually be integrated. This should include information on resources such as model-based environments, digital twins, and simulations, as well as plans for tests on a production-representative system (a minimal sketch follows this list).
  • Identification of any safety critical risks, along with an approach to manage them.
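To make the embedded-path requirements above concrete, here is a hedged sketch of testing software logic against a simulated stand-in for its eventual hardware; the altimeter class, warning function, and threshold are all hypothetical, and a real program would substitute its accredited models, digital twins, or simulations:

```python
# test_embedded_context.py -- illustrative pytest tests; every name here is a
# hypothetical stand-in for the program's accredited simulation environment.
class SimulatedAltimeter:
    """Minimal digital-twin stand-in for the altimeter the software will integrate with."""
    def __init__(self, altitude_ft: float):
        self.altitude_ft = altitude_ft

    def read(self) -> float:
        return self.altitude_ft

def ground_proximity_warning(sensor: SimulatedAltimeter, floor_ft: float = 500.0) -> bool:
    """Safety-critical logic under test (hypothetical)."""
    return sensor.read() < floor_ft

def test_warning_triggers_below_floor():
    assert ground_proximity_warning(SimulatedAltimeter(altitude_ft=450.0)) is True

def test_no_warning_above_floor():
    assert ground_proximity_warning(SimulatedAltimeter(altitude_ft=1200.0)) is False
```

Tests like these can run in the CI pipeline long before production-representative hardware is available, and the same assertions can later be re-run against the integrated system.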


PMs are encouraged to automate and integrate DT and OT testing to the maximum extent feasible to accelerate acquisition timelines. This includes planning for and collecting test data from contractor testing that can be used for evaluation. Just as the software product is developed incrementally and iteratively in this modern software development paradigm, so too should be the T&E activities and products, particularly the test report. To maximize the benefit of early and automated data collection opportunities, the PM must collaborate with the T&E interfaces and work through the T&E processes defined for DT&E (see Section 5) and OT&E (see Section 6) to tailor a plan that will enable the effective and efficient execution of analysis and evaluation, as well as the determination of test adequacy.

  • Automated testing should be used at the unit level, for application programming interface (API) and integration tests, and to the maximum extent possible for user acceptance and to evaluate mission effectiveness (a minimal example follows this list).
  • Automated testing tools and automated security tools should be accredited by an OTA as “fit for purpose.”
  • Cybersecurity developmental and operational T&E assessments should consider, and reference, the DoD Cybersecurity T&E Guidebook to assist in the planning and execution of cybersecurity T&E activities needed beyond just the authority to operate (which is a necessary but not sufficient mechanism to assess cybersecurity). Automation, organic to the software acquisition pathway, provides data collection opportunities to develop cybersecurity T&E assessments in an incremental and iterative fashion.
  • Information gleaned from automated tests, such as those detailed above, as well as other forms of tests, should be provided to the sponsor and user community for use in their periodic value assessments of the software product.
  • The requirement for a PM to implement continuous runtime monitoring of operational software in this software acquisition pathway provides new opportunities to support operational test data requirements throughout the system life cycle.
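As a minimal example of the unit- and API-level automation described in the first bullet, the pytest sketch below exercises a hypothetical track_fusion module and a hypothetical local service endpoint; neither is a real program artifact:

```python
# Illustrative pytest tests at the unit and API levels; the module, endpoint,
# and expected behaviors are hypothetical stand-ins, not a prescribed pattern.
import pytest
import requests

from track_fusion import merge_tracks  # hypothetical unit under test

def test_merge_preserves_track_count():
    """Unit level: merging disjoint track lists loses no data."""
    a = [{"id": 1, "lat": 10.0}]
    b = [{"id": 2, "lat": 20.0}]
    assert len(merge_tracks(a, b)) == 2

def test_merge_rejects_duplicate_ids():
    """Unit level: duplicate identifiers are a defined failure mode (assumed)."""
    a = [{"id": 1, "lat": 10.0}]
    with pytest.raises(ValueError):
        merge_tracks(a, a)

def test_tracks_api_returns_ok():
    """API level: the locally deployed service answers a basic query."""
    response = requests.get("http://localhost:8080/api/tracks", timeout=5)
    assert response.status_code == 200
```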

“We struggled with a testing and evaluation approach that was built for waterfall. To overcome this challenge, we are building automated software regression tests that are updated every time we release software, to minimize our test ‘bottleneck’ in getting capability to the end users (approx. 10% covered, targeting 60%).”

AEGIS (Section 873 Agile Pilot Program)

T&E Guidance

Reference Source: USD(A&S) Guidance


In Agile, since design-build-test activities are tightly coupled and releases are more frequent, it is desirable to automate as much testing as possible. This approach speeds delivery and can enhance quality. Agile teams typically integrate the build of automated tests into the “definition of done” for individual user stories/requirements. This approach alleviates the need for ongoing, repetitive, and labor-intensive manual testing, but may require traditional testing organizations to evolve their skillset (to understand best practices in test automation). Further, Agile testing occurs as close to build activities as possible, so the value of releasable code can be delivered quickly and issues can be resolved efficiently. This requires traditional testing and evaluation organizations (as well as other phase-gate approvers) to “shift left” their activities to better align with Agile practices, and to integrate more closely with, or embed in, program teams.
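One hedged illustration of building an automated test into the “definition of done”: the test below encodes a hypothetical user story’s acceptance criterion directly, so the story cannot close until the check passes (the story ID, endpoint, and two-second threshold are invented for the example):

```python
# A sketch of an acceptance test written alongside a user story; the story ID,
# endpoint, and timing criterion are hypothetical.
import requests

BASE_URL = "http://localhost:8080"  # local or staging build under test

def test_story_us1234_search_meets_acceptance_criterion():
    """US-1234: 'As an analyst, I can search reports and get results in under
    two seconds.' The timeout enforces the criterion; the assertions verify it."""
    response = requests.get(
        f"{BASE_URL}/api/search", params={"q": "logistics"}, timeout=2
    )
    assert response.status_code == 200
    assert response.json()["results"], "expected at least one result"
```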

In a perfect DevSecOps pipeline, three major forms of automated testing should occur:

  • Automated unit and functional testing (including integration testing)
  • Automated security testing
  • Automated application performance testing

Automated unit, functional, and security testing take place during the CI (automated build) phase of the DevSecOps pipeline. Automated application performance testing takes place in a production-like (staging/testing) environment after an initial deployment, and only if the software has met quality criteria for the first two forms of testing.

To take full advantage of the power of automated testing, programs must determine the quality criteria for testing pass/fail as early in the software development lifecycle (SDLC) as possible. The specific criteria will vary on a project-by-project basis, and establishing the criteria requires collaboration among development, security, and operations teams, as well as the designated project AO, for automated testing to be as effective as possible.
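A minimal sketch of such a quality gate is shown below, assuming earlier pipeline stages emit JSON summaries; the file names and thresholds are hypothetical stand-ins for the project-specific criteria the teams and AO would agree on:

```python
"""Illustrative CI quality gate. Assumes earlier stages wrote JSON summaries;
file names and thresholds are hypothetical project choices."""
import json
import sys

# Pass/fail thresholds agreed by development, security, and operations (and the AO).
MIN_LINE_COVERAGE = 80.0    # percent
MAX_CRITICAL_FINDINGS = 0   # from automated security testing

def gate() -> int:
    with open("unit_test_summary.json") as f:
        unit = json.load(f)
    with open("security_summary.json") as f:
        security = json.load(f)

    failures = []
    if unit["coverage_percent"] < MIN_LINE_COVERAGE:
        failures.append(f"coverage {unit['coverage_percent']}% < {MIN_LINE_COVERAGE}%")
    if security["critical_findings"] > MAX_CRITICAL_FINDINGS:
        failures.append(f"{security['critical_findings']} critical security findings")

    if failures:
        print("quality gate FAILED:", "; ".join(failures))
        return 1  # non-zero exit fails the CI build; the performance stage never runs
    print("quality gate passed; promoting build to performance testing")
    return 0

if __name__ == "__main__":
    sys.exit(gate())
```

Expressing the criteria as data, rather than burying them in scripts, keeps them easy for the development, security, and operations teams to review and adjust.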

Automated Unit and Functional Testing

Automated unit and functional testing is the first form of testing incorporated in the CI process. Once an initial form of automated security testing has been integrated into a DevSecOps pipeline, developers should implement automated functional testing. It is critical to ensure that an appropriate logging mechanism is in place to capture test results; a logging solution is crucial for pinpointing exactly where a test automation failure occurred. For web applications, a web browser automation tool is necessary to simulate the functionality that users exercise in a browser. In combination with test automation frameworks, automated functional/unit testing can be integrated into the CI process and can use predetermined quality criteria as pass/fail thresholds for CI builds.
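For instance, a browser-driven functional check might look like the hedged Selenium sketch below; the target URL and element locator are hypothetical, and results are logged so a failure can be pinpointed:

```python
# Illustrative browser-driven functional test using Selenium; the URL and
# element locator are hypothetical.
import logging

from selenium import webdriver
from selenium.webdriver.common.by import By

logging.basicConfig(filename="functional_test.log", level=logging.INFO)
log = logging.getLogger("ui-test")

def test_login_form_renders():
    driver = webdriver.Chrome()  # requires a local ChromeDriver
    try:
        driver.get("https://staging.example.mil/login")
        field = driver.find_element(By.ID, "username")
        assert field.is_displayed()
        log.info("PASS: login form rendered")  # logged so failures can be localized
    except Exception:
        log.exception("FAIL: login form check")
        raise
    finally:
        driver.quit()
```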

The amount of testing that can be automated will vary among projects. The forms of automated testing that will occur will ultimately depend on the type of software pushed through the pipeline. Automated functional testing of web applications is relatively feasible compared with testing of embedded system software because extensive testing frameworks exist for the former. For example, web applications undergo a tremendous amount of automated front end/graphical user interface (GUI) testing, while embedded systems software does not. Automated integration testing, however, must take place regardless of the type of software that is built and deployed.

Automated Security Testing

Security automation takes place at multiple phases throughout the DevSecOps pipeline. Since the software pushed through the pipeline and the pipeline itself are both living entities that mature over time, security must be treated as a state: a software application is only as secure as its most recent security tests/assessments show it to be. With every new deployment of code, the attack surface of a given software application grows, and the code must be reassessed appropriately. This applies to the application code itself, to third-party dependencies/libraries, and to the dependencies used by the underlying infrastructure. The security of all three categories must be assessed in an automated fashion as part of the DevSecOps pipeline using automated static code analysis, automated penetration testing, and automated container vulnerability scanning.

Automated security testing can be broken down into three subcategories:

  • Automated static code analysis
  • Automated penetration testing
  • Automated Docker image vulnerability scanning

Development teams must always, at a minimum, perform automated static code analysis. Conducting an automated penetration test or Docker image vulnerability scan as part of the CI process may not be necessary for software projects that are not developing web applications or using containers. Information security teams must have the proper personnel (preferably with a software engineering or computer science background) to review the reports generated by static code analysis tools. This allows security engineers and developers to have the conversations necessary to mitigate any vulnerabilities the tools discover.

Automated static code analysis is readily available via third-party plugins for standard CI services. Static code analysis can be used for web applications as well as embedded systems software, and implementing it as early as possible in the SDLC enables developers to mitigate vulnerabilities as they arise during initial development of a project’s MVP. However, integrating automated static code analysis into the CI process does not, by itself, make the implementation successful. It is imperative that the right information security personnel be assigned responsibility for reviewing the reports generated by static code analysis tools; for example, an information security professional whose primary skillset is network/perimeter security should not be tasked with reviewing a static code analysis report. Assigning the evaluation of static code analysis reports to individuals who have “taken coding classes but have never practiced the discipline as a profession” creates additional risk and could complicate the iterative feedback conversations between security and development teams.
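As one concrete possibility, a CI step could run Bandit (a widely used static analysis tool for Python code) and gate the build on high-severity findings; the source path and severity threshold below are project choices, not a mandated configuration:

```python
# Illustrative CI step running Bandit and gating on severity; "src" and the
# HIGH-severity threshold are hypothetical project choices.
import json
import subprocess
import sys

def run_static_analysis() -> int:
    # -r: recurse into the source tree; -f json: machine-readable report
    subprocess.run(
        ["bandit", "-r", "src", "-f", "json", "-o", "bandit_report.json"],
        check=False,  # Bandit exits non-zero on findings; we gate ourselves below
    )
    with open("bandit_report.json") as f:
        report = json.load(f)
    high = [r for r in report["results"] if r["issue_severity"] == "HIGH"]
    for finding in high:
        # Route findings to security engineers with a software background for review.
        print(f"{finding['filename']}:{finding['line_number']}: {finding['issue_text']}")
    return 1 if high else 0

if __name__ == "__main__":
    sys.exit(run_static_analysis())
```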

Automated penetration testing of web applications also takes place during the CI phase of the pipeline. Tool suites that provide penetration testing should be strategically placed throughout the CI build plan to provide continuous automated reports that can be assessed by information security teams. 
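A hedged example of wiring such a scan into a build step, using OWASP ZAP’s baseline scan script against a hypothetical staging URL (exit-code handling is simplified):

```python
# Illustrative invocation of OWASP ZAP's baseline scan from a CI build step;
# TARGET is hypothetical.
import subprocess
import sys

TARGET = "https://staging.example.mil"  # hypothetical staging deployment

result = subprocess.run(
    ["zap-baseline.py", "-t", TARGET, "-J", "zap_report.json"],
    check=False,
)
# ZAP returns non-zero when the scan raises failures; surface that to CI so the
# JSON report can be assessed by the information security team before promotion.
sys.exit(result.returncode)
```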

Lastly, if applicable, automated scanning of container images is more important than scanning the application code itself. Automated vulnerability scanning of images requires security and operations Information Technology (IT) teams to work in unison throughout the composition and sustainment of custom containers. Teams must also remain vigilant about the security of Infrastructure as Code scripts to prevent exploitable infrastructure from being deployed into production.
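For illustration, an image-scanning step might invoke Trivy as below; the registry/image name is hypothetical and the severity gate is a project-specific choice:

```python
# Illustrative container image scan using Trivy in a CI step; IMAGE is a
# hypothetical custom container image.
import subprocess
import sys

IMAGE = "registry.example.mil/app:latest"

result = subprocess.run(
    [
        "trivy", "image",
        "--severity", "CRITICAL,HIGH",  # gate only on serious findings
        "--exit-code", "1",             # non-zero exit fails the build
        "--format", "json",
        "--output", "trivy_report.json",
        IMAGE,
    ],
    check=False,
)
sys.exit(result.returncode)
```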

Automated Performance Testing

Finally, automated application performance testing takes place outside the CI phase of the DevSecOps pipeline. This type of rigorous testing may be the only way to detect certain performance-related bugs and issues. Application performance testing occurs after an initial deployment to a staging/test/production-like environment and exposes issues with non-functional requirements that cannot be replicated in a development environment. It is important to conduct application performance testing as soon as possible and with every deployment (e.g., on the MVP once it has initially been deployed to staging), so that performance/architectural defects can be detected and addressed in the earliest sprints possible. Availability issues discovered by application performance testing pose security issues in themselves, because they violate the availability leg of the confidentiality, integrity, and availability (CIA) triad.
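A minimal load-test sketch using Locust appears below; the host, endpoints, and task weights are hypothetical, and real pass/fail thresholds belong in the project’s quality criteria:

```python
# locustfile.py -- minimal load test with Locust; host and endpoints are
# hypothetical stand-ins for the deployed application under test.
from locust import HttpUser, task, between

class ApplicationUser(HttpUser):
    host = "https://staging.example.mil"  # production-like environment
    wait_time = between(1, 5)  # simulated think time between requests

    @task(3)
    def view_dashboard(self):
        self.client.get("/dashboard")

    @task(1)
    def run_search(self):
        self.client.get("/api/search", params={"q": "status"})
```

Run against the staging deployment (e.g., `locust -f locustfile.py`), a test like this surfaces latency and availability regressions with every deployment.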

“Pulled some testing left due to training developers on […] testing/risk management, which shortened the traditional 12-month re-authorization down to approximately 90 days.”

Information Screening and Delivery System (ISDS) (Section 873 Agile Pilot Program)