Test Levels


Test levels are groups of test activities that are organized and managed together. Each test level is an instance of the test process, consisting of the activities described in section 1.4, performed in relation to software at a given level of development, from individual units or components to complete systems or, where applicable, systems of systems. Test levels are related to other activities within the software development lifecycle. The test levels used in this syllabus are:

  • Component testing
  • Integration testing
  • System testing
  • Acceptance testing

Test levels are characterized by the following attributes:

  • Specific objectives
  • Test basis, referenced to derive test cases
  • Test object (i.e., what is being tested)
  • Typical defects and failures
  • Specific approaches and responsibilities

 

For every test level, a suitable test environment is required. In acceptance testing, for example, a production-like test environment is ideal, while in component testing the developers typically use their own development environment.


Objectives of component testing

Component testing (also known as unit or module testing) focuses on components that are separately testable. Objectives of component testing include:

  • Reducing risk
  • Verifying whether the functional and non-functional behaviors of the component are as designed and specified
  • Building confidence in the component’s quality
  • Finding defects in the component
  • Preventing defects from escaping to higher test levels

 

In some cases, especially in incremental and iterative development models (e.g., Agile) where code changes are ongoing, automated component regression tests play a key role in building confidence that changes have not broken existing components. Depending on the software development lifecycle model and the system, component testing may be done in isolation from the rest of the system, which may require mock objects, service virtualization, harnesses, stubs, and drivers.
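
Where such isolation is needed, test doubles stand in for the real collaborators. The following minimal sketch (Python’s unittest; the PaymentProcessor component and its gateway are hypothetical examples, not part of the syllabus) shows a component tested in isolation by replacing its external dependency with a mock:

    # Component test in isolation: the real gateway is replaced by a mock.
    import unittest
    from unittest.mock import Mock

    class PaymentProcessor:
        """Hypothetical component that depends on an external payment gateway."""
        def __init__(self, gateway):
            self.gateway = gateway

        def charge(self, amount):
            if amount <= 0:
                raise ValueError("amount must be positive")
            response = self.gateway.submit(amount)  # call to the external dependency
            return response["status"] == "approved"

    class PaymentProcessorTest(unittest.TestCase):
        def test_charge_approved(self):
            # The mock isolates the component from the rest of the system.
            gateway = Mock()
            gateway.submit.return_value = {"status": "approved"}
            self.assertTrue(PaymentProcessor(gateway).charge(100))
            gateway.submit.assert_called_once_with(100)

        def test_charge_rejects_non_positive_amount(self):
            with self.assertRaises(ValueError):
                PaymentProcessor(Mock()).charge(0)

    if __name__ == "__main__":
        unittest.main()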

Component testing may cover functionality (e.g., correctness of calculations), non-functional characteristics (e.g., searching for memory leaks), and structural properties (e.g., decision testing).
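
Decision testing, named above as a structural approach, requires that every decision in the code is exercised with both a true and a false outcome. A minimal sketch follows (Python; the discount function is a hypothetical example):

    # Decision (branch) testing: each decision outcome gets at least one test.
    import unittest

    def discount(order_total, is_member):
        """Hypothetical component logic containing two decisions."""
        if order_total > 100:  # decision 1
            rate = 0.10
        else:
            rate = 0.0
        if is_member:          # decision 2
            rate += 0.05
        return round(order_total * (1 - rate), 2)

    class DiscountDecisionTest(unittest.TestCase):
        def test_large_order(self):   # decision 1 true, decision 2 false
            self.assertEqual(discount(200, False), 180.00)

        def test_small_order(self):   # decision 1 false, decision 2 false
            self.assertEqual(discount(50, False), 50.00)

        def test_member_order(self):  # decision 2 true
            self.assertEqual(discount(50, True), 47.50)

    if __name__ == "__main__":
        unittest.main()

Three tests suffice here because together they cover both outcomes of both decisions.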

Test basis

Examples of work products that can be used as a test basis for component testing include:

  • Detailed design
  • Code
  • Data model
  • Component specifications


Test objects

Typical test objects for component testing include:

  • Components, units or modules
  • Code and data structures
  • Classes
  • Database modules

 

Typical defects and failures

Examples of typical defects and failures for component testing include:

  • Incorrect functionality (e.g., not as described in design specifications)
  • Data flow problems
  • Incorrect code and logic

 

Defects are typically fixed as soon as they are found, often with no formal defect management. However, when developers do report defects, this provides important information for root cause analysis and process improvement.

Objectives of integration testing

Integration testing focuses on interactions between components or systems. Objectives of integration testing include:

  • Reducing risk
  • Verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified
  • Building confidence in the quality of the interfaces
  • Finding defects (which may be in the interfaces themselves or within the components or systems)
  • Preventing defects from escaping to higher test levels

 

As with component testing, in some cases automated integration regression tests provide confidence that changes have not broken existing interfaces, components, or systems.

There are two different levels of integration testing described in this syllabus, which may be carried out on test objects of varying size as follows:

  • Component integration testing focuses on the interactions and interfaces between integrated components. Component integration testing is performed after component testing, and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process (a sketch of such a test follows this list).
  • System integration testing focuses on the interactions and interfaces between systems, packages, and microservices. System integration testing can also cover interactions with, and interfaces provided by, external organizations (e.g., web services). In this case, the developing organization does not control the external interfaces, which can create various challenges for testing (e.g., ensuring that test-blocking defects in the external organization’s code are resolved, arranging for test environments, etc.). System integration testing may be done after system testing or in parallel with ongoing system test activities (in both sequential development and iterative and incremental development).
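
As an illustration of component integration testing, the minimal sketch below (Python’s unittest; both components are hypothetical examples) exercises the real interface between two separately tested components, with no test doubles at the boundary under test:

    # Component integration test: verifies the interface contract between
    # two components that have already passed component testing.
    import unittest

    class OrderParser:
        """Component A: parses a raw order line into a structured record."""
        def parse(self, line):
            name, qty, unit_price = line.split(",")
            return {"name": name.strip(),
                    "qty": int(qty),
                    "unit_price": float(unit_price)}

    class InvoiceCalculator:
        """Component B: consumes the record produced by OrderParser."""
        def total(self, order):
            return order["qty"] * order["unit_price"]

    class OrderToInvoiceIntegrationTest(unittest.TestCase):
        def test_parsed_order_flows_into_invoice(self):
            # Checks that the field names and types produced by the parser
            # match what the calculator expects.
            order = OrderParser().parse("widget, 3, 9.50")
            self.assertEqual(InvoiceCalculator().total(order), 28.5)

    if __name__ == "__main__":
        unittest.main()

In a continuous integration pipeline, tests like this would run automatically on every change.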

 

Test basis

Examples of work products that can be used as a test basis for integration testing include:  

  • Software and system design
  • Sequence diagrams
  • Interface and communication protocol specifications
  • Use cases
  • Architecture at component or system level
  • Workflows
  • External interface definitions

 

Test objects

Typical test objects for integration testing include:

  • Subsystems
  • Databases
  • Infrastructure
  • Interfaces
  • APIs
  • Microservices

 

Typical defects and failures

Examples of typical defects and failures for component integration testing include:

  • Incorrect data, missing data, or incorrect data encoding
  • Incorrect sequencing or timing of interface calls
  • Interface mismatch
  • Failures in communication between components
  • Unhandled or improperly handled communication failures between components
  • Incorrect assumptions about the meaning, units, or boundaries of the data being passed between components
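
As a concrete illustration of this last kind of defect, the minimal sketch below (Python; the components and the meters-versus-kilometers mismatch are hypothetical) shows an integration test that makes the unit contract at the interface explicit, so an incorrect assumption about units would surface as a test failure:

    # Integration test guarding against a unit mismatch at an interface.
    import unittest

    class TripRecorder:
        """Component A: reports distance travelled in meters."""
        def distance(self):
            return 1500.0  # meters

    class FareCalculator:
        """Component B: expects distance in kilometers."""
        RATE_PER_KM = 2.0

        def fare(self, distance_km):
            return distance_km * self.RATE_PER_KM

    class FareIntegrationTest(unittest.TestCase):
        def test_fare_for_recorded_trip(self):
            # Passing the raw meter value straight through would yield a
            # fare of 3000.0 instead of 3.0; the explicit conversion pins
            # down the unit contract between the two components.
            meters = TripRecorder().distance()
            self.assertEqual(FareCalculator().fare(meters / 1000.0), 3.0)

    if __name__ == "__main__":
        unittest.main()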

Objectives of system testing

System testing focuses on the behavior and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform and the non-functional behaviors it exhibits while performing those tasks. Objectives of system testing include:

  • Reducing risk
  • Verifying whether the functional and non-functional behaviors of the system are as designed and specified
  • Validating that the system is complete and will work as expected
  • Building confidence in the quality of the system as a whole
  • Finding defects
  • Preventing defects from escaping to higher test levels or production

 

For certain systems, verifying data quality may be an objective. As with component testing and integration testing, in some cases automated system regression tests provide confidence that changes have not broken existing features or end-to-end capabilities. System testing often produces information that is used by stakeholders to make release decisions. System testing may also satisfy legal or regulatory requirements or standards.
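
A minimal sketch of such an automated system regression test follows (Python’s unittest; the base URL and the /health endpoint are assumptions for illustration, not part of the syllabus). It exercises the deployed system through its external HTTP interface, and skips rather than fails when no system is reachable:

    # System-level smoke/regression test against a deployed system.
    import unittest
    import urllib.request

    BASE_URL = "http://localhost:8080"  # assumed test-environment address

    class SystemSmokeTest(unittest.TestCase):
        def setUp(self):
            # Skip instead of failing when the system under test is down.
            try:
                with urllib.request.urlopen(BASE_URL + "/health", timeout=2):
                    pass
            except OSError:  # urllib.error.URLError is a subclass of OSError
                self.skipTest("system under test is not reachable")

        def test_health_endpoint_reports_ok(self):
            with urllib.request.urlopen(BASE_URL + "/health", timeout=5) as resp:
                self.assertEqual(resp.status, 200)

    if __name__ == "__main__":
        unittest.main()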

The test environment should ideally correspond to the final target or production environment.

Test basis

Examples of work products that can be used as a test basis for system testing include:

  • System and software requirement specifications (functional and non-functional)
  • Risk analysis reports
  • Use cases
  • Epics and user stories
  • Models of system behavior
  • State diagrams
  • System and user manuals

 

Test objects

Typical test objects for system testing include:

  • Applications
  • Hardware/software systems
  • Operating systems
  • System under test (SUT)
  • System configuration and configuration data

 

Typical defects and failures

Examples of typical defects and failures for system testing include:

  • Incorrect calculations
  • Incorrect or unexpected system functional or non-functional behavior
  • Incorrect control and/or data flows within the system
  • Failure to properly and completely carry out end-to-end functional tasks
  • Failure of the system to work properly in the production environment(s)
  • Failure of the system to work as described in system and user manuals

Objectives of acceptance testing

Acceptance testing, like system testing, typically focuses on the behavior and capabilities of a whole system or product. Objectives of acceptance testing include:

  • Establishing confidence in the quality of the system as a whole
  • Validating that the system is complete and will work as expected
  • Verifying that functional and non-functional behaviors of the system are as specified

Acceptance testing may produce information to assess the system’s readiness for deployment and use by the customer (end-user). Defects may be found during acceptance testing, but finding defects is often not an objective, and finding a significant number of defects during acceptance testing may in some cases be considered a major project risk. Acceptance testing may also satisfy legal or regulatory requirements or standards.

Common forms of acceptance testing include the following:

  • User acceptance testing
  • Operational acceptance testing
  • Contractual and regulatory acceptance testing
  • Alpha and beta testing

 

User acceptance testing (UAT)

The acceptance testing of the system by users is typically focused on validating the fitness for use of the system by intended users in a real or simulated operational environment. The main objective is building confidence that the users can use the system to meet their needs, fulfill requirements, and perform business processes with minimum difficulty, cost, and risk.

Operational acceptance testing (OAT)

The acceptance testing of the system by operations or systems administration staff is usually performed in a (simulated) production environment. The tests focus on operational aspects, and may include:

  • Testing of backup and restore (a minimal sketch follows this list)
  • Installing, uninstalling and upgrading
  • Disaster recovery
  • User management
  • Maintenance tasks
  • Data load and migration tasks
  • Checks for security vulnerabilities
  • Performance testing

The main objective of operational acceptance testing is building confidence that the operators or system administrators can keep the system working properly for the users in the operational environment, even under exceptional or difficult conditions.

Contractual and regulatory acceptance testing

Contractual acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Contractual acceptance testing is often performed by users or by independent testers.

Regulatory acceptance testing is performed against any regulations that must be adhered to, such as government, legal, or safety regulations. Regulatory acceptance testing is often performed by users or by independent testers, sometimes with the results being witnessed or audited by regulatory agencies.

The main objective of contractual and regulatory acceptance testing is building confidence that contractual or regulatory compliance has been achieved.

Alpha and beta testing

Alpha and beta testing are typically used by developers of commercial off-the-shelf (COTS) software who want to get feedback from potential or existing users, customers, and/or operators before the software product is put on the market. Alpha testing is performed at the developing organization’s site, not by the development team, but by potential or existing customers, operators, or an independent test team. Beta testing is performed by potential or existing customers and/or operators at their own locations. Beta testing may come after alpha testing, or may occur without any preceding alpha testing.

One objective of alpha and beta testing is building confidence among potential or existing customers, and/or operators that they can use the system under normal, everyday conditions, and in the operational environment(s) to achieve their objectives with minimum difficulty, cost, and risk. Another objective may be the detection of defects related to the conditions and environment(s) in which the system will be used, especially when those conditions and environment(s) are difficult to replicate by the development team.

 
