Test Analysis and Design

Duration: 390 minutes

Keywords: acceptance criteria, acceptance test-driven development, black-box test technique, boundary value analysis, branch coverage, checklist-based testing, collaboration-based test approach, coverage, coverage item, decision table testing, equivalence partitioning, error guessing, experience-based test technique, exploratory testing, state transition testing, statement coverage, test technique, white-box test technique

Learning Objectives for Chapter 4:

Test Techniques Overview

Black-box Test Techniques

White-box Test Techniques

Experience-based Test Techniques

Collaboration-based Test Approaches

4.1. Test Techniques Overview

Test techniques support the tester in test analysis (what to test) and in test design (how to test). They help to develop a relatively small, but sufficient, set of test cases in a systematic way. Test techniques also assist the tester in defining test conditions, identifying coverage items, and selecting test data during test analysis and design. For more details, see the ISO/IEC/IEEE 29119-4 standard.

In this syllabus, test techniques are classified into three categories:

1. Black-box Test Techniques

(Also known as specification-based techniques)

These techniques are based on the analysis of the specified behavior of the test object without considering its internal structure. Therefore, the test cases are independent of how the software is implemented. Even if the implementation changes but the required behavior remains the same, the test cases will still be relevant.

2. White-box Test Techniques

(Also known as structure-based techniques)

These techniques rely on analyzing the internal structure and processing of the test object. Since the test cases depend on how the software is designed, they can only be created after the design or implementation of the test object.

3. Experience-based Test Techniques

These techniques leverage the knowledge and experience of testers to design and implement test cases. Their effectiveness heavily depends on the tester’s skills. These techniques can detect defects that may be missed by black-box and white-box techniques, making them complementary to the other two methods.

4.2. Black-Box Test Techniques

The commonly used black-box test techniques discussed in the following sections are equivalence partitioning, boundary value analysis, decision table testing, and state transition testing.

Equivalence Partitioning

Equivalence Partitioning (EP) divides data into partitions (called equivalence partitions) where all elements within a partition are processed similarly by the test object. If a defect is detected in one value from a partition, it is assumed that other values in the same partition would yield the same defect.

EP can be applied to inputs, outputs, configuration items, time-related values, or interface parameters. Partitions must be non-overlapping and non-empty. They can be valid (expected to be processed correctly) or invalid (expected to be ignored or rejected). To achieve 100% EP coverage, all partitions must be tested at least once.
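The partitioning idea above can be sketched in code. This is a minimal example under assumptions of my own: the ticket_category function and its age partitions are illustrative, not from the syllabus.

```python
# Equivalence-partitioning sketch. The ticket_category function and its
# age partitions are illustrative assumptions.

def ticket_category(age: int) -> str:
    """Valid partitions: 0-12 "child", 13-64 "adult", 65-120 "senior"."""
    if not 0 <= age <= 120:
        raise ValueError("age out of range")
    if age <= 12:
        return "child"
    if age <= 64:
        return "adult"
    return "senior"

# One representative value per partition: three valid partitions plus two
# invalid ones (below 0 and above 120). Exercising all five partitions at
# least once gives 100% EP coverage for this model.
for value, expected in [(7, "child"), (30, "adult"), (80, "senior")]:
    assert ticket_category(value) == expected

for invalid in (-1, 130):
    try:
        ticket_category(invalid)
        raise AssertionError("value from an invalid partition was accepted")
    except ValueError:
        pass  # rejected as expected
```

Note that five test values suffice here: any other value from the same partition is assumed to be processed the same way.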

Boundary Value Analysis

Boundary Value Analysis (BVA) focuses on the boundary values of partitions, since defects often occur at boundaries. This syllabus covers two versions of BVA: 2-value BVA, in which the coverage items for each boundary are the boundary value and its closest neighbor in the adjacent partition, and 3-value BVA, in which the coverage items are the boundary value and both of its neighbors.

Coverage is measured as the number of boundary values exercised divided by the total number of identified boundary values.
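As a sketch of 2-value BVA, the example below uses a hypothetical 0–120 valid age range; the function and range are illustrative assumptions.

```python
# 2-value BVA sketch. The valid age range [0, 120] is an illustrative
# assumption; its boundaries are 0 and 120.

def is_valid_age(age: int) -> bool:
    return 0 <= age <= 120

# For each boundary, 2-value BVA exercises the boundary value and its
# closest neighbour in the adjacent partition: -1 and 0 at the lower
# boundary, 120 and 121 at the upper one.
boundary_checks = {-1: False, 0: True, 120: True, 121: False}

exercised = {value: is_valid_age(value) for value in boundary_checks}
assert exercised == boundary_checks
# 4 of 4 identified coverage items exercised -> 100% 2-value BVA coverage.
```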

Decision Table Testing

Decision tables represent complex logic (e.g., business rules) by mapping conditions and actions. Each column represents a unique combination of conditions and their corresponding actions. Coverage is measured by testing all feasible columns at least once.

Notation used in decision tables: for conditions, "T" (true) and "F" (false) indicate that the condition is or is not satisfied, and "–" indicates that the condition's value is irrelevant to the action; for actions, "X" indicates that the action should occur and a blank indicates that it should not.

While decision tables are effective for identifying gaps or contradictions, they can become large if there are many conditions. Minimization or a risk-based approach may be used to reduce the complexity.
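A small decision table can be represented directly in code. The loan-approval rule below, with two conditions and one action, is an illustrative assumption.

```python
# Decision-table sketch for a hypothetical loan-approval rule: two
# conditions (is_member, within_limit) and one action (approve). Each
# key/value pair corresponds to one column of the table, with True/False
# standing in for the usual "T"/"F" notation.

DECISION_TABLE = {
    (True,  True):  True,   # column 1: action occurs (the "X" entry)
    (True,  False): False,  # column 2
    (False, True):  False,  # column 3
    (False, False): False,  # column 4
}

def approve_loan(is_member: bool, within_limit: bool) -> bool:
    return is_member and within_limit

# Exercising every feasible column at least once gives 100% decision
# table coverage for this model.
for conditions, expected_action in DECISION_TABLE.items():
    assert approve_loan(*conditions) == expected_action
```

With more conditions the table grows exponentially (2^n columns for n binary conditions), which is exactly why minimization or a risk-based selection of columns becomes necessary.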

State Transition Testing

State transition diagrams model system behavior by showing states and valid transitions. A transition occurs in response to an event, which may trigger specific actions. State tables provide an equivalent model by listing states and events in rows and columns.

Coverage criteria for state transition testing include all states coverage, valid transitions coverage, and all transitions coverage.

In all states coverage, the coverage items are the states. Valid transitions coverage (also called 0-switch coverage) takes the valid transitions as coverage items; achieving it also guarantees all states coverage. All transitions coverage takes every transition shown in the state table, valid and invalid, as a coverage item; it is the most rigorous criterion and should be required for mission-critical software.
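A transition table and a test sequence over it can be sketched as follows; the door model with its two states and two events is an illustrative assumption.

```python
# State-transition sketch for a hypothetical door: states "open" and
# "closed", events "open_door" and "close_door". The table maps
# (state, event) to the next state; pairs not listed are invalid
# transitions.

TRANSITIONS = {
    ("closed", "open_door"): "open",
    ("open", "close_door"): "closed",
}

def next_state(state: str, event: str) -> str:
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")
    return TRANSITIONS[(state, event)]

# A test sequence exercising every valid transition once achieves 100%
# valid transitions coverage (and so visits every state) for this model.
state = "closed"
for event in ("open_door", "close_door"):
    state = next_state(state, event)
assert state == "closed"
```

All transitions coverage would additionally attempt the invalid pairs, e.g. the event "open_door" while already in state "open", and check that they are rejected.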

4.3. White-Box Test Techniques

This section focuses on two common code-related white-box test techniques, chosen for their popularity and simplicity: statement testing and branch testing.

More rigorous techniques exist for safety-critical, mission-critical, or high-integrity environments, but these are beyond the scope of this syllabus. Similarly, some white-box techniques used in higher test levels or for non-code-related coverage (e.g., neural network testing) are also not discussed here.

Statement Testing and Statement Coverage

In statement testing, the goal is to design test cases that exercise executable statements in the code. Coverage is measured as the number of exercised statements divided by the total number of executable statements, expressed as a percentage.

100% statement coverage ensures that every executable statement has been tested at least once. However, it may not detect all defects, such as those dependent on specific data values or certain conditions (e.g., division by zero). Additionally, statement coverage does not guarantee that all decision logic (e.g., branches) has been exercised.
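The data-dependent limitation can be shown concretely. The average function below is an illustrative assumption.

```python
# Sketch of a defect that 100% statement coverage does not detect: the
# failure depends on a specific data value, not on an unexecuted
# statement. The average function is an illustrative assumption.

def average(total: float, count: int) -> float:
    return total / count  # defect: no guard against count == 0

# This single test exercises every executable statement (100% statement
# coverage), yet average(10.0, 0) would still raise ZeroDivisionError.
assert average(10.0, 2) == 5.0
```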

Branch Testing and Branch Coverage

A branch represents a control transfer between two nodes in the control flow of a program. Branch testing aims to design test cases that exercise all branches, including both unconditional and conditional branches.

Coverage is measured as the number of branches exercised divided by the total number of branches, expressed as a percentage. Achieving 100% branch coverage ensures that all branches (e.g., true/false conditions in “if...then” statements) have been tested, but like statement testing, branch testing may not uncover all defects.

Branch coverage subsumes statement coverage, meaning that if 100% branch coverage is achieved, 100% statement coverage is also achieved (but not vice versa).
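The subsumption relation can be illustrated with an "if" that has no "else"; the apply_discount function is an illustrative assumption.

```python
# Sketch of why branch coverage subsumes statement coverage. The
# apply_discount function is an illustrative assumption.

def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        price = price - 10.0  # flat member discount, illustrative
    return price

# This one test executes every statement, so statement coverage is 100% ...
assert apply_discount(100.0, True) == 90.0

# ... but only the "true" branch of the if statement was taken: branch
# coverage is 50% (1 of 2). A second test is needed to reach 100%:
assert apply_discount(100.0, False) == 100.0
```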

The Value of White-Box Testing

White-box testing has the strength of examining the entire software implementation, allowing for defect detection even when specifications are incomplete or outdated. However, white-box testing may miss defects caused by requirements omissions, as it focuses on what has been implemented, not what should have been implemented.

White-box techniques are useful in static testing (e.g., reviewing non-executable code or pseudocode) and provide an objective measure of coverage. This allows for generating additional tests to increase coverage and confidence in the software.

4.4. Experience-based Test Techniques

The commonly used experience-based test techniques discussed in the following sections are error guessing, exploratory testing, and checklist-based testing.

Error Guessing

Error guessing is a technique used to anticipate errors, defects, and failures based on the tester’s knowledge, including knowledge of how the application has worked in the past, the types of errors the developers tend to make and the defects that result from them, and the types of failures that have occurred in other, similar applications.

Errors, defects, and failures may involve, for example: input (e.g., correct input not accepted), output (e.g., wrong result or format), logic (e.g., missing cases), computation (e.g., incorrect operand or operation), interfaces (e.g., parameter mismatch), or data (e.g., incorrect initialization).

Fault attacks are a systematic approach to error guessing. They involve creating or acquiring lists of possible errors, defects, and failures, and designing tests to expose them. These lists can be based on experience, historical defect data, or general knowledge about software failures.

See references such as Whittaker (2002, 2003) and Andrews (2006) for more details on error guessing and fault attacks.
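A fault attack can be mechanized as a reusable list of "usual suspect" inputs run against a function. The parse_quantity function and the fault list below are illustrative assumptions.

```python
# Fault-attack sketch: a reusable list of inputs, each targeting a common
# defect class (empty input, whitespace, negatives, zero, non-numeric,
# huge values, wrong type). Function and list are illustrative.

FAULT_LIST = ["", "   ", "-1", "0", "abc", "1e99", None]

def parse_quantity(raw) -> int:
    """Parse a positive order quantity, rejecting anything else."""
    if not isinstance(raw, str):
        raise ValueError("quantity must be a string")
    text = raw.strip()
    if not text.isdigit():
        raise ValueError("quantity must be a non-negative integer")
    value = int(text)
    if value == 0:
        raise ValueError("quantity must be positive")
    return value

# Attack the function with every entry on the list; each must be
# rejected cleanly rather than crash or be silently accepted.
for suspect in FAULT_LIST:
    try:
        parse_quantity(suspect)
        raise AssertionError(f"fault-list value {suspect!r} was accepted")
    except ValueError:
        pass  # rejected as expected
```

Such a list grows over time: every defect found in production is a candidate for a new entry.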

Exploratory Testing

In exploratory testing, tests are simultaneously designed, executed, and evaluated as the tester learns about the test object. It allows the tester to delve deeper into the application and identify untested areas.

Exploratory testing can be structured using session-based testing, in which the testing is conducted within a defined time-box (the session). In this approach, the tester uses a test charter containing test objectives to guide the testing, documents the steps followed and the discoveries made in a test session sheet, and discusses the results with stakeholders in a debriefing after the session.

Exploratory testing is valuable when specifications are incomplete or time is limited. It complements formal testing techniques and benefits from experienced testers with analytical skills, curiosity, and creativity (see section 1.5.1).

Other testing techniques, such as equivalence partitioning, can also be incorporated into exploratory testing. For more details, refer to Kaner (1999), Whittaker (2009), and Hendrickson (2013).

Checklist-based Testing

In checklist-based testing, the tester uses a checklist to guide the design, implementation, and execution of tests. Checklists can be developed from experience, user requirements, or insights into common software failures.

Checklist items should be specific, actionable, and phrased as questions. They may cover both functional and non-functional aspects of the test object (e.g., a usability checklist).

Checklists should evolve over time to reflect new defects and prevent developers from repeating the same mistakes. However, they should be kept concise to maintain their effectiveness (Gawande 2009).

In the absence of detailed test cases, checklist-based testing provides guidelines and some degree of consistency while retaining flexibility. However, because checklist items are high-level, different testers may interpret them differently, so coverage may be broader but repeatability is reduced.
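The question-per-item structure can be sketched in code. The login-form model and the checklist items below are illustrative assumptions.

```python
# Checklist-based testing sketch: items phrased as specific, actionable
# questions, each paired with a check against a hypothetical login-form
# model. Form fields and items are illustrative assumptions.

form = {
    "password_masked": True,
    "error_message": "Invalid credentials",
    "username_max_len": 50,
}

checklist = [
    ("Is the password field masked?",
     lambda f: f["password_masked"]),
    ("Does the error message avoid revealing which field was wrong?",
     lambda f: "password" not in f["error_message"].lower()),
    ("Is the username length limited to 50 characters or fewer?",
     lambda f: f["username_max_len"] <= 50),
]

results = {question: bool(check(form)) for question, check in checklist}
assert all(results.values())
```

In practice the checks are often performed manually; the point is that each item is concrete enough to be answered yes or no.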

4.5. Collaboration-based Test Approaches

Each of the above-mentioned techniques (see sections 4.2, 4.3, 4.4) has a particular objective with respect to defect detection. Collaboration-based approaches, however, focus on both defect detection and avoidance through collaboration and communication.

Collaborative User Story Writing

A user story represents a feature that is valuable to either a user or a purchaser. User stories have three critical aspects, known as the "3 C’s" (Jeffries 2000): the Card (the medium describing the story), the Conversation (an explanation of how the software will be used, which can be documented or verbal), and the Confirmation (the acceptance criteria).

The typical format for a user story is: “As a [role], I want [goal to be accomplished], so that I can [resulting business value for the role].” This is followed by the acceptance criteria.

Collaborative authorship of user stories involves techniques such as brainstorming and mind mapping. The goal is to align the team on a shared vision by considering perspectives from business, development, and testing.

A good user story follows the INVEST criteria: Independent, Negotiable, Valuable, Estimable, Small, and Testable. If stakeholders struggle to define tests for a story, it may indicate that the story lacks clarity, value, or that stakeholders need assistance with testing (Wake 2003).

Acceptance Criteria

Acceptance criteria are the conditions that must be met for a user story to be accepted by stakeholders. They act as test conditions and are usually derived from the Conversation aspect of the story (see section 4.5.1).

Acceptance criteria are used to define the scope of the user story, reach consensus among stakeholders, describe both positive and negative scenarios, serve as a basis for the user story’s acceptance testing, and allow accurate planning and estimation.

Two common formats for acceptance criteria are the scenario-oriented format (e.g., the Given/When/Then format used in behavior-driven development) and the rule-oriented format (e.g., a bullet point list of verifiable conditions, or a table mapping inputs to expected outcomes).

Although most acceptance criteria fit one of these formats, teams may choose any custom format as long as the criteria are well-defined and unambiguous.
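A scenario-oriented criterion maps naturally onto an executable test, with the Given/When/Then steps as comments. The Cart class below is an illustrative assumption.

```python
# A Given/When/Then acceptance criterion expressed as an executable test.
# The Cart class is an illustrative assumption.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item: str) -> None:
        self.items.append(item)

    def total_items(self) -> int:
        return len(self.items)

def test_adding_an_item_updates_the_count():
    # Given an empty shopping cart
    cart = Cart()
    # When the customer adds one item
    cart.add("book")
    # Then the cart contains exactly one item
    assert cart.total_items() == 1

test_adding_an_item_updates_the_count()
```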

Acceptance Test-driven Development (ATDD)

ATDD is a test-first approach (see section 2.1.3) where test cases are created before the implementation of a user story. These test cases are developed collaboratively by team members from different perspectives, such as customers, developers, and testers (Adzic 2009). The tests can be executed manually or automated.

The ATDD process begins with a specification workshop, where the team discusses and refines the user story and its acceptance criteria. This ensures clarity and resolves ambiguities or defects. Next, the team creates test cases based on the acceptance criteria. These test cases provide examples of expected behavior and help ensure correct implementation.

Test design may incorporate techniques from sections 4.2, 4.3, and 4.4. Typically, the first test cases are positive, confirming correct behavior without exceptions or error conditions; once the positive path is covered, the team writes negative tests and tests covering non-functional quality characteristics (e.g., performance efficiency, usability).

Test cases must align with the user story’s scope and not exceed it. Each test case should focus on a distinct characteristic to avoid redundancy. Test cases are often expressed in natural language and contain preconditions, inputs, and postconditions.

When test cases are documented in a format supported by a test automation framework, developers can automate the tests alongside feature development. This way, acceptance tests act as executable requirements, ensuring that features meet stakeholder expectations.