As part of our Software Tester Course, you will learn various testing terminologies aligned to ISTQB standards. Referring to the ISTQB Glossary, we have compiled and logically grouped the terms that are significant and most likely to be used in workplaces where ISTQB standards are practiced.
Testing: The process consisting of all life cycle activities, both static and dynamic, concerned
with planning, preparation and evaluation of software products and related work products
to determine that they satisfy specified requirements, to demonstrate that they are fit for
purpose and to detect defects.
Dynamic testing: Testing that involves the execution of the software of a component or system.
Static testing: Testing of a component or system at specification or implementation level
without execution of that software, e.g. reviews.
Black box testing: Testing, either functional or non-functional, without reference to the
internal structure of the component or system.
Test Process: The fundamental test process comprises test planning and control, test analysis
and design, test implementation and execution, evaluating exit criteria and reporting, and
test closure activities.
Test Level: A group of test activities that are organized and managed together. Examples of test levels are component test, integration test, system test and acceptance test.
Component testing: The testing of individual software components.
Component integration testing: Testing performed to expose defects in the interfaces and
interaction between integrated components.
Integration testing: Testing performed to expose defects in the interfaces and in the
interactions between integrated components or systems.
System testing: The process of testing an integrated system to verify that it meets specified requirements.
User Acceptance Testing = Acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria
and to enable the user, customers or other authorised entity to determine whether or not to
accept the system.
Alpha testing: (internal acceptance testing) Operational testing by potential users or an
independent test team at the developers’ site, but outside the development organisation.
Beta testing: (external acceptance testing) Operational testing by potential users at an external
site not otherwise involved with the developers, to determine whether or not a component
or system satisfies the user needs and fits within the business processes.
Test Type: A group of test activities aimed at testing a component or system focused on a
specific test objective.
Functional requirement: A requirement that specifies a function that a component or system must perform.
Functionality: The capability of the software product to provide functions which meet stated
and implied needs.
Functional testing: Testing based on an analysis of the specification of the functionality of a
component or system.
Non-functional testing: Testing the attributes of a component or system that do not relate to
functionality, e.g. reliability, efficiency, usability, maintainability and portability.
Re-testing: Testing that runs test cases that failed the last time they were run, in order to
verify the success of corrective actions.
Regression testing: Testing of a previously tested program following modification to ensure
that defects have not been introduced or uncovered in unchanged areas of the software, as a
result of the changes made. It is performed when the software or its environment is changed.
Smoke test: A subset of all defined/planned test cases that cover the main functionality of a
component or system, to ascertain that the most crucial functions of a program work,
but not bothering with finer details. A daily build and smoke test is among industry best practices.
Usability testing: Testing to determine the extent to which the software product is
understood, easy to learn, easy to operate and attractive to the users under specified conditions.
Security: Attributes of a software product that bear on its ability to prevent unauthorized
access, whether accidental or deliberate, to programs and data.
Security testing: Testing to determine the security of the software product.
Monkey testing: Testing by means of a random selection from a large range of inputs and by
randomly pushing buttons, ignorant of how the product is being used.
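A crude sketch of the idea in Python, with a hypothetical `handle_input` function standing in for the product under test; the only check is that random input never causes a crash:

```python
import random
import string

def handle_input(text):
    # Hypothetical function under test; a robust implementation should
    # never raise on arbitrary input.
    return text.strip().lower()

# Feed random strings, ignorant of how the product is meant to be used,
# and only verify that nothing crashes.
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 50)))
    handle_input(junk)
print("no crashes observed")
```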
Negative testing: Tests aimed at showing that a component or system does not work.
Negative testing is related to the testers’ attitude rather than a specific test approach or test
design technique, e.g. testing with invalid input values or exceptions.
Load testing: A type of performance testing conducted to evaluate the behavior of a
component or system with increasing load, e.g. numbers of parallel users and/or numbers
of transactions, to determine what load can be handled by the component or system.
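As a rough sketch of increasing load, the following Python snippet drives growing numbers of parallel users against a stubbed `transaction` function; a real load test would target the actual system and also measure response times:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    # Stand-in for a real request against the system under test.
    time.sleep(0.01)
    return True

# Increase the number of parallel users step by step and observe throughput.
for users in (1, 10, 50):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: transaction(), range(users * 10)))
    elapsed = time.perf_counter() - start
    print(f"{users} users: {len(results)} transactions in {elapsed:.2f}s")
```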
Performance: The degree to which a system or component accomplishes its designated
functions within given constraints regarding processing time and throughput rate.
Performance testing: The process of testing to determine the performance of a software product.
Stress testing: A type of performance testing conducted to evaluate a system or component at
or beyond the limits of its anticipated or specified work loads, or with reduced availability
of resources such as access to memory or servers.
Maintenance testing: Testing the changes to an operational system or the impact of a
changed environment to an operational system.
Operational Acceptance Testing: Operational testing in the acceptance test phase, typically
performed in a (simulated) operational environment by operations and/or systems
administration staff focusing on operational aspects, e.g. recoverability, resource-behavior,
installability and technical compliance.
Test Condition = test requirement: An item or event of a component or system that could be verified by one or
more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
Test item: The individual element to be tested. There usually is one test object and many test
items.
Feature: An attribute of a component or system specified or implied by requirements.
Testability: The capability of the software product to enable modified software to be tested.
Test object: The component or system to be tested.
Deliverable: Any (work) product that must be delivered to someone other than the (work) product's author.
Test Policy: A high level document describing the principles, approach and major objectives
of the organisation regarding testing.
Test Strategy: A high-level description of the test levels to be performed and the testing within
those levels for an organisation or programme (one or more projects).
Requirements-based testing: An approach to testing in which test cases are designed based
on test objectives and test conditions derived from requirements, e.g. tests that exercise
specific functions or probe non-functional attributes such as reliability or usability.
Risk-based testing: An approach to testing to reduce the level of product risks and inform
stakeholders of their status, starting in the initial stages of a project. It involves the
identification of product risks and the use of risk levels to guide the test process.
Session-based test management: A method for measuring and managing session-based
testing, e.g. exploratory testing.
Session-based testing: An approach to testing in which test activities are planned as
uninterrupted sessions of test design and execution, often used in conjunction with
exploratory testing.
Test Plan: A document describing the scope, approach, resources and schedule of intended
test activities.
Level test plan: A test plan that typically addresses one test level.
Master test plan: A test plan that typically addresses multiple test levels.
Work Breakdown Structure: An arrangement of work elements and their relationship to
each other and to the end product.
Entry criteria: The set of generic and specific conditions for permitting a process to go
forward with a defined task, e.g. test phase.
Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders,
for permitting a process to be officially completed.
Suspension criteria: The criteria used to (temporarily) stop all or a portion of the testing
activities on the test items.
Resumption criteria: The testing activities that must be repeated when testing is re-started
after a suspension.
Test Approach: The implementation of the test strategy for a specific project. It typically
includes the decisions made that follow based on the (test) project’s goal and the risk
assessment carried out, starting points regarding the test process, the test design techniques
to be applied, exit criteria and test types to be performed.
Test Cycle: Execution of the test process against a single identifiable release of the test object.
Manual Testing: The process of executing test cases by hand, without the use of test
automation tools. (Note: Non-ISTQB definition)
Test Automation: The use of special software to control test execution and to compare
actual results with expected results. (Note: Non-ISTQB definition)
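As a minimal illustration, the sketch below uses Python's standard `unittest` framework to control test execution and compare actual results with expected results; the `add` function is a hypothetical object under test:

```python
import unittest

def add(a, b):
    # Hypothetical function under test.
    return a + b

class AddTests(unittest.TestCase):
    def test_add(self):
        # The framework controls execution and compares the actual
        # result with the expected result automatically.
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```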
Exhaustive testing: A test approach in which the test suite comprises all combinations of
input values and preconditions.
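The following sketch shows why exhaustive testing is rarely practical: even three inputs with modest value ranges (the ranges here are invented for illustration) already produce tens of thousands of combinations:

```python
from itertools import product

# Three input fields with modest, invented value ranges explode
# combinatorially, which is why exhaustive testing is rarely practical.
ages = range(0, 130)       # 130 values
countries = range(0, 250)  # 250 values (hypothetical country codes)
plans = ("free", "basic", "pro")

combinations = list(product(ages, countries, plans))
print(len(combinations))  # 97,500 test cases for just three inputs
```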
Test Case: A set of input values, execution preconditions, expected results and execution
post conditions, developed for a particular objective or test condition, such as to exercise a
particular program path or to verify compliance with a specific requirement.
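To make the parts of this definition concrete, the sketch below captures a test case as structured data; the field names mirror the definition above and are illustrative, not a prescribed ISTQB format:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Field names follow the definition above; they are illustrative only.
    objective: str
    preconditions: list = field(default_factory=list)
    inputs: dict = field(default_factory=dict)
    expected_result: str = ""
    postconditions: list = field(default_factory=list)

tc = TestCase(
    objective="Verify login with valid credentials",
    preconditions=["user account exists"],
    inputs={"username": "admin", "password": "secret"},
    expected_result="user is redirected to the dashboard",
    postconditions=["active session recorded"],
)
print(tc.objective)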
High level test case: A test case without concrete (implementation level) values for input data
and expected results.
Low level test case: A test case with concrete (implementation level) values for input data and expected results.
Expected result: The behaviour predicted by the specification, or another source, of the
component or system under specified conditions.
Test = Test Set = Test Suite: A set of several test cases for a component or system under test, where the post
condition of one test is often used as the precondition for the next one.
Test Coverage: The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test suite.
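The percentage in this definition is simply the number of coverage items exercised divided by the total number of items; a tiny sketch with invented counts:

```python
# Coverage = (coverage items exercised / total coverage items) * 100.
# The item counts below are made up purely for illustration.
total_items = 200      # e.g. branches identified in the code
exercised_items = 150  # branches hit by the test suite

coverage = exercised_items / total_items * 100
print(f"coverage: {coverage:.1f}%")  # coverage: 75.0%
```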
Traceability: The ability to identify related items in documentation and software, such as
requirements with associated tests.
Test Environment = Test Bed: An environment containing hardware, instrumentation, simulators,
software tools, and other support elements needed to conduct a test.
Test Data: Data that exists (for example, in a database) before a test is executed, and that
affects or is affected by the component or system under test.
Test Input: The data received from an external source by the test object during test execution.
The external source can be hardware, software or human.
Test Design: The process of transforming general testing objectives into tangible test conditions and test cases.
Test Design technique: Procedure used to derive and/or select test cases.
Equivalence partitioning: A black box test design technique in which test cases are designed
to execute representatives from equivalence partitions. In principle test cases are designed
to cover each partition at least once.
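A short sketch of the technique: for a hypothetical `is_adult` function that accepts ages 0 to 120, the inputs fall into four partitions, and in principle one representative value per partition suffices:

```python
def is_adult(age):
    # Hypothetical function under test: ages 0-120 are valid, adult from 18.
    if not 0 <= age <= 120:
        raise ValueError("age out of range")
    return age >= 18

# One representative value per equivalence partition.
partitions = {
    "invalid (negative)": -5,
    "valid minor (0-17)": 10,
    "valid adult (18-120)": 40,
    "invalid (> 120)": 200,
}
for name, value in partitions.items():
    try:
        print(name, "->", is_adult(value))
    except ValueError as exc:
        print(name, "->", exc)
```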
Boundary value analysis: A black box test design technique in which test cases are designed
based on boundary values.
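A companion sketch to the previous one: for the hypothetical rule "valid ages are 0 to 120", boundary value analysis selects values at and just beyond each boundary:

```python
# For a hypothetical rule "valid ages are 0 to 120", boundary value
# analysis picks values at each boundary and just beyond it.
boundaries = [-1, 0, 1, 119, 120, 121]
for age in boundaries:
    valid = 0 <= age <= 120
    print(f"age={age:>4}: {'valid' if valid else 'invalid'}")
```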
Decision table: A table showing combinations of inputs and/or stimuli (causes) with their
associated outputs and/or actions (effects), which can be used to design test cases.
Decision table testing: A black box test design technique in which test cases are designed to
execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
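A small sketch: each row of a decision table for a hypothetical discount rule combines input conditions (causes) with the expected effect, and each row becomes one test case:

```python
# Decision table for a hypothetical discount rule: each row pairs input
# conditions (causes) with the expected action (effect).
decision_table = [
    # (is_member, order_over_100, expected_discount)
    (True,  True,  0.20),
    (True,  False, 0.10),
    (False, True,  0.05),
    (False, False, 0.00),
]

def discount(is_member, order_over_100):
    # Hypothetical implementation under test.
    if is_member and order_over_100:
        return 0.20
    if is_member:
        return 0.10
    return 0.05 if order_over_100 else 0.00

# Each table row is executed as one test case.
for is_member, over_100, expected in decision_table:
    actual = discount(is_member, over_100)
    assert actual == expected, (is_member, over_100, actual, expected)
print("all decision-table cases pass")
```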
Error guessing: A test design technique where the experience of the tester is used to
anticipate what defects might be present in the component or system under test as a result
of errors made, and to design tests specifically to expose them.
Experience-based test design technique: Procedure to derive and/or select test cases based
on the tester’s experience, knowledge and intuition.
Mind-map: A diagram used to represent words, ideas, tasks, or other items linked to and
arranged around a central key word or idea.
Test Basis: All documents from which the requirements of a component or system can be
inferred. The documentation on which the test cases are based. If a document can be
amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.
Specification: A document that specifies, ideally in a complete, precise and verifiable manner,
the requirements, design, behavior, or other characteristics of a component or system, and,
often, the procedures for determining whether these provisions have been satisfied.
Use Case: A sequence of transactions in a dialogue between an actor and a component or
system with a tangible result, where an actor can be a user or anything that can exchange
information with the system.
Use Case Testing = Scenario testing: A black box test design technique in which test cases are designed to
execute scenarios of use cases.
Data flow: An abstract representation of the sequence and possible changes of the state of
data objects, where the state of an object is any of: creation, usage, or destruction.
Test Procedure = test script: A document specifying a sequence of actions for the execution
of a test.
Test Phase: A distinct set of test activities collected into a manageable phase of a project, e.g.
the execution activities of a test level.
Test session: An uninterrupted period of time spent in executing tests. In exploratory testing,
each test session is focused on a charter, but testers can also explore new opportunities or
issues during a session. The tester creates and executes test cases on the fly and records their progress.
Daily build: A development activity where a complete system is compiled and linked every
day (usually overnight), so that a consistent system is available at any time including all
latest changes.
Test Execution: The process of running a test on the component or system under test,
producing actual results.
Ad hoc testing: Testing carried out informally; no formal test preparation takes place, no
recognised test design technique is used.
Exploratory testing: An informal test design technique where the tester actively controls the
design of the tests as those tests are performed and uses information gained while testing to
design new and better tests.
Scripted testing: Test execution carried out by following a previously documented sequence
of tests.
Test Log: A chronological record of relevant details about the execution of tests.
Test Run: Execution of a test on a specific version of the test object.
Test Fail: A test is deemed to fail if its actual result does not match its expected result.
Test Pass: A test is deemed to pass if its actual result matches its expected result.
Error: A human action that produces an incorrect result.
Bug = Defect = deviation = incident: A flaw in a component or system that can cause the component
or system to fail to perform its required function, e.g. an incorrect statement or data definition.
Failure: Deviation of the component or system from its expected delivery, service or result. A defect, if encountered during execution, may cause a failure of the component or system.
Defect management tool = defect tracking tool: A tool that facilitates the recording and status tracking of defects and changes.
Defect report: A document reporting on any flaw in a component or system that can cause the
component or system to fail to perform its required function.
Severity: The degree of impact that a defect has on the development or operation of a
component or system.
Priority: The level of (business) importance assigned to an item, e.g. defect.
False-positive result: A test result in which a defect is reported although no such defect actually
exists in the test object.
False-negative result: A test result which fails to identify the presence of a defect that is actually
present in the test object.
Metric: A measurement scale and the method used for measurement.
Quality: The degree to which a component, system or process meets specified requirements
and user needs.
Quality assurance: Part of quality management focused on providing confidence that quality
requirements will be fulfilled.
Quality gate: A special milestone in a project. Quality gates are located between those phases
of a project strongly depending on the outcome of a previous phase. A quality gate
includes a formal check of the documents of the previous phase.
Validation: Confirmation by examination and through provision of objective evidence that
the requirements for a specific intended use or application have been fulfilled.
Verification: Confirmation by examination and through provision of objective evidence that
specified requirements have been fulfilled.
Review: An evaluation of a product or project status to ascertain discrepancies from planned
results and to recommend improvements. Examples include management review, informal
review, technical review, inspection, and walkthrough.
Peer review: A review of a software work product by colleagues of the producer of the
product for the purpose of identifying defects and improvements. Examples are inspection,
technical review and walkthrough.
Inspection: A type of peer review that relies on visual examination of documents to detect
defects, e.g. violations of development standards and non-conformance to higher level
documentation. The most formal review technique and therefore always based on a documented procedure.
Walkthrough: A step-by-step presentation by the author of a document in order to gather
information and to establish a common understanding of its content.
Cost of quality: The total costs incurred on quality activities and issues and often split into
prevention costs, appraisal costs, internal failure costs and external failure costs.
Root cause analysis: An analysis technique aimed at identifying the root causes of defects.