ISTQB F.L. 1.3 - 1.5
ISTQB Foundation Level Glossary
| Question | Answer |
|---|---|
| A test approach in which the test suite comprises all combinations of input values and preconditions. | Exhaustive Testing |
| The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. | Exit Criteria |
| Any event occurring that requires investigation. | Incident |
| Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed. | Regression Testing |
| All documents from which the requirements of a component or system can be inferred. | Test Basis |
| An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element. | Test Condition |
| The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite. | Coverage - Test Coverage |
| Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test. | Test Data |
| The process of running a test on the component or system under test, producing actual results. | Test Execution |
| A chronological record of relevant details about the execution of tests. | Test Log |
| A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and any risks requiring contingency planning. | Test Plan |
| A high-level document describing the principles, approach and major objectives of the organization regarding testing. | Test Policy |
| A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects). | Test Strategy |
| The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, the exit criteria and the test types to be performed. | Test Approach |
| A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. | Test Control |
| A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned. | Test Monitoring |
| A set of input values, execution preconditions, expected results and execution postconditions developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement (see the code sketch after this table). | Test Case |
| A document specifying the test conditions (coverage items) for a test item, the detailed test approach and the associated high-level test cases. | Test Design Specification |
| A document specifying a sequence of actions for the execution of a test. | Test Procedure Specification |
| A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one. | Test Suite |
| Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions. | Retesting - Confirmation Testing |
| A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. | Test Summary Report |
| Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. | Testware |
| Separation of responsibilities, which encourages the accomplishment of objective testing. | Independence |
| A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them. | Error Guessing |
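Several of the terms above (test case, test condition, test data, test execution, test suite, regression and confirmation testing) map directly onto automated-test code. The following is a minimal sketch, assuming pytest; the `add()` function and `TEST_DATA` names are hypothetical and only serve to illustrate how the glossary terms appear in practice.

```python
import pytest


def add(a, b):
    """Hypothetical component under test."""
    return a + b


# Test data: input values paired with expected results, existing before execution.
TEST_DATA = [
    (1, 2, 3),
    (-1, 1, 0),
    (0, 0, 0),
]


@pytest.mark.parametrize("a, b, expected", TEST_DATA)
def test_add(a, b, expected):
    # Each parametrized case is a test case: input values, preconditions and an
    # expected result developed for one test condition (the add() function).
    actual = add(a, b)          # test execution produces the actual result
    assert actual == expected   # comparison against the expected result


# Running `pytest` collects these cases into a test suite. Rerunning the whole
# suite after the software changes is regression testing; rerunning only the
# cases that previously failed, to verify a fix, is confirmation testing
# (retesting).
```

Running `pytest -v` executes the suite and produces a chronological record of each case's outcome, which corresponds to the test log described above.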