ISTQB F.L. 1.1 - 2.2
ISTQB Foundation Level Glossary
Question | Answer |
---|---|
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the __________ criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. | Acceptance testing |
Simulated or actual operational testing by potential users/customers or an independent test team at the developer's site, but outside the development organization. _____ _______ is often employed for off-the-shelf software as a form of internal acceptance testing. | Alpha testing |
Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. | Beta testing |
A flaw in a component or system that can cause the component or system to fail to perform its required function. | Bug - Defect - Fault |
The testing of individual software ___________. | Component testing |
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite. | Coverage - Test Coverage |
The process of finding, analyzing and removing the causes of failures in software. | Debugging |
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system (see the stub/driver sketch after this table). | Driver |
A human action that produces an incorrect result. | Error - Mistake |
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them. | Error Guessing |
A test approach in which the test suite comprises all combinations of input values and preconditions. | Exhaustive Testing |
The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. | Exit Criteria |
Deviation of the component or system from its expected delivery, service or result. | Failure |
A requirement that specifies a function that a component or system must perform. | Functional requirement |
Any event occurring that requires investigation. | Incident |
A development life cycle where a project is broken into a series of __________. | Incremental development model |
Separation of responsibilities, which encourages the accomplishment of objective testing. | Independence |
The process of combining components or systems into larger assemblies. | Integration |
Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. | Integration testing |
A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability. | Non-functional requirement |
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format. | Off-the-shelf software |
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. | Quality |
Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed. | Regression Testing |
A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. | Requirement |
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions. | Retesting - Confirmation Testing |
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. | Review |
A factor that could result in future negative consequences; usually expressed as impact and likelihood. | Risk |
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions. | Robustness |
Testing to determine the __________ of the software product. | Robustness testing |
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component (see the stub/driver sketch after this table). | Stub |
The process of testing an integrated ______ to verify that it meets specified requirements. | System testing |
The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed. | Test Approach |
All documents from which the requirements of a component or system can be inferred. | Test Basis |
A set of input values, execution preconditions, expected results and execution postconditions developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement (a worked example follows this table). | Test Case |
An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element. | Test Condition |
A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. | Test Control |
Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test. | Test Data |
A document specifying the test conditions (coverage items) for a test item, the detailed test approach and the associated high-level test cases. | Test Design Specification |
A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases (see the TDD sketch after this table). | Test driven development |
An ___________ containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. | Test environment |
The process of running a test on the component or system under test, producing actual results. | Test Execution |
A group of test activities that are organized and managed together. | Test level |
A chronological record of relevant details about the execution of tests. | Test Log |
A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned. | Test Monitoring |
A reason or purpose for designing and executing a test. | Test Objective |
A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. | Test Plan |
A high-level document describing the principles, approach and major objectives of the organization regarding testing. | Test Policy |
A document specifying a sequence of actions for the execution of a test. | Test Procedure Specification |
A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects). | Test Strategy |
A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one. | Test Suite |
A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. | Test Summary Report |
The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects. | Testing |
Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. | Testware |
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. | Validation |
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. | Verification |
A framework to describe the software development life cycle activities from requirements specification to maintenance. | V-model |
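To make the Driver and Stub entries above concrete, here is a minimal Python sketch. All names (`PaymentService`, `Gateway`, `charge`) are hypothetical illustrations, not part of the glossary: the stub replaces the *called* dependency with a canned answer, while the driver takes care of *calling* the component under test.

```python
# Hypothetical stub/driver sketch for the glossary entries above.

class PaymentService:
    """Component under test: depends on a gateway that it calls."""
    def __init__(self, gateway):
        self.gateway = gateway

    def pay(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)  # call to the dependency


class GatewayStub:
    """Stub: skeletal replacement for the called component."""
    def charge(self, amount):
        return "approved"  # canned result instead of a real charge


def driver():
    """Driver: handles the control and calling of the component under test."""
    service = PaymentService(GatewayStub())
    assert service.pay(10) == "approved"
    print("stub/driver check passed")


if __name__ == "__main__":
    driver()
```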
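The Test Case entry lists four ingredients: input values, execution preconditions, expected results, and execution postconditions. A minimal self-contained sketch, using Python's standard `unittest` and a hypothetical `discount` function, shows where each ingredient lives:

```python
import unittest

def discount(price, percent):
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (100 - percent) / 100


class DiscountTestCase(unittest.TestCase):
    def test_ten_percent_off(self):
        # Input values: price=200.0, percent=10; expected result: 180.0.
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        # Expected result for an invalid input: a ValueError is raised.
        with self.assertRaises(ValueError):
            discount(200.0, 150)


if __name__ == "__main__":
    unittest.main()
```

The boundary-style second test also illustrates Error Guessing: a value just outside the valid range is a classic place to anticipate a defect.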
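Finally, the test-driven development entry describes a cycle rather than a tool, so the sketch below (hypothetical `slugify` example) simply annotates the red/green/refactor steps in order:

```python
import unittest

# Step 1 (red): write the test before the implementation exists;
# running it now would fail, since slugify is not yet written.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")


# Step 2 (green): write just enough code to make the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")


# Step 3 (refactor): improve the implementation while the test stays green.

if __name__ == "__main__":
    unittest.main()
```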