The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.
The behavior produced or observed when a component or system is tested. Part of the test case.
Testing to verify that a delivered product or service meets the acceptance criteria and satisfies the needs of its users. The focus is on “customer” requirements, which may be defined by business analysts, managers, users, customers, and others.
Verifying that a product works for all audiences and excludes no one due to disability or hardware/software limitations.
An iterative method of software development that emphasizes evolutionary, feedback-driven programming with rapid and continuous testing. This approach aims to make small design improvements based on customer feedback as early as possible, so that design problems are corrected before the code becomes too complicated to change.
An initial test phase that is limited in scope, time, and/or number of participants, and focuses primarily on internal functionality. It is usually conducted by developers or other members of the development team; users outside this group typically participate in beta testing instead.
A requirement that has more than one possible interpretation. The intended meaning must be determined, either by consulting the original author/stakeholder or by testing the feature.
API (application program interface):
A set of routines, protocols, and tools for building application software.
Checkpoints in a program that test certain conditions and are expected to be true or false as per the requirements.
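The idea can be sketched in Python; the `apply_discount` function and its limits below are invented for illustration, not taken from any particular system:

```python
def apply_discount(price: float, percent: float) -> float:
    # Preconditions checked with assertions, per the (hypothetical) requirements
    assert price >= 0, "price must be non-negative"
    assert 0 <= percent <= 100, "percent must be between 0 and 100"
    result = price * (1 - percent / 100)
    # Postcondition: applying a discount never increases the price
    assert result <= price
    return result
```

If any assertion is false at runtime, the program halts at that checkpoint instead of silently continuing with bad data.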
A belief or condition upon which an argument, plan, or action is based.
An inspection or other method of determining whether a system’s security matches its policy.
Any tests performed by software tools or script-based programs without manual intervention, as opposed to tests performed by human testers (manual testing).
A chronological account of all the tests, changes, and bug fixes that have been applied to a program. This is useful in tracking backward through source code files or other versions so that the entire evolution of a program can be reconstructed.
Ad hoc testing:
Informal testing performed without test analysis and test design.
A procedure determining whether a person or a process is, in fact, who or what it is declared to be.
Permission given to a user or process to access resources.
An integration testing strategy in which every component of a system is assembled and tested together; contrasts with other integration testing strategies in which system components are integrated one at a time.
Black box testing:
Testing, either functional or non-functional, without reference to the internal structure of the component or system.
An integration testing strategy in which components are integrated one by one from the lowest level of the system architecture; contrast with big-bang integration and top-down integration.
An input value or output value that is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example, the minimum or maximum value of a range.
Boundary value analysis:
A black box test design technique that tests input or output values that are on the edge of what is allowed or at the smallest incremental distance on either side of an edge. For example, an input field that accepts text between 1 and 10 characters has six boundary values: 0, 1, 2, 9, 10, and 11 characters.
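The six boundary values from the example above can be sketched directly, assuming a hypothetical `accepts` validator for a 1–10 character field:

```python
def accepts(text: str) -> bool:
    # Hypothetical validator: the field accepts 1 to 10 characters
    return 1 <= len(text) <= 10

# Boundary values for a 1..10 character field: 0, 1, 2, 9, 10, 11
cases = {0: False, 1: True, 2: True, 9: True, 10: True, 11: False}
results = {n: accepts("x" * n) for n in cases}
```

Comparing `results` against `cases` exercises both edges of the valid range and the nearest invalid values on either side.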
A testing standards document containing a glossary of testing terms. BS stands for ‘British Standard’.
A testing standards document that describes the testing process, primarily focusing on component testing. BS stands for ‘British Standard’.
A slang term for fault, defect, or error. A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do or do something it shouldn’t, causing a failure. Defects in software, systems, or documents may result in failures, but not all defects do so.
A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process.
Basis test set:
A set of test cases derived from the internal structure or specification to ensure that 100% of a specified coverage criterion is achieved.
The response of a component or system to a set of input values and preconditions.
A superior method or innovative practice that contributes to the improved performance of an organization or a system in a given context, usually recognized as ‘best’ by peer organizations.
Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in order to acquire feedback from the market.
Blocked test case:
A test case that cannot be executed because the preconditions for its execution are not fulfilled.
The percentage of branches that have been exercised by a test case. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
A white box test design technique in which test cases are designed to execute branches.
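A tiny illustration, using an invented `classify` function: two test cases are enough to execute both branches.

```python
def classify(n: int) -> str:
    if n < 0:                 # the branch under test
        return "negative"
    return "non-negative"

# Two test cases give 100% branch coverage of classify:
assert classify(-1) == "negative"       # exercises the True branch
assert classify(0) == "non-negative"    # exercises the False branch
```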
Any bug that prevents a program or its parts from working, either partially or entirely.
Bug tracking system:
A computer program in which defects or other issues in a program are identified and recorded. Also called issue tracking systems, defect tracking systems.
A collection of software modules, program files, and documentation derived from the source code by a development team, used for testing or verification at some point in the life cycle. In addition to compiled binary code, a build may also include other deliverables such as white papers, design documents, test plans, or release notes.
Bug Life Cycle:
A defect goes through several stages. It starts when it is first identified and ends when it has been resolved.
A software development technique that involves checking collected bug reports for duplicates or invalid entries, and resolving these before any new ones are entered.
Bug Triage Meeting:
An event during software development led by a QA manager or test lead, used to prioritize new defects found in production or testing environments, re-prioritize existing defects, and perform bug scrubbing. The purpose of the bug triage meeting is to organize defects in order of importance so that the most severe bugs can be fixed first.
Build verification test:
A software test that is designed to ensure that a properly built program will execute without unexpected results, and may be used as a final check before the program is distributed.
behavior-driven development (BDD):
A collaborative approach to development in which the team is focusing on delivering the expected behavior of a component or system for the customer, which forms the basis for testing.
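One way to picture the Given/When/Then shape of BDD in plain Python (tools such as behave or pytest-bdd formalize this; the scenario, names, and discount rule here are invented):

```python
def test_checkout_applies_member_discount():
    # Given a logged-in member with an item in the cart
    cart = {"items": [{"price": 100}], "member": True}
    # When the total is calculated
    total = sum(item["price"] for item in cart["items"])
    if cart["member"]:
        total *= 0.9
    # Then the member discount is applied
    assert total == 90.0

test_checkout_applies_member_discount()
```

The test reads as a statement of expected behavior first, implementation detail second.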
A simpler form of the test case, often merely a document with short test instructions (“one-liners”). An advantage of checklists is that they are easy to develop. A disadvantage is that they are less structured than test cases. Checklists can complement test cases well. In exploratory testing, checklists are often used instead of test cases.
A generic term for analysis methods that measure the proportion of code in a system that is executed by testing. Expressed as a percentage, for example, 90% code coverage.
Test level that evaluates the smallest elements of the system. Also known as a unit test, program test, and module test.
The process of ensuring that software will work with other systems or a specific platform. Compatibility testing is usually conducted either manually by performing tests on the software using different computer platforms, or automatically by simulating and running tests in various environments.
The conditions that must be met for a particular phase, activity, or type of testing to be considered complete. Completion criteria are usually documented in some way.
Component integration testing:
Testing that determines whether components interact correctly when combined. It is usually performed after all components have been successfully coded and tested individually, but before system testing.
A testing method that relies on domain knowledge and heuristics to derive test cases.
The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.
The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system.
The degree to which a website or web application can function across different browsers and degrade gracefully when browser features are absent or lacking.
An automated software development procedure that merges, integrates, and tests all changes as soon as they are committed.
An approach that involves a process of testing early, testing often, testing everywhere, and automating to obtain feedback on the business risks associated with a software release candidate as rapidly as possible.
The sequence in which operations are performed by a business process, component, or system.
context of use:
Users, tasks, equipment (hardware, software, and materials), and the physical and social environments in which a software product is used.
The degree to which a component or system ensures that data are accessible only to those authorized to have access.
The practice of determining how a security attack has succeeded and assessed the damage caused.
A white-box test technique in which test cases are designed to exercise outcomes of atomic conditions.
Testing to determine the compliance of the component or system.
Adherence of a work product to standards, conventions, or regulations in laws and similar prescriptions.
A type of testing initiated by a modification to a component or system.
A test automation approach in which inputs to the test object are recorded during manual testing to generate automated test scripts that can be executed later.
Testing to ensure that an application can handle the full load it was designed to handle.
A flaw in a component or system that can cause the component or system to fail to perform its required function. A defect, if encountered during execution, may cause a failure of the component or system.
A document that is used to report a defect in a component, system, or document. Also known as an incident report.
Any product that must be delivered to someone other than the author of the product. Examples of deliverables are documentation, code, and the system.
Any reference within one product to another for its proper execution and/or successful completion. In software applications, it usually refers to a requirement upon another module or program that must be satisfied before the given module or program can function correctly.
A test review technique that provides for the systematic examination of a document against its requirements or using some other objective standard. Each requirement is reviewed by one or more reviewers who consider it from two perspectives:
– Did the author correctly understand and apply the requirements?
– Was the document written in accordance with procedures, standards, style guides, etc.?
An individual who is knowledgeable and experienced in a particular application area. Such individuals may provide information on the specific requirements for a given component or system; they may also be asked to participate in the testing process, either by serving as product testers themselves or by providing written feedback on test design techniques and results.
decision table testing:
A black-box test technique in which test cases are designed to exercise the combinations of conditions and the resulting actions shown in a decision table.
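A sketch with an invented discount rule: each row of the decision table becomes one test case covering a distinct combination of conditions.

```python
# Decision table for a hypothetical discount rule.
# Conditions: is_member, order_over_100 -> action: discount %
decision_table = [
    # (is_member, order_over_100, expected_discount)
    (True,  True,  15),
    (True,  False, 10),
    (False, True,  5),
    (False, False, 0),
]

def discount(is_member: bool, order_over_100: bool) -> int:
    if is_member:
        return 15 if order_over_100 else 10
    return 5 if order_over_100 else 0

for is_member, over_100, expected in decision_table:
    assert discount(is_member, over_100) == expected
```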
The number of defects per unit size of a work product.
The process of recognizing, recording, classifying, investigating, resolving, and disposing of defects.
A list of categories designed to identify and classify defects.
denial of service (DoS):
A security attack that is intended to overload the system with requests such that legitimate requests cannot be served.
Testing that involves the execution of the test item.
A temporary component or tool that replaces another component and controls or calls a test item in isolation.
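A rough Python illustration: a stub stands in for a payment gateway dependency (all names hypothetical) so the test item, `charge_order`, can be exercised in isolation.

```python
class PaymentGatewayStub:
    """Temporary replacement for the real gateway; returns a canned response."""
    def charge(self, amount):
        return {"status": "ok", "amount": amount}

def charge_order(gateway, amount):
    # The test item under test, isolated from the real payment service
    response = gateway.charge(amount)
    return response["status"] == "ok"

assert charge_order(PaymentGatewayStub(), 42) is True
```

The assertion at the bottom plays the role of a minimal driver calling the test item.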
A scripting technique that uses data files to contain the test data and expected results needed to execute the test scripts.
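A minimal data-driven sketch using Python's standard csv module; the data “file” is inlined here so the example is self-contained, but in practice it would be an external file maintained separately from the script.

```python
import csv
import io

# Test data and expected results live in a data file, not in the script.
data_file = io.StringIO("a,b,expected\n2,3,5\n10,-4,6\n0,0,0\n")

def add(a, b):
    return a + b

# The same script runs once per data row; adding cases needs no code change.
for row in csv.DictReader(data_file):
    assert add(int(row["a"]), int(row["b"])) == int(row["expected"])
```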
The protection of personally identifiable information or otherwise sensitive information from undesired disclosure.
Testing used to verify that an application’s flow, from start to finish, behaves as expected. This technique can be used to identify system dependencies and confirm the integrity of data transfer across different system components.
Criteria that must be met before you can initiate testing, such as that the test cases and test plans are complete.
A test design technique that is based on the fact that data in a system is managed in classes, such as intervals. Because of this, you only need to test a single value in every equivalence class. For example, you can assume that a calculator performs all addition operations in the same way; so if you test one addition operation, you have tested the entire equivalence class.
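Sketched with an invented shipping rule that has three equivalence classes, testing one representative value per class:

```python
def shipping_cost(weight_kg: float) -> int:
    # Hypothetical rule with three equivalence classes:
    #   0 < w <= 1  -> 5,   1 < w <= 10 -> 8,   w > 10 -> 20
    if weight_kg <= 1:
        return 5
    if weight_kg <= 10:
        return 8
    return 20

# One representative value per equivalence class is enough:
assert shipping_cost(0.5) == 5
assert shipping_cost(5) == 8
assert shipping_cost(25) == 20
```

Boundary value analysis would then add tests at the class edges (1, 10, and their neighbors).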
A human action that produces an incorrect result.
Experience-based test design technique where the tester develops test cases based on his/her skill and intuition, and experience with similar systems and technologies.
A test approach in which you test all possible inputs and outputs.
Criteria that must be fulfilled for testing to be considered complete, such as that all high-priority test cases are executed, and that no open high-priority defect remains. Also known as completion criteria.
A description of the test object’s expected status or behaviour after the test steps are completed. Part of the test case.
An experience-based test design technique that is based on the tester’s experience; the tester creates the tests while he/she gets to know the system and executes the tests.
The capability of the software product to provide appropriate performance, relative to the number of resources used under stated conditions.
A test case that is designed to exercise code handling the exceptional or “fringe” conditions of a system.
A hardware or software system that duplicates the functionality of another system.
A large user story that cannot be delivered as defined within a single iteration or is large enough that it can be split into smaller user stories.
Testing to determine the stability of a system under a significant load over a significant period of time within the system’s operational context. Also known as soak testing.
Testing based on the tester’s experience, knowledge, and intuition.
A security tester using hacker techniques.
The process of encoding information so that only authorized parties can retrieve the original information, usually by means of a specific decryption key or process.
Deviation of the component or system under test from its expected result.
A review that proceeds according to a documented review process that may include, for example, review meetings, formal roles, required preparation steps, and goals. Inspection is an example of a formal review.
Testing of the system’s functionality and behaviour; the opposite of non-functional testing.
A requirement that specifies a function that a component or system must perform.
Testing of an application’s ability to allocate additional resources, such as extra CPU or servers, and to switch to a backup system during a failure.
A report that a program condition is not present when it actually exists. False-negative errors are sometimes called ‘missed errors’.
A report that a program condition is present when it actually does not exist. This type of error may result from an incorrect configuration or testing environment, or from a false assumption made by the program itself in interpreting its results. Error messages about incorrectly formatted input data, for example, may result in a false-positive error if the program contains incorrect assumptions about the format and nature of its inputs.
A discrepancy (incorrectness or incompleteness) between a planned or expected condition and an actual occurrence, such as failure of equipment, resources, or people to perform as intended.
A distinct capability of a software product that provides value in the context of a given business process.
The ability of a software product to adapt to potential changes in business requirements.
The degree to which a component or system operates as intended despite the presence of hardware or software faults.
Testing that uses a combination of white box and black box techniques to test and debug a system whose code the tester has limited knowledge of.
A specification language for describing the expected behavior of the software application.
Glass box testing:
Testing that examines program internal structures and processes to detect errors and determine when, where, why, and how they occurred.
Testing that verifies the functionality of a Graphical User Interface.
High-level test case:
A test case without concrete (implementation level) values for input data and expected results.
The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification, and test procedure specification).
A fault or failure resulting from an incorrect application of information, lack of appropriate knowledge, training, and skill on the part of personnel; misuse of equipment; improper installation, operation, and maintenance; carelessness, or negligence.
A person or organization who is actively involved in security attacks, usually with malicious intent.
An international standard for test documentation published by the IEEE. The full name of the standard is IEEE Standard for Software Test Documentation. It includes templates for the test plan, various test reports, and handover documents.
Techniques that help assess the impact of a change. Used to determine the choice and extent of regression tests needed.
A review that isn’t based on a formal procedure.
An example of a formal review technique.
A type of test meant to assess whether the system meets the requirements for installation and uninstallation. This could include verifying that the correct files are copied to the machine and that a shortcut is created in the application menu.
A test level meant to show that the system’s components work with one another. The goal is to find problems in interfaces and communication between components.
A development cycle consisting of a number of phases, from the formulation of requirements to the delivery of part of an IT system. Common phases are analysis, design, development, and testing. The practice of working in iterations is called iterative development.
A type of review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher-level documentation. The most formal review technique and therefore always based on a documented procedure.
The process of testing to determine the interoperability of a software product.
Testing that verifies an application’s ability to work correctly across multiple regions and cultures.
A method for managing projects and workflow. Each project or task is represented as a card that is moved through columns, with the progress being tracked by an electronic board.
A meeting held at the start of a project to determine goals and objectives for the participants; each sprint should also have one at the start. All participants need to be present, as the meeting can be used to create a project schedule, receive progress updates from team members, and serve as a status report for upper management.
A type of performance testing conducted to evaluate the behaviour of a component or system with increasing load, e.g. numbers of concurrent users and/or numbers of transactions. Used to determine what load can be handled by the component or system.
Testing that verifies an application has been correctly adapted (localized) for a specific target audience. It takes place only on the localized version of the application.
A measure of how easy a given piece of software code is to modify in order to correct defects, improve or add functionality.
A defect in a program’s dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.
A measurement scale and the method used for measurement.
A point in time in a project at which defined (intermediate) deliverables and results should be ready.
The leader and the main person responsible for an inspection or other review process.
A software tool or hardware device that runs concurrently with the component or system under test and supervises, records, and/or analyses the behavior of the component or system.
A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
Testing in which a human tester executes the test cases manually, using logical reasoning and inference to seek out bugs or issues in the software application.
Static analysis aiming to detect and remove malicious code received at an interface.
The standard for creating names for variables, functions, and other parts of a program. For example, strName, sName, and Name are all technically valid names for a variable, but if you don’t adhere to one structure as the standard, maintenance will be very difficult.
A type of testing intended to show that the system works well even if it is not used correctly. For example, if a user enters text in a numeric field, the system should not crash.
Testing of non-functional aspects of the system, such as usability, reliability, maintainability, and performance.
An enhancement to a software product that has not previously been implemented.
A form of licensing in which software is offered free of charge.
The result after a test case has been executed.
The capability of the software product to enable the user to operate and control it.
Hardware and software products installed at users’ or customers’ sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.
A variable (whether stored within a component or outside) that is written by a component.
An instance of an output.
Test approach where two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, work together to find defects. Typically, they share one computer and trade control of it while testing. One tester can act as an observer when the other performs tests.
A test to evaluate whether the system meets performance requirements such as response time or transaction frequency.
A test aimed to show that the test object works correctly in normal situations. For example, a test to show that the process of registering a new customer functions correctly when using valid test data.
Environmental and state conditions that must be fulfilled after a test case or test run has been executed.
Environmental and state conditions that must be fulfilled before the component or system can be tested. Also known as prerequisites.
Decision rules used to determine whether a test item (function) or feature has passed or failed a test.
The process of testing to determine the portability of a software product.
The level of (business) importance assigned to an item, e.g. defect.
Project test plan:
A test plan that typically addresses multiple test types, test levels, resources, and action plans.
PO (Product Owner):
A role on the scrum team that owns the product and is responsible for managing its development.
Parallel testing involves running tests at the same time, usually in different environments or on different computers. This allows defects to be identified faster and gives you a higher chance of finding them before release.
A method of quality assurance where developers test the work of other developers.
POC stands for Proof of Concept. This is a quick prototype to determine the feasibility or suitability of the product. A POC allows you to test out your idea, concept, solution, or code quickly and in an inexpensive way before making any major changes or investments in it.
A type of testing to ensure that the release is performed correctly and the application can be deployed.
A testing technique aiming to exploit security vulnerabilities (known or unknown) to gain unauthorized access.
The degree to which a component, system, or process meets specified requirements and/or user/customer needs and expectations.
Quality assurance (QA):
Systematic monitoring and evaluation of various aspects of a component or system to maximize the probability that minimum standards of quality are being attained.
A test activity generally conducted in conjunction with each new release of the system, in order to detect defects that were introduced (or discovered) when prior defects were fixed.
A new version of the system under test. The release can be either an internal release from developers to testers or a release of the system to the client.
A set of activities geared to create new versions of the complete system. Each release is identified by a distinct version number.
A type of non-exhaustive test performed when the system is installed in a new target environment, using a small set of test cases to validate critical functions without going into depth on any one of them. Also called smoke testing.
A document defining what a product or service is expected to do, including functional and non-functional requirements, specifications, and acceptance criteria.
A test to verify that a previously-reported defect has been corrected. Also known as confirmation testing.
A meeting at the end of a project/a sprint during which the team members evaluate the work and learn lessons that can be applied to the next project or sprint.
A static test technique in which the reviewer reads a text in a structured way in order to find defects and suggest improvements. Reviews may cover requirements documents, test documents, code, and other materials, and can range from informal to formal.
A factor that could result in future negative consequences. It’s usually expressed in terms of impact and likelihood.
A structured approach in which test cases are chosen based on risks. Test design techniques like boundary value analysis and equivalence partitioning are risk-based.
A document identifying test items, their configuration, current status, and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase.
The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.
The process of testing to determine the reliability of a software product.
An underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.
An activity that attempts to uncover any weaknesses in a fail-safe system, that if activated, will result in the program or system continuing its operations without loss of data or functionality.
The capability that a system or module can be used as part of another system.
A technique for decreasing the load on a system in a measurable and controlled way.
A technique for increasing the load on a system in a measurable and controlled way.
An integration testing strategy in which the system is integrated both top-down and bottom-up simultaneously. Can save time, but is complex.
A component of non-functional testing, used to measure the capability of software to scale up or down in terms of its non-functional characteristics.
A sequence of activities performed in a system, such as logging in, signing up a customer, ordering products, and printing an invoice. You can combine test cases to form a scenario, especially at higher test levels.
An iterative, incremental framework for project management commonly used with agile software development.
The degree of impact that a defect has on the development or operation of a component or system.
State transition testing:
A test design technique in which a system is viewed as a series of states, valid and invalid transitions between those states, and inputs and events that cause changes in state.
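A toy order state machine (states and events invented) showing tests for both valid and invalid transitions:

```python
# Valid transitions of a hypothetical order workflow.
TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("new", "cancel"): "cancelled",
}

def next_state(state, event):
    # None signals an invalid transition
    return TRANSITIONS.get((state, event))

assert next_state("new", "pay") == "paid"        # valid transition
assert next_state("paid", "ship") == "shipped"   # valid transition
assert next_state("shipped", "pay") is None      # invalid transition
```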
Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.
Testing meant to assess how the system reacts to workloads (network, processing, data volume) that exceed the system’s specified requirements. Stress testing shows which system resource (e.g. memory or bandwidth) is first to fail.
System integration testing:
A test level designed to evaluate whether a system can be successfully integrated with other systems. May be included as part of system-level testing, or be conducted as its own test level in between system testing and acceptance testing.
Test level aimed at testing the complete integrated system. Both functional and non-functional tests are conducted.
The person who records each defect mentioned, and any suggestions for improvement, on a logging form during a review meeting. The scribe has to ensure that the logging form is readable and understandable.
Testing to determine the security of the software product.
A device, computer program, or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.
A subset of all defined/planned test cases that covers the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test are among industry best practices.
A testing process to ensure that, at the outset of software development, the product works. It’s often used by managers or customers when they believe there is little risk of introducing major defects into a program and it can reduce the time needed for later, more thorough testing.
A list of commands that can be used to control the execution of the program being tested.
Testing of a software application’s ability to handle sudden increases or decreases in load.
A defect in a program that prevents it from operating at all. A showstopper is so serious that no testing can be done until it is fixed.
The process of writing programs that perform test steps and verify the result.
The documentation on which test cases are based.
A structured test script that describes how a function or feature should be tested, including test steps, expected results, preconditions, and postconditions.
Information that completes the test steps in a test case e.g. what values to input. In a test case where you add a customer to the system, the test data might be the customer’s name and address. Test data might exist in a separate test data file or in a database.
A development approach in which developers write test cases before writing any code.
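The red/green rhythm in miniature, with an invented `slugify` function:

```python
# Step 1 (red): write the test before the code exists; it fails at first.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

test_slugify()  # passes once the implementation is in place
```

A third refactoring step, keeping the test green, typically completes the cycle.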
The technical environment in which the tests are conducted, including hardware, software, and test tools. Documented in the test plan and/or test strategy.
The process of running test cases on the test object.
A group of test activities organized and carried out together in order to meet stated goals. Examples of levels of testing are component, integration, system, and acceptance test.
A document that describes testing activities in chronological order.
The part or aspects of the system to be tested. Might be a component, subsystem, or the system as a whole.
A document describing what should be tested by whom, when, how, and why. The test plan is bounded in time, describing system testing for a particular version of a system.
A document that describes how an organization runs its testing processes at a high level. It may contain a description of test levels according to the chosen life cycle model, roles and responsibilities, required/expected documents, etc.
The complete set of testing activities, from planning through to completion. The test process is usually described in the test policy. The fundamental test process comprises planning, specification, execution, recording, and checking for completion.
A document describing how a system is usually tested.
A group of test cases e.g. all the test cases for system testing.
A set of activities intended to evaluate software and other deliverables to determine if they meet requirements, to demonstrate that they are fit for purpose, and to find defects.
An integration test strategy, in which the team starts to integrate components at the top level of the system architecture.
A table showing the relationship between two or more baselined documents, such as requirements and test cases, or test cases and defect reports. Used to assess what impact a change will have across the documentation and software, for example, which test cases will need to be run when given requirements change.
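Reduced to its essence, such a matrix maps requirements to the test cases that cover them, which makes the impact query direct; the identifiers below are made up:

```python
# Traceability matrix: requirement -> covering test cases.
matrix = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-102", "TC-201"],
}

# Impact analysis: if REQ-2 changes, which tests must be re-run?
print(sorted(matrix["REQ-2"]))  # ['TC-102', 'TC-201']
```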
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A technical review is also known as a peer review.
An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, quality attribute, or structural element.
The planning, estimating, monitoring, and control of test activities, typically carried out by a test manager.
Test Maturity Model (TMM):
A five-level staged framework for test process improvement, related to the Capability Maturity Model (CMM), that describes the key elements of an effective test process.
Test Process Improvement (TPI):
A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.
A reason or purpose for designing and executing a test.
A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but should not be the code.
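As a sketch, a trusted reference implementation can serve as the oracle; here Python's built-in sorted supplies expected results for a hypothetical hand-written sort:

```python
# System under test: a naive bubble sort (illustrative only).
def my_sort(xs):
    out = list(xs)
    for i in range(len(out)):
        for j in range(len(out) - 1 - i):
            if out[j] > out[j + 1]:
                out[j], out[j + 1] = out[j + 1], out[j]
    return out

# The oracle (sorted) determines the expected result for each input;
# note the oracle is independent of the code under test.
for case in ([3, 1, 2], [], [5, 5, 1]):
    assert my_sort(case) == sorted(case)
print("oracle agrees")
```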
A distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities of a test level.
The activity of establishing or updating a test plan.
Test procedure specification:
A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.
An attribute of a test that indicates whether the same results are produced each time the test is executed.
Execution of a test on a specific version of the test object.
Test summary report:
A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.
The capability of the software product to enable modified software to be tested.
A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an input document for the test process.
The degree to which a requirement is stated in terms that permit the establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met.
A testbed is a set of tools, programs, and interfaces needed for the testing of a specific component or system.
Test case design technique:
A test case design technique is an approach to designing test cases for a particular objective.
The percentage or ratio of all possible code paths through system code that have been exercised by one or more test cases.
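The idea can be illustrated on a tiny made-up function: two independent branches give four possible paths, and four test cases exercise them all:

```python
# Hypothetical function with two branches -> four code paths.
def classify(age, member):
    price = 10
    if age < 18:        # branch 1
        price -= 5
    if member:          # branch 2
        price -= 2
    return price

# One test case per path: 100% path coverage for this function.
assert classify(10, True) == 3    # both branches taken
assert classify(10, False) == 5   # branch 1 only
assert classify(30, True) == 8    # branch 2 only
assert classify(30, False) == 10  # neither branch
```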
A management activity that approximates how long a testing task will take to complete and how much it will cost.
The result of a test is either positive or negative. A positive result means that the expectation described in the test case was met; a negative result means that it was not met. Test cases whose results are determined to be inconclusive or not applicable are documented as such.
A test scenario is a document that describes the pre-conditions for executing a test case, as well as the expected results.
A test script is a step-by-step document that describes what actions are to be taken and what results should be verified when performing a test or series of tests. Test scripts typically include specific inputs, execution conditions, expected results, and acceptance criteria.
A document that provides detailed information regarding how to execute one or more test cases for a given product under consideration for testing. Test specification documents typically include information on the scope, environment and preparation requirements, pre-requisites, and steps to follow for each test case.
A type of requirements document in which the requirements are written in the form of sequences that describe how various actors in the system interact with the system.
Code that cannot be reached and therefore is impossible to execute.
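A minimal illustration: the final return in this made-up function can never execute, because every path through the if/else returns first:

```python
def sign(n):
    if n >= 0:
        return "non-negative"
    else:
        return "negative"
    return "unreachable"   # dead code: no path reaches this line

print(sign(-1))  # negative
```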
UML (Unified Modeling Language):
A language used to define and design object-oriented applications; UML is organized around a set of notations, or diagrams, for visualizing and documenting the artifacts produced throughout the software development process.
A method for testing individual software units, or modules. A series of tests are created to verify each module’s functionality and to determine whether the code meets specific quality standards such as high cohesion and low coupling.
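A minimal unit test using Python's standard unittest module, exercising a single function in isolation (the function and names are illustrative):

```python
import unittest

# Unit under test: one small, isolated function.
def word_count(text):
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("unit tests verify modules"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Run the suite programmatically so the script stays self-contained.
suite = unittest.TestLoader().loadTestsFromTestCase(WordCountTest)
unittest.TextTestRunner(verbosity=0).run(suite)
```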
Usability testing refers to any type of software testing that determines whether or not the users of a website, application, etc. can do what they want to accomplish quickly and with a minimum amount of effort.
User acceptance testing (UAT):
A phase of testing performed by the end-users of a product to determine whether or not they accept the product’s performance based on what was agreed upon during project planning.
A user story is a description, written from the perspective of an end-user, of one or more features that will be included in a software product. User stories can vary from one to several sentences and are often created during the requirements analysis phase of the SDLC (software development process life cycle). They may also include one or more acceptance criteria.
A sequential software development lifecycle model that describes requirements management, development, and testing on a number of different levels.
Tests are designed to demonstrate that the developers have built the correct system. In contrast with verification, which means testing that the system has been built correctly.
Tests are designed to demonstrate that the developers have built the system correctly. In contrast with validation, which means testing that the correct system has been built.
Various methods for uniquely identifying documents and source files, e.g. with a unique version number. Each time the object changes, it should receive a new version number.
Testing where the system is subjected to large volumes of data.
A computer program that allows a user to interact with an application or system without the corresponding hardware being present. For example, VMware and Microsoft Virtual PC allow users to run multiple operating systems on a single machine as if they were several separate computers.
A static analyzer that is used to detect particular security vulnerabilities in the code.
Testing that uses image recognition to interact with GUI objects.
A simulation of activities performed according to a user operational profile.
A sequential development approach consisting of a series of phases carried out one after another.
A type of testing in which the tester has knowledge of the internal structure of the test object. White box testers may familiarize themselves with the system by reading the program code, studying the database model, or going through the technical specifications. Contrast with black-box testing.
A face-to-face review meeting in which requirements, designs, or code are presented to project team members for planning, or verifying understanding. The meetings can be held periodically (e.g., every two weeks) during development and testing activities.
A workaround is an alternative approach that a programmer may create to bypass a defect or limitation in a system. Workarounds can be implemented as a temporary solution or may become part of the final product.
Web Content Accessibility Guidelines (WCAG):
A part of a series of web accessibility guidelines published by the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C), the main international standards organization for the internet. They consist of a set of guidelines for making content accessible, primarily for people with disabilities.