
Glossary

Snap into a glossary!!

A

Acceptance Criteria:

The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.

Actual Result:

The behavior produced or observed when a component or system is tested. Part of the test case.

Acceptance Testing:

Testing to verify that a delivered product or service meets the acceptance criteria or satisfies the users of an organization. The focus is on “customer” requirements, which may be defined by business analysts, managers, users, customers, and others.

Accessibility Testing:

Verifying that a product works for all audiences and excludes no one due to disability or hardware/software limitations.

Agile development:

An iterative method of software development that emphasizes evolutionary, feedback-driven programming with rapid and continuous testing. This approach aims to make minor design improvements based on customer feedback as soon as possible so that major changes are made before the code has become overly complicated to change.

Alpha testing:

An initial test phase that is limited in scope, time, and/or number of participants and focuses primarily on internal functionality. It is usually conducted by developers or other members of the development team; users outside this group are typically involved in beta testing instead.

Ambiguous requirement:

A requirement that has more than one possible interpretation. You have to figure out which interpretation was intended, either by consulting the original author or stakeholder or by testing the feature.

API (application program interface):

A defined interface that outlines how an application program can interact with and request services from libraries or operating systems. It specifies the methods and protocols for communication, enabling seamless integration between different software components.

Assertions:

Checkpoints in a program that test certain conditions and are expected to be true or false as per the requirements.
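
To make this concrete, here is a minimal Python sketch of assertions used as checkpoints; the discount function and its limits are hypothetical and stand in for whatever conditions a real program needs to check.

```python
def apply_discount(price: float, discount_pct: float) -> float:
    # Assertion as a checkpoint: this condition must hold on entry.
    assert 0 <= discount_pct <= 100, "discount_pct must be between 0 and 100"
    discounted = price * (1 - discount_pct / 100)
    # Assertion as a checkpoint: this condition must hold before returning.
    assert discounted <= price, "discounted price must not exceed the original"
    return discounted

print(apply_discount(200.0, 25))  # 150.0
```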

Assumption:

A belief or condition upon which an argument, plan, or action is based.

Audit:

An independent review conducted to assess compliance with requirements, specifications, standards, procedures, codes, contracts, and licensing requirements, among others.

Automated testing:

Any tests that are performed by software tools or script-based programs without manual intervention, as opposed to tests performed by human testers (manual testing).

Audit trail:

A chronological account of all the tests, changes, and bug fixes that have been applied to a program. This is useful in tracking backward through source code files or other versions so that the entire evolution of a program can be reconstructed.

Ad hoc testing:

Informal testing performed without test analysis and test design.

Authentication:

A procedure determining whether a person or a process is, in fact, who or what it is declared to be.

Authorization:

Permission given to a user or process to access resources.

AVAILABILITY:

The degree to which a component or system is operational and accessible when required for use. It is commonly expressed as a percentage.

ACCURACY:

The ability of the software product to deliver the intended or agreed-upon outcomes or effects with the required degree of precision.

AGILE TESTING:

Agile Testing integrates testing throughout development, emphasizing continuous feedback and collaboration. It employs test-driven development and automation, and it adapts to changing requirements. The approach catches defects early, maintains code quality, and improves efficiency. Agile Testing delivers high-quality software through rapid iterations and effective communication.

B

Big-bang integration:

An integration testing strategy in which every component of a system is assembled and tested together; contrasts with other integration testing strategies in which system components are integrated one at a time.

Black box testing:

Testing, either functional or non-functional, without reference to the internal structure of the component or system.

Bottom-up integration:

An integration testing strategy in which components are integrated one by one, starting from the lowest level of the system architecture. Compare with big-bang integration and top-down integration.

Boundary value:

An input value or output value that is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example, the minimum or maximum value of a range.

Boundary value analysis:

A black box test design technique that tests input or output values that are on the edge of what is allowed or at the smallest incremental distance on either side of an edge. For example, an input field that accepts text between 1 and 10 characters has six boundary values: 0, 1, 2, 9, 10, and 11 characters.
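
As an illustration of the 1–10 character example above, here is a small Python sketch; the validator `is_valid_username` is hypothetical and stands in for whatever component enforces the length rule.

```python
def is_valid_username(text: str) -> bool:
    """Hypothetical rule under test: 1 to 10 characters are allowed."""
    return 1 <= len(text) <= 10

# Boundary value analysis: test just below, on, and just above each edge.
cases = {0: False, 1: True, 2: True, 9: True, 10: True, 11: False}
for length, expected in cases.items():
    actual = is_valid_username("x" * length)
    assert actual == expected, f"length {length}: expected {expected}, got {actual}"
print("all six boundary values behave as expected")
```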

BS 7925-1:

A testing standards document containing a glossary of testing terms. BS stands for ‘British Standard’.

BS 7925-2:

A testing standards document that describes the testing process, primarily focusing on component testing. BS stands for ‘British Standard’.

Bug:

A slang term for fault, defect, or error. A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do or do something it shouldn’t, causing a failure. Defects in software, systems, or documents may result in failures, but not all defects do so.

Baseline:

A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process.

Basis test set:

A set of test cases derived from the internal structure or specification to ensure that 100% of a specified coverage criterion is achieved.

Behavior:

The response of a component or system to a set of input values and preconditions.

Best practice:

A superior method or innovative practice that contributes to the improved performance of an organization or a system in a given context, usually recognized as ‘best’ by other peer organizations.

Beta testing:

Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in order to acquire feedback from the market.

Blocked test case:

A test case that cannot be executed because the preconditions for its execution are not fulfilled.

Branch coverage:

The percentage of branches that have been exercised by a test case. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
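
A small Python sketch may help illustrate the idea; the `classify` function is invented purely to show how a second test is needed to reach 100% branch coverage.

```python
def classify(n: int) -> str:
    if n < 0:            # this decision creates two branches
        return "negative"
    return "non-negative"

# Testing only n = -1 exercises the True branch; adding n = 1 also exercises
# the False branch, which is what 100% branch coverage requires here.
assert classify(-1) == "negative"
assert classify(1) == "non-negative"
```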

Branch testing:

A white box test design technique in which test cases are designed to execute branches.

Blocker:

Any bug that prevents a program or its parts from working, either partially or entirely.

Bug tracking system:

A computer program in which defects or other issues in a program are identified and recorded. Also called issue tracking systems, defect tracking systems.

Build:

A collection of software modules, program files, and documentation derived from the source code developed by a specific development team to test or verify it at some point during its life cycle. In addition to compiled binary code, this could also include other deliverables such as white papers, design documents, test plans, or release notes.

Bug Life Cycle:

A defect goes through several stages. It starts when it is first identified and ends when it has been resolved.

Bug scrubbing:

A software development technique that involves checking collected bugs for duplicates, or non-valid bugs, and resolving these before any new ones are entered.

Bug Triage Meeting:

An event during software development led by a QA manager or test lead, used to prioritize new defects found in production or testing environments, re-prioritize existing defects, and perform bug scrubbing. The purpose of the bug triage meeting is to order defects by importance so that the most severe bugs can be fixed first.

Build verification test:

A software test that is designed to ensure that a properly built program will execute without unexpected results, and may be used as a final check before the program is distributed.

behavior-driven development (BDD):

A collaborative approach to development in which the team focuses on delivering the expected behavior of a component or system for the customer, which forms the basis for testing.

BACKLOG:

Work waiting to be done; for IT this includes new applications to be developed and/or enhancements to existing applications. To be included in the development backlog, the work must have been cost-justified and approved for development.

BURN DOWN CHART:

A burndown chart is a graphical representation of work left to do in an epic or sprint in a given time. It is often used in agile software development methodologies such as Scrum.

BUSINESS PROCESS TESTING:

A testing approach based on descriptions or knowledge of business processes to design test cases.

BUILD VERIFICATION TEST (BVT) OR BUILD ACCEPTANCE TEST (BAT):

Tests performed on each new build to verify its testability and mainstream functionality before it is handed to the testing team.

BUG BASH:

In-house testing involving various individuals from different roles to identify bugs before the software release.

BESPOKE SOFTWARE:

Software developed specifically for a set of users or customers, as opposed to off-the-shelf software.

BENCHMARK TEST:

1. A reference standard used for measurements or comparisons.
2. A test that compares components or systems to each other or a predefined standard.

BEHAVIORAL TESTING:

A testing approach where tests are defined in terms of externally observable inputs, outputs, and events. The design of these tests can utilize various sources of information.

C

Checklist:

A simpler form of the test case, often merely a document with short test instructions (“one-liners”). An advantage of checklists is that they are easy to develop. A disadvantage is that they are less structured than test cases. Checklists can complement test cases well. In exploratory testing, checklists are often used instead of test cases.

Code coverage:

A generic term for analysis methods that measure the proportion of code in a system that is executed by testing. Expressed as a percentage, for example, 90% code coverage.

Component testing:

Test level that evaluates the smallest elements of the system. Also known as a unit test, program test, and module test.

Compatibility testing:

The process of ensuring that software will work with other systems or a specific platform. Compatibility testing is usually conducted either manually by performing tests on the software using different computer platforms, or automatically by simulating and running tests in various environments.

Completion Criteria:

The criteria that must be met in order to complete a particular phase, activity, or type of testing. Completion criteria are usually documented in some way.

Component integration testing:

Testing that determines whether all components interact correctly when used together during normal operations. This type of testing is usually performed after all components have been successfully coded and tested individually, but before system testing.

Context-driven testing:

A testing method that relies on domain knowledge and heuristics to derive test cases.

Complexity:

The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.

Consistency:

The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system.

cross-browser compatibility:

The degree to which a website or web application can function across different browsers and their versions and degrade gracefully when browser features are absent or lacking.

continuous-integration:

An automated software development procedure that merges, integrates, and tests all changes as soon as they are committed.

continuous testing:

An approach that involves a process of testing early, testing often, testing everywhere, and automating to obtain feedback on the business risks associated with a software release candidate as rapidly as possible.

control flow:

The sequence in which operations are performed by a business process, component, or system.

context of use:

Users, tasks, equipment (hardware, software, and materials), and the physical and social environments in which a software product is used.

confidentiality:

The degree to which a component or system ensures that data are accessible only to those authorized to have access.

computer forensics:

The practice of determining how a security attack has succeeded and assessing the damage caused.

condition testing:

A white-box test technique in which test cases are designed to exercise outcomes of atomic conditions.

compliance testing:

Testing to determine the compliance of the component or system with applicable standards, conventions, or regulations.

compliance:

Adherence of a work product to standards, conventions, or regulations in laws and similar prescriptions.

change-related testing:

A type of testing initiated by a modification to a component or system.

capture/playback:

A test automation approach in which inputs to the test object are recorded during manual testing to generate automated test scripts that can be executed later.

Capacity Testing:

Testing to ensure that an application can handle the amount of load it was designed to handle.

CONVERSION TESTING:

It validates the effectiveness of data conversion processes, including field-to-field mapping and data translation.

CHECK-IN:

It is a process of uploading the information (e.g. source code) from the local machine to the central repository.

CHECK-OUT:

It is a process of retrieving the information (e.g. source code) from the central repository to the local machine.

CROSS-SITE SCRIPTING:

A computer security exploit where information from an untrusted context is inserted into a trusted context to launch an attack.

CRITICAL PATH:

A series of dependent tasks in a project that must be completed as planned to keep the entire project on schedule.

CONFIGURATION MANAGEMENT:

A discipline that applies technical and administrative control to identify, document, and manage changes to the characteristics of a configuration item.

CONCURRENCY TESTING:

Testing conducted to assess how a component or system handles the occurrence of two or more activities within the same time interval, either through interleaving or simultaneous execution.

COMMERCIAL OFF-THE-SHELF SOFTWARE (COTS):

Software products developed for the general market and delivered in an identical format to multiple customers.

COMMAND LINE INTERFACE (CLI):

A user interface where commands are entered via keyboard input, and the system provides output on the monitor.

CODE FREEZE:

The point in the development process where no changes are permitted to the source code of a program.

CODE COMPLETE:

The stage in which a developer considers all the necessary code for implementing a feature to be checked into source control.

D

Defect:

A flaw in a component or system that can cause the component or system to fail to perform its required function. A defect, if encountered during execution, may cause a failure of the component or system.

Defect report:

A document that is used to report a defect in a component, system, or document. Also known as an incident report.

Deliverable:

Any product that must be delivered to someone other than the author of the product. Examples of deliverables are documentation, code, and the system.

Dependency:

Any reference within one product to another for its proper execution and/or successful completion. In software applications, it usually refers to a requirement upon another module or program that must be satisfied before the given module or program can function correctly.

Document review:

A test review technique that provides for the systematic examination of a document against its requirements or some other objective standard. Each requirement is reviewed by one or more reviewers who consider it from two perspectives:
– Did the author correctly understand and apply the requirements?
– Was the document written in accordance with procedures, standards, style guides, etc.?

Domain Expert:

An individual who is knowledgeable and experienced in a particular application area. Such individuals may provide information on the specific requirements for a given component or system; they may also be asked to participate in the testing process, either by serving as product testers themselves or by providing written feedback on test design techniques and results.

decision table testing:

A black-box test technique in which test cases are designed to exercise the combinations of conditions and the resulting actions shown in a decision table.
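
A minimal Python sketch of the idea follows; the loan pre-check rules are hypothetical and exist only to show one test case per decision-table rule.

```python
# Decision table: conditions (employed?, credit_ok?) mapped to the expected action.
DECISION_TABLE = {
    (True,  True):  "approve",
    (True,  False): "refer",
    (False, True):  "refer",
    (False, False): "reject",
}

def loan_precheck(employed: bool, credit_ok: bool) -> str:
    """Hypothetical logic under test."""
    if employed and credit_ok:
        return "approve"
    if employed or credit_ok:
        return "refer"
    return "reject"

# One test case per rule (column) of the decision table.
for (employed, credit_ok), expected in DECISION_TABLE.items():
    assert loan_precheck(employed, credit_ok) == expected
print("every decision table rule is exercised")
```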

defect density:

The number of defects per unit size of a work product.

defect management:

The process of recognizing, recording, classifying, investigating, resolving, and disposing of defects.

defect taxonomy:

A list of categories designed to identify and classify defects.

denial of service (DoS):

A security attack that is intended to overload the system with requests such that legitimate requests cannot be served.

dynamic testing:

Testing that involves the execution of the test item.

driver:

A temporary component or tool that replaces another component and controls or calls a test item in isolation.

data-driven testing:

A scripting technique that uses data files to contain the test data and expected results needed to execute the test scripts.
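
The sketch below shows the idea in Python; in practice the rows would live in an external CSV file or spreadsheet, but an in-memory file keeps the example self-contained, and the `add` function is a hypothetical test object.

```python
import csv
import io

# Test data: inputs and expected results, kept separate from the test logic.
TEST_DATA = io.StringIO(
    "a,b,expected\n"
    "1,2,3\n"
    "-1,1,0\n"
    "10,5,15\n"
)

def add(a: int, b: int) -> int:
    """Hypothetical function under test."""
    return a + b

# The same scripted steps are driven by every row of the data file.
for row in csv.DictReader(TEST_DATA):
    result = add(int(row["a"]), int(row["b"]))
    assert result == int(row["expected"]), row
print("all data-driven cases passed")
```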

data privacy:

The protection of personally identifiable information or otherwise sensitive information from undesired disclosure.

DEBUGGING:

The process of analyzing and correcting syntactic, logical, and other errors identified during testing.

DATA DICTIONARY:

It provides the capability to create test data for validating the defined data elements. The test data generated is based on the attributes defined for each data element and covers both normal values and abnormal or error conditions for each data element.

DEFECT REMOVAL EFFICIENCY (DRE):

The ratio of defects found during development to the total defects, including those discovered in the field after release.

DEFECT REJECTION RATIO (DRR):

The ratio of rejected defect reports, which may be due to them not being actual bugs, to the total number of defects.

DEFECT PREVENTION:

Activities involved in identifying and preventing the introduction of defects into a product.

DEFECT MASKING:

Occurs when one defect prevents the detection of another defect.

DEFECT LEAKAGE RATIO (DLR):

The ratio of undetected defects that make their way into production to the total number of defects.
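
The three ratios defined above (DRE, DRR, and DLR) translate directly into simple formulas; the Python sketch below uses made-up counts purely for illustration.

```python
def dre(found_in_development: int, found_in_field: int) -> float:
    """Defect removal efficiency: development defects / total defects."""
    total = found_in_development + found_in_field
    return found_in_development / total if total else 0.0

def drr(rejected_reports: int, total_reports: int) -> float:
    """Defect rejection ratio: rejected defect reports / total reports."""
    return rejected_reports / total_reports if total_reports else 0.0

def dlr(leaked_to_production: int, total_defects: int) -> float:
    """Defect leakage ratio: defects that escaped to production / total defects."""
    return leaked_to_production / total_defects if total_defects else 0.0

# Illustrative numbers only.
print(f"DRE: {dre(90, 10):.0%}")   # 90%
print(f"DRR: {drr(5, 100):.0%}")   # 5%
print(f"DLR: {dlr(10, 100):.0%}")  # 10%
```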

DECISION COVERAGE:

The percentage of decision outcomes exercised by a test suite. Achieving 100% decision coverage implies complete branch coverage and statement coverage.

E

End-to-end testing:

Testing used to verify that an application behaves as expected from start to finish. This technique can be used to identify system dependencies and confirm the integrity of data transfer across different system components.

Entry criteria:

Criteria that must be met before you can initiate testing, such as that the test cases and test plans are complete.

Equivalence partitioning:

A black box test design technique that is based on the fact that data in a system is managed in classes, such as intervals. Because of this, you only need to test a single value in every equivalence class. For example, you can assume that a calculator performs all addition operations in the same way; so if you test one addition operation, you have tested the entire equivalence class.
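
A small Python sketch of the idea; the ticket-pricing rule and its three classes (child, adult, senior) are hypothetical.

```python
def ticket_price(age: int) -> float:
    """Hypothetical rule with three equivalence classes: 0-12, 13-64, 65+."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age <= 12:
        return 5.0
    if age <= 64:
        return 10.0
    return 7.0

# One representative value per class is assumed to behave like every
# other value in that class, so three tests cover the three partitions.
for representative, expected in {8: 5.0, 30: 10.0, 70: 7.0}.items():
    assert ticket_price(representative) == expected
print("one test per equivalence class passed")
```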

Error:

A human action that produces an incorrect result.

Error guessing:

Experience-based test design technique where the tester develops test cases based on his/her skill and intuition, and experience with similar systems and technologies.

Exhaustive testing:

A test approach in which you test all possible inputs and outputs.

Exit criteria:

Criteria that must be fulfilled for testing to be considered complete, such as that all high-priority test cases are executed, and that no open high-priority defect remains. Also known as completion criteria.

Expected result:

A description of the test object’s expected status or behaviour after the test steps are completed. Part of the test case.

Exploratory testing:

An experience-based test design technique that is based on the tester’s experience; the tester creates the tests while he/she gets to know the system and executes the tests.

Efficiency:

The capability of the software product to provide appropriate performance, relative to the number of resources used under stated conditions.

Edge Case:

A test case that is designed to exercise code handling the exceptional or “fringe” conditions of a system.

Emulator:

A hardware or software system that duplicates the functionality of another system.

epic:

A large user story that cannot be delivered as defined within a single iteration or is large enough that it can be split into smaller user stories.

endurance testing:

Testing to determine the stability of a system under a significant load over a significant period of time within the system’s operational context. Also known as soak testing.

experience-based testing:

Testing based on the tester’s experience, knowledge, and intuition.

ethical hacker:

A security tester using hacker techniques.

encryption:

The process of encoding information so that only authorized parties can retrieve the original information, usually by means of a specific decryption key or process.

EXTREME PROGRAMMING (XP):

An agile software development methodology emphasizing collaboration, customer involvement, and iterative development.

ERROR SEEDING:

Intentionally adding known defects to those already present in a component or system in order to monitor detection and removal rates and to estimate the number of remaining defects.

end user:

The individual or group who will use the system for its intended operational use in its deployed environment.

EATING YOUR OWN DOG FOOD:

The practice of a company using pre-release versions of its own software for day-to-day operations, ensuring reliability before selling. It promotes early feedback on product value and usability.

F

Failure:

Deviation of the component or system under test from its expected delivery, service, or result.

Formal review:

A review that proceeds according to a documented review process that may include, for example, review meetings, formal roles, required preparation steps, and goals. Inspection is an example of a formal review.

Functional testing:

Testing of the system’s functionality and behaviour; the opposite of non-functional testing.

Functional requirement:

A requirement that specifies a function that a component or system must perform.

Failover:

Testing an application’s ability to allocate additional resources, such as extra CPUs or servers, and switch to a backup system in the event of a failure.

False-negative:

A report that a program condition is not present when it actually exists. False-negative errors are sometimes called ‘missed errors’.

False-positive:

A report that a program condition is present when it actually does not exist. This type of error may result from an incorrect configuration or testing environment, or from a false assumption made by the program itself in interpreting its results. Error messages about incorrectly formatted input data, for example, may result in a false positive if the program contains incorrect assumptions about the format and nature of its inputs.

Fault:

A discrepancy (incorrectness or incompleteness) between a planned or expected condition and an actual occurrence, such as failure of equipment, resources, or people to perform as intended.

Feature:

A distinct capability of a software product that provides value in the context of a given business process.

Flexibility:

The ability of a software product to adapt to potential changes in business requirements.

fault-tolerance:

The degree to which a component or system operates as intended despite the presence of hardware or software faults.

FUZZ TESTING:

A software testing technique that inputs random data (“fuzz”) into a program to detect failures, such as crashes or violated code assertions.
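
A minimal Python sketch of the technique; `parse_age` is a hypothetical input parser, and the random strings stand in for the “fuzz” fed to a real program.

```python
import random
import string

def parse_age(text: str) -> int:
    """Hypothetical parser under test; rejects bad input with ValueError."""
    value = int(text)
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

random.seed(0)  # reproducible fuzz run
unexpected_failures = 0
for _ in range(1000):
    fuzz = "".join(random.choices(string.printable, k=random.randint(0, 8)))
    try:
        parse_age(fuzz)
    except ValueError:
        pass  # a clean, expected rejection of invalid input
    except Exception:
        unexpected_failures += 1  # a crash worth investigating
print(f"unexpected failures: {unexpected_failures}")
```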

falsification:

The process of evaluating an object to demonstrate that it fails to meet requirements.

FAULT INJECTION:

Intentionally introducing errors into code to evaluate the ability to detect such errors.

G

Gray-box testing:

Testing that uses a combination of white-box and black-box techniques to test a system whose internal code the tester has only limited knowledge of.

Gherkin:

A specification language for describing the expected behavior of the software application.

Glass box testing:

Testing that examines program internal structures and processes to detect errors and determine when, where, why, and how they occurred. Also known as white-box testing.

GUI Testing:

Testing that verifies the functionality of a Graphical User Interface.

Graphical User Interface (GUI):

User interfaces that accept input via devices like a keyboard and mouse, providing graphical output on a monitor.

Gap Analysis:

An evaluation of the disparity between required or desired conditions and the current state of affairs.

H

High-level test case:

A test case without concrete (implementation level) values for input data and expected results.

Horizontal traceability:

The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification, and test procedure specification).

Human error:

A fault or failure resulting from an incorrect application of information, lack of appropriate knowledge, training, and skill on the part of personnel; misuse of equipment; improper installation, operation, and maintenance; carelessness, or negligence.

Hacker:

A person or organization who is actively involved in security attacks, usually with malicious intent.

HOTFIX:

The process of fixing a defect immediately and releasing the fix into the appropriate environment.

HAPPY PATH:

A default scenario without exceptional or error conditions, representing a well-defined test case that executes without exceptions and produces an expected output.

I

IEEE 829:

An international standard for test documentation published by the IEEE. The full name of the standard is IEEE Standard for Software Test Documentation. It includes templates for the test plan, various test reports, and handover documents.

Impact analysis:

Techniques that help assess the impact of a change. Used to determine the choice and extent of regression tests needed.

Informal review:

A review that isn’t based on a formal procedure.

Installation testing:

A type of test meant to assess whether the system meets the requirements for installation and uninstallation. This could include verifying that the correct files are copied to the machine and that a shortcut is created in the application menu.

Integration testing:

A test level meant to show that the system’s components work with one another. The goal is to find problems in interfaces and communication between components.

Iteration:

A development cycle consisting of a number of phases, from the formulation of requirements to the delivery of part of an IT system. Common phases are analysis, design, development, and testing. The practice of working in iterations is called iterative development.

Inspection:

A type of review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher-level documentation. The most formal review technique and therefore always based on a documented procedure.

Interoperability testing:

The process of testing to determine the interoperability of a software product.

Internationalization Testing:

It verifies the application’s ability to work correctly across multiple regions and cultures.

INCIDENT:

An operational event outside the norm of system operation, with potential impact on the system.

INCIDENT REPORT:

A document reporting an event occurring during testing that requires investigation.

Input masking:

When a program encounters an error condition on the first invalid variable, subsequent values are not tested.

K

Kanban:

A method for managing projects and workflow. Each project or task is represented as a card that is moved through columns, with the progress being tracked by an electronic board.

Kick-off meeting:

A meeting held at the start of a project to determine its goals and objectives for the participants. Sprints should also have one at the start of each sprint. All participants need to be present, as the meeting can be used to create a project schedule, receive updates from team members on progress, and serve as a status report for upper management.

Keyword-driven testing:

A scripting technique using data files containing test inputs, expected outcomes, and keywords related to the application being tested.
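
A compact Python sketch of the idea; the keywords (`open`, `type`, `click`) and the login steps are hypothetical and simply print what a real automation layer would execute.

```python
# Each keyword maps to a reusable action implemented once.
def open_page(url): print(f"open {url}")
def type_text(field, value): print(f"type '{value}' into {field}")
def click(button): print(f"click {button}")

KEYWORDS = {"open": open_page, "type": type_text, "click": click}

# The test case itself is plain data: rows of (keyword, arguments...).
TEST_STEPS = [
    ("open", "https://example.com/login"),
    ("type", "username", "alice"),
    ("type", "password", "secret"),
    ("click", "Sign in"),
]

for keyword, *args in TEST_STEPS:
    KEYWORDS[keyword](*args)
```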

L

Load testing:

A type of performance testing conducted to evaluate the behaviour of a component or system with increasing load, e.g. numbers of concurrent users and/or numbers of transactions. Used to determine what load can be handled by the component or system.

Localization Testing:

Testing performed on a localized version of a software application to verify that it meets the standards required for a specific target audience.

LATENT BUG:

A bug that exists in the system under test but has not yet been discovered.

M

Maintainability:

A measure of how easy a given piece of software code is to modify in order to correct defects, improve, or add functionality.

Memory leak:

A defect in a program’s dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.

Metric:

A measurement scale and the method used for measurement.

Milestone:

A point in time in a project at which defined (intermediate) deliverables and results should be ready.

Moderator:

The leader and the main person responsible for an inspection or other review process.

Monitor:

A software tool or hardware device that runs concurrently with the component or system under test and supervises, records, and/or analyses the behavior of the component or system.

Mutation analysis:

A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.

Manual testing:

Testing in which a human tester executes test cases manually, applying logical reasoning and inference to find bugs or issues in the software application.

malware scanning:

Static analysis aiming to detect and remove malicious code received at an interface.

Methodology:

A collection of methods, procedures, and standards that integrate engineering approaches to product development.

Monkey testing:

Testing by randomly inputting strings or pushing buttons to identify product breakages or vulnerabilities.

N

Naming standard:

The standard for creating names for variables, functions, and other parts of a program. For example, strName, sName, and Name are all technically valid names for a variable, but if you don’t adhere to one structure as the standard, maintenance will be very difficult.

Negative testing:

A type of testing intended to show that the system works well even if it is not used correctly. For example, if a user enters text in a numeric field, the system should not crash.

Non-functional testing:

Testing of non-functional aspects of the system, such as usability, reliability, maintainability, and performance.

New feature:

An enhancement to a software product that has not previously been implemented.

O

Open-source:

A form of licensing in which software is offered free of charge and its source code is made available for use and modification.

Outcome:

The result after a test case has been executed.

Operability:

The capability of the software product to enable the user to operate and control it.

Operational environment:

Hardware and software products installed at users’ or customers’ sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.

Output:

A variable (whether stored within a component or outside) that is written by a component.

Output value:

An instance of an output.

P

Pair testing:

Test approach where two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, work together to find defects. Typically, they share one computer and trade control of it while testing. One tester can act as an observer when the other performs tests.

Performance testing:

A test to evaluate whether the system meets performance requirements such as response time or transaction frequency.

Positive testing:

A test aimed to show that the test object works correctly in normal situations. For example, a test to show that the process of registering a new customer functions correctly when using valid test data.

Postconditions:

Environmental and state conditions that must be fulfilled after a test case or test run has been executed.

Preconditions:

Environmental and state conditions that must be fulfilled before the component or system can be tested. Also known as prerequisites.

Pass/fail criteria:

Decision rules used to determine whether a test item (function) or feature has passed or failed a test.

Portability testing:

The process of testing to determine the portability of a software product.

Priority:

The level of (business) importance assigned to an item, e.g. defect.

Project test plan:

A test plan that typically addresses multiple test types, test levels, resources, and action plans.

PO (Product Owner):

A role on the scrum team that owns the product and is responsible for managing its development.

Parallel testing:

Parallel testing involves running tests at the same time, usually in different environments or on different computers. This allows defects to be identified faster and gives you a higher chance of finding them before release.

Peer review:

A method of quality assurance in which developers review and test the work of other developers.

POC:

POC stands for Proof of Concept. This is a quick prototype to determine the feasibility or suitability of the product. A POC allows you to test out your idea, concept, solution, or code quickly and in an inexpensive way before making any major changes or investments in it.

post-release testing:

A type of testing to ensure that the release is performed correctly and the application can be deployed.

penetration testing:

A testing technique aiming to exploit security vulnerabilities (known or unknown) to gain unauthorized access.

PSEUDO RANDOM:

A series of values that appear random but are actually generated according to a predetermined sequence.

PROTOTYPE:

An incomplete implementation of software that mimics the expected behavior of the final product.

PESTICIDE PARADOX:

The phenomenon where the more software is tested, the more immune it becomes to those tests, similar to how insects develop resistance to pesticides.

PERFORMANCE EVALUATION:

Assessing a system or component to determine how effectively it achieves operational objectives.

Q

Quality:

The degree to which a component, system, or process meets specified requirements and/or user/customer needs and expectations.

Quality assurance (QA):

Systematic monitoring and evaluation of various aspects of a component or system to maximize the probability that minimum standards of quality are being attained.

QUALITY CONTROL (QC):

The process by which product quality is compared with applicable standards and the action is taken when nonconformance is detected.

QUALITY FACTOR:

A management-oriented attribute of software that contributes to its overall quality.

QUALITY GATE:

A milestone in a project where a specific level of quality must be achieved before proceeding.

R

Regression testing:

A test activity generally conducted in conjunction with each new release of the system, in order to detect defects that were introduced (or discovered) when prior defects were fixed.

Release:

A new version of the system under test. The release can be either an internal release from developers to testers or a release of the system to the client.

Release management:

A set of activities geared to create new versions of the complete system. Each release is identified by a distinct version number.

Release testing:

A type of non-exhaustive test performed when the system is installed in a new target environment, using a small set of test cases to validate critical functions without going into depth on any one of them. Also called smoke testing.

Requirements specification:

A document defining what a product or service is expected to do, including functional and non-functional requirements, specifications, and acceptance criteria.

Re-testing:

A test to verify that a previously reported defect has been corrected. Also known as confirmation testing.

Retrospective meeting:

A meeting at the end of a project/sprint during which the team members evaluate the work and learn lessons that can be applied to the next project or sprint.

Review:

A static test technique in which the reviewer reads a text in a structured way in order to find defects and suggest improvements. Reviews may cover requirements documents, test documents, code, and other materials, and can range from informal to formal.

Risk:

A factor that could result in future negative consequences. It’s usually expressed in terms of impact and likelihood.

Risk-based testing:

A structured approach in which test cases are chosen based on risks. Test design techniques like boundary value analysis and equivalence partitioning are risk-based.

Release note:

A document identifying test items, their configuration, current status, and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase.

Reliability:

The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for specified environmental conditions.

Reliability testing:

The process of testing to determine the reliability of a software product.

Root cause:

An underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.

Recovery testing:

An activity that attempts to uncover weaknesses in a system’s fail-safe mechanisms, which, if activated, should allow the program or system to continue operating without loss of data or functionality.

Reusability:

The capability that a system or module can be used as part of another system.

ramp-down:

A technique for decreasing the load on a system in a measurable and controlled way.

ramp-up:

A technique for increasing the load on a system in a measurable and controlled way.

REPOSITORY:

A server, accessible only to authorized people, used to store and retrieve information (e.g., source code).

ROBUSTNESS:

The degree to which a system or component can function correctly in the presence of invalid inputs or challenging environmental conditions.

RAINY-DAY TESTING:

Checking whether a system effectively prevents, detects, and recovers from operational problems such as network failures, database unavailability, equipment issues, and operator errors.

Release candidate:

A product build undergoing final testing before shipment, where all code is complete and known bugs have been fixed.

REPETITION TESTING (DURATION TESTING):

A technique that involves repeating a function or scenario until reaching a specified limit or threshold, or until an undesirable action occurs.

S

Sandwich integration:

An integration testing strategy in which the system is integrated both top-down and bottom-up simultaneously. Can save time, but is complex.

Scalability testing:

A component of non-functional testing, used to measure the capability of software to scale up or down in terms of its non-functional characteristics.

Scenario:

A sequence of activities performed in a system, such as logging in, signing up a customer, ordering products, and printing an invoice. You can combine test cases to form a scenario, especially at higher test levels.

Scrum:

An iterative, incremental framework for project management commonly used with agile software development.

Severity:

The degree of impact that a defect has on the development or operation of a component or system.

State transition testing:

A black box test design technique in which a system is viewed as a series of states, valid and invalid transitions between those states, and inputs and events that cause changes in state.
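
The Python sketch below shows this viewpoint on a hypothetical order workflow: states, valid transitions, and a check that an invalid transition is rejected.

```python
# Valid transitions of a hypothetical order: (state, event) -> next state.
TRANSITIONS = {
    ("created", "pay"): "paid",
    ("created", "cancel"): "cancelled",
    ("paid", "ship"): "shipped",
}

def apply_event(state: str, event: str) -> str:
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid transition: '{event}' in state '{state}'")
    return TRANSITIONS[(state, event)]

# Valid transitions should succeed...
assert apply_event("created", "pay") == "paid"
assert apply_event("paid", "ship") == "shipped"

# ...and an invalid transition (shipping an unpaid order) should be rejected.
try:
    apply_event("created", "ship")
except ValueError:
    print("invalid transition correctly rejected")
```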

Static testing:

Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.

Stress testing:

Testing meant to assess how the system reacts to workloads (network, processing, data volume) that exceed the system’s specified requirements. Stress testing shows which system resource (e.g. memory or bandwidth) is first to fail.

System integration testing:

A test level designed to evaluate whether a system can be successfully integrated with other systems. May be included as part of system-level testing, or be conducted at its own test level in between system testing and acceptance testing.

System testing:

Test level aimed at testing the complete integrated system. Both functional and non-functional tests are conducted.

Scribe:

The person who records each defect mentioned and any suggestions for improvement during a review meeting on a logging form. The scribe has to ensure that the logging form is readable and understandable.

Security testing:

Testing to determine the security of the software product.

Simulator:

A device, computer program, or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.

Smoke test:

A subset of all defined/planned test cases that covers the main functionality of a component or system, used to ascertain that the most crucial functions of a program work without going into finer details. A daily build and smoke test are among industry best practices.

Sanity testing:

A testing process to ensure that, at the outset of software development, the product works. It’s often used by managers or customers when they believe there is little risk of introducing major defects into a program and it can reduce the time needed for later, more thorough testing.

Script:

A list of commands that can be used to control the execution of the program being tested.

Spike Testing:

Testing that determines whether a software application can handle sudden increases or decreases in load.

Showstopper:

A defect in a program that prevents it from operating at all. A showstopper is so serious that no further testing can be done until it is fixed.

SHIFT-LEFT TESTING:

Shift-left testing is a software testing approach in which testing is performed earlier in the lifecycle to find and prevent defects early in the software delivery process (i.e. moved left on the project timeline).

SHIFT-RIGHT TESTING:

Shift-right testing is a method of continuously testing a software application while it is in a post-production environment. Also known as “testing in production”.

SYSTEM UNDER TEST (SUT):

The system that is the target of the testing process. It refers to the specific software system being tested.

STUB:

A simplified or specialized implementation of a software component used during development or testing to replace a dependent component and facilitate testing.
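
A short Python sketch of a stub; the payment gateway and order flow are hypothetical, and the stub simply stands in for the real dependency so the component under test can run in isolation.

```python
class PaymentGateway:
    """The real dependency: unavailable or too slow/expensive in tests."""
    def charge(self, amount: float) -> bool:
        raise NotImplementedError("talks to an external service")

class PaymentGatewayStub:
    """Simplified stand-in that always reports a successful charge."""
    def charge(self, amount: float) -> bool:
        return True

def place_order(gateway, amount: float) -> str:
    """Component under test, which depends on a payment gateway."""
    return "confirmed" if gateway.charge(amount) else "declined"

# The order flow is exercised against the stub instead of the real gateway.
assert place_order(PaymentGatewayStub(), 42.0) == "confirmed"
print("order flow tested with a stub in place of the real gateway")
```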

Statement coverage:

The percentage of executable statements that have been executed by a test suite.

SQL INJECTION:

A hacking technique that attempts to pass SQL commands through a web application’s user interface for execution by the backend database.
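
A self-contained Python/SQLite sketch of the attack and its standard mitigation; the table, data, and input are all made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious_input = "nobody' OR '1'='1"

# Vulnerable: attacker-controlled input is concatenated into the SQL text,
# so the OR clause becomes part of the command and every row is returned.
vulnerable = f"SELECT * FROM users WHERE name = '{malicious_input}'"
print(conn.execute(vulnerable).fetchall())

# Mitigated: a parameterized query treats the input purely as data.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (malicious_input,)).fetchall())  # []
```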

SOAK TESTING:

Testing a system under a significant load over an extended period to observe its behavior under sustained use.

T

Test automation:

The process of writing programs that perform test steps and verify the result.

Test basis:

The documentation on which test cases are based.

Test case:

A structured test script that describes how a function or feature should be tested, including test steps, expected results, preconditions, and postconditions.

Test data:

Information that completes the test steps in a test case e.g. what values to input. In a test case where you add a customer to the system, the test data might be the customer’s name and address. Test data might exist in a separate test data file or in a database.

Test-driven development:

A development approach in which developers write test cases before writing any code. The goal is to achieve rapid feedback and follow an iterative approach to software development.
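
A minimal Python walk through the red-green-refactor cycle; the `slugify` function and its expected behavior are hypothetical.

```python
import re

# Step 1 (red): write a failing test that pins down the expected behavior.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
def slugify(text: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Step 3 (refactor): clean up while keeping the test green.
test_slugify()
print("TDD cycle complete: the test passes")
```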

Test environment:

The technical environment in which the tests are conducted, including hardware, software, and test tools. Documented in the test plan and/or test strategy.

Test execution:

The process of running test cases on the test object.

Test level:

A group of test activities organized and carried out together in order to meet stated goals. Examples of levels of testing are component, integration, system, and acceptance tests.

Test log:

A document that describes testing activities in chronological order.

Test object:

The part or aspects of the system to be tested. Might be a component, subsystem, or the system as a whole.

Test plan:

A document describing what should be tested by whom, when, how, and why. The test plan is bounded in time, describing system testing for a particular version of a system.

Test policy:

A document that describes how an organization runs its testing processes at a high level. It may contain a description of test levels according to the chosen life cycle model, roles and responsibilities, required/expected documents, etc.

Test process:

The complete set of testing activities, from planning through to completion. The test process is usually described in the test policy. The fundamental test process comprises planning, specification, execution, recording, and checking for completion.

Test strategy:

A document describing how a system is usually tested.

Test suite:

A group of test cases e.g. all the test cases for system testing.

Testing:

A set of activities intended to evaluate software and other deliverables to determine if they meet requirements, to demonstrate that they are fit for purpose, and to find defects.

Top-down integration:

An integration test strategy, in which the team starts to integrate components at the top level of the system architecture.

Traceability matrix:

A table showing the relationship between two or more baselined documents, such as requirements and test cases, or test cases and defect reports. Used to assess what impact a change will have across the documentation and software, for example, which test cases will need to be run when given requirements change.

Technical review:

A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A technical review is also known as a peer review.

Test condition:

An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, quality attribute, or structural element.

Test management:

The planning, estimating, monitoring, and control of test activities, typically carried out by a test manager.

Test Maturity Model (TMM):

A five-level staged framework for test process improvement, related to the Capability Maturity Model (CMM), that describes the key elements of an effective test process.

Test Process Improvement (TPI):

A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.

Test objective:

A reason or purpose for designing and executing a test.

Test oracle:

A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but should not be the code.

Test phase:

A distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities of a test level.

Test planning:

The activity of establishing or updating a test plan.

Test procedure specification:

A document specifying a sequence of actions for the execution of a test. Also known as a test script or manual test script.

Test repeatability:

An attribute of a test indicating whether the same results are produced each time the test is executed.

Test run:

Execution of a test on a specific version of the test object.

Test summary report:

A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.

Testability:

The capability of the software product to enable modified software to be tested.

Testability review:

A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an input document for the test process.

Testable requirements:

The degree to which a requirement is stated in terms that permit the establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met.

Test Bed:

A testbed is a set of tools, programs, interfaces, and environments needed for the testing of a specific component or system.

Test case design technique:

A test case design technique is an approach to designing test cases for a particular objective.

Test coverage:

The percentage or ratio of all possible code paths through system code that has been exercised by one or more test cases.

Test Estimation:

It’s a management activity that approximates how long a testing task will take to complete and how much it will cost.

Test result:

The result of a test is either positive or negative. A positive result means that the expectation described in the test case was met; a negative result means that it was not met. Test cases whose results are determined to be inconclusive or not applicable are documented as such.

Test scenario:

A test scenario is a document that describes the pre-conditions for executing a test case, as well as the expected results.

Test script:

A test script is a step-by-step document that describes what actions are to be taken and what results should be verified when performing a test or series of tests. Test scripts typically include specific inputs, execution conditions, expected results, and acceptance criteria.

Test Specification:

A document that provides detailed information regarding how to execute one or more test cases for a given product under consideration for testing. Test specification documents typically include information on the scope, environment, and preparation requirements, prerequisites, and steps to follow for each test case.

Test CYCLE:

Test cases are grouped into manageable and schedulable units called test cycles. Grouping is based on how the objectives relate to one another, on timing requirements, and on the best way to expedite defect detection during the testing event. Often test cycles are linked with the execution of a batch process.

Test DATA GENERATOR:

A software package that creates test transactions for testing application systems and programs. The type of transactions that can be generated is dependent upon the options available in the test data generator. With many current generators, the prime advantage is the ability to create a large number of transactions to volume test application systems.

Tester:

A skilled professional involved in the testing of a software component or system.

Test HARNESS:

A test environment consisting of stubs and drivers required to execute a test.

Test DESIGN:

The process of selecting and specifying a set of test cases to meet the testing objectives or coverage criteria.

Test Asset:

Any work product generated during the testing process, such as test plans, test scripts, and test data.

U

Use case:

A type of requirements document in which the requirements are written in the form of sequences that describe how various actors in the system interact with the system.

Unreachable code:

Code that cannot be reached and therefore is impossible to execute.

UML (Unified Modeling Language):

A language used to define and design object-oriented applications; UML is organized around a set of notations, or diagrams, for visualizing and documenting the artifacts produced throughout the software development process.

Unit testing:

A method for testing individual software units, or modules. A series of tests are created to verify each module’s functionality and to determine whether the code meets specific quality standards such as high cohesion and low coupling.
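
A short example using Python’s built-in unittest module; the `is_palindrome` function is a hypothetical unit chosen to keep the sketch self-contained.

```python
import unittest

def is_palindrome(text: str) -> bool:
    """The unit (module/function) under test."""
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]

class IsPalindromeTest(unittest.TestCase):
    def test_simple_palindrome(self):
        self.assertTrue(is_palindrome("Level"))

    def test_sentence_with_punctuation(self):
        self.assertTrue(is_palindrome("A man, a plan, a canal: Panama"))

    def test_non_palindrome(self):
        self.assertFalse(is_palindrome("software"))

if __name__ == "__main__":
    unittest.main()
```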

Usability testing:

Usability testing refers to any type of software testing that determines whether or not the users of a website, application, etc. can do what they want to accomplish quickly and with a minimum amount of effort.

User acceptance testing (UAT):

A phase of testing performed by the end-users of a product to determine whether or not they accept the product’s performance based on what was agreed upon during project planning.

User story:

A user story is a description, written from the perspective of an end-user, of one or more features that will be included in a software product. User stories can vary from one to several sentences and are often created during the requirements analysis phase of the SDLC (software development process life cycle). They may also include one or more acceptance criteria.

User Interface (UI):

The means by which people interact with a system, providing input and output functionalities.

V

V-model:

A sequential software development lifecycle model that describes requirements management, development, and testing on a number of different levels.

Validation:

Tests designed to demonstrate that the developers have built the correct system. In contrast with verification, which means testing that the system has been built correctly.

Verification:

Tests designed to demonstrate that the developers have built the system correctly. In contrast with validation, which means testing that the correct system has been built.

Versioning:

Various methods for uniquely identifying documents and source files, e.g. with a unique version number. Each time the object changes, it should receive a new version number. This method is called version control.

Volume testing:

Testing where the system is subjected to large volumes of data.

Virtualization:

Technology that allows a user to run an application or system without dedicated physical hardware. For example, VMware and Microsoft Virtual PC allow users to run multiple operating systems on a single machine as if they were several separate computers.

vulnerability scanner:

A static analyzer that is used to detect particular security vulnerabilities in the code.

visual testing:

Testing that uses image recognition to interact with GUI objects.

virtual user:

A simulation of activities performed according to a user operational profile.

W

Waterfall model:

A sequential development approach consisting of a series of phases carried out one by one.

White-box testing:

A type of testing in which the tester has knowledge of the internal structure of the test object. White box testers may familiarize themselves with the system by reading the program code, studying the database model, or going through the technical specifications. Contrast with black-box testing. It’s also known as structured-based testing or structural testing.

Walkthrough:

A face-to-face review meeting in which requirements, designs, or code are presented to project team members for planning, or verifying understanding. The meetings can be held periodically (e.g., every two weeks) during development and testing activities.

Workaround:

A workaround is an alternative method that a programmer or user may adopt to bypass a known defect or limitation. Workarounds can be implemented as a temporary solution or may become part of the final product.

Web Content Accessibility Guidelines (WCAG):

A part of a series of web accessibility guidelines published by the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C), the main international standards organization for the internet. They consist of a set of guidelines for making content accessible, primarily for people with disabilities.
