TESTING TERMS DEFINITIONS
Acceptance criteria
The expected results or performance characteristics that define whether the
test case passed or failed.
Acceptance Testing / User Acceptance Testing
An acceptance test is a test that a user/sponsor and manufacturer/producer
jointly perform on a finished, engineered product/system through black-box
testing (i.e., the user or tester need not know anything about the internal
workings of the system). It is often referred to as a functional test, beta
test, QA test, application test, confidence test, final test, or end user test.
Accessibility Testing
Verifying that a product is accessible to people with disabilities (e.g.,
visual, hearing, or cognitive impairments).
Ad-hoc Testing
Testing carried out using no recognised test case design technique. It is also
known as Exploratory Testing.
Agile Testing
Testing practice for projects using agile methodologies, treating development
as the customer of testing and emphasizing a test-first design paradigm.
Alpha Testing
In software development, testing is usually required before release to the
general public. This phase of development is known as the alpha phase. Testing
during this phase is known as alpha testing. In the first phase of alpha
testing, developers test the software using white box techniques. Additional
inspection is then performed using black box or grey box techniques.
Arc Testing / Branch Testing
A test case design technique for a component in which test cases are designed
to execute branch outcomes. A test method satisfying coverage criteria that
require that for each decision point, each possible branch be executed at least
once.
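For illustration, a minimal Python sketch (the classify function and its tests
are invented for this example): one decision point yields two branch outcomes,
and branch coverage requires that a test execute each of them.

    import unittest

    def classify(n):
        # One decision point with two branch outcomes.
        if n < 0:
            return "negative"
        return "non-negative"

    class BranchCoverageTests(unittest.TestCase):
        def test_true_branch(self):
            self.assertEqual(classify(-1), "negative")     # n < 0 branch

        def test_false_branch(self):
            self.assertEqual(classify(0), "non-negative")  # other branch

    if __name__ == "__main__":
        unittest.main()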
AUT
Application Under Test
Authorization Testing
Involves testing the systems responsible for the initiation and maintenance of
user sessions. This requires testing the input validation of login fields,
cookie security, and lockout handling. It is performed to discover whether the
login system can be forced into permitting unauthorised access. The testing
will also reveal whether the system is susceptible to denial of service
attacks using the same techniques.
Back-to-back testing
Testing in which two or more variants of a component or system are executed
with the same inputs, the outputs compared, and any discrepancies analyzed.
Basis Path Testing
A white box test case design technique that uses the algorithmic flow of the
program to design tests.
Benchmark Testing
Tests that use representative sets of programs and data designed to evaluate
the performance of computer hardware and software in a given configuration.
Beta Testing / Field Testing
Once the alpha phase is complete, development enters the beta phase. Versions
of the software, known as beta-versions, are released to a limited audience
outside of the company to ensure that the product has few faults or bugs. Beta
testing is generally constrained to black box techniques although a core of
test engineers are likely to continue with white box testing in parallel to the
beta tests.
Big Bang Testing
Integration testing where no incremental testing takes place prior to all the
system's components being combined to form the system.
Black Box Testing / Functional Testing
Black box testing, concrete box or functional testing is used to check that the
outputs of a program, given certain inputs, conform to the functional
specification of the program. Testing is based on previously understood
requirements (or understood functionality), without knowledge of how the code
executes.
Bottom-up Testing
An approach to integration testing where the lowest level components are tested
first, then used to facilitate the testing of higher level components. The
process is repeated until the component at the top of the hierarchy is tested.
Boundary value analysis/ testing
A test case design technique for a component in which test cases are designed
which include representatives of boundary values. A testing technique using
input values at, just below, and just above, the defined limits of an input
domain; and with input values causing outputs to be at, just below, and just
above, the defined limits of an output domain.
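As a hedged illustration (the accept_age function and its valid range of 18 to
65 are invented for this sketch), boundary value tests target values at and
adjacent to each limit:

    import unittest

    def accept_age(age):
        # Accepts ages in the closed range [18, 65] (assumed specification).
        return 18 <= age <= 65

    class BoundaryValueTests(unittest.TestCase):
        def test_values_at_and_around_the_limits(self):
            # Just below, at, and just above each defined limit.
            cases = [(17, False), (18, True), (19, True),
                     (64, True), (65, True), (66, False)]
            for age, expected in cases:
                self.assertEqual(accept_age(age), expected, msg=f"age={age}")

    if __name__ == "__main__":
        unittest.main()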
Breadth Testing
A test suite that exercises the full functionality of a product but does not
test features in detail.
Bug
Bugs arise from mistakes and errors, made by people, in either a program's
source code or its design, that prevent it from working correctly or produce
an incorrect result.
Business process-based testing
An approach to testing in which test cases are designed based on descriptions
and/or knowledge of business processes.
CAST
Computer Aided Software Testing
Code Coverage
An analysis method that determines which parts of the software have been
executed (covered) by the test case suite and which parts have not been
executed and therefore may require additional attention.
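For example, Python's coverage.py package (a common coverage tool; usage
sketched here, with mymodule as a hypothetical module under test) can record
which statements a test run executes:

    import coverage

    cov = coverage.Coverage()
    cov.start()

    import mymodule                # hypothetical module under test
    mymodule.run_some_tests()      # code executed here is recorded as covered

    cov.stop()
    cov.report(show_missing=True)  # lists the statements never executed

On the command line the same tool is typically driven with coverage run and
coverage report.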
Compatibility Testing
Testing whether the system is compatible with other systems with which it
should communicate.
Component Testing
The testing of individual software components.
Concurrency Testing
Multi-user testing geared towards determining the effects of accessing the same
application code, module or database records. Identifies and measures the level
of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing / Compliance Testing / Standards Testing
Conformance testing or type testing is testing to determine whether a system
meets some specified standard. To aid in this, many test procedures and test
setups have been developed, either by the standard's maintainers or external
organizations, specifically for testing conformance to standards. Conformance
testing is often performed by external organizations, sometimes the standards
body itself, to give greater guarantees of compliance. Products tested in such a
manner are then advertised as being certified by that external organization as
complying with the standard.
Context Driven Testing
The context-driven school of software testing is a flavor of Agile Testing
that advocates continuous and creative evaluation of testing opportunities in
light of the potential information revealed and the value of that information
to the organization right now.
Conversion Testing / Migration Testing
Testing of programs or procedures used to convert data from existing systems for
use in replacement systems.
Coverage Testing
Coverage testing is concerned with the degree to which test cases exercise or
cover the logic (source code) of the software module or unit. It is also a
measure of coverage of code lines, code branches and code branch combinations.
Cyclomatic Complexity
A measure of the logical complexity of an algorithm, used in white-box testing.
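McCabe's formula: for a control-flow graph with E edges, N nodes and P
connected components, V(G) = E - N + 2P; for a single routine this equals the
number of decision points plus one. A brief sketch (the function is invented):

    def total_over_limit(items, limit):
        # Two decision points (for, if), so V(G) = 2 + 1 = 3.
        total = 0
        for item in items:       # decision point 1
            if item > limit:     # decision point 2
                total += item
        return total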
Data flow Testing
Testing in which test cases are designed based on variable usage within the
code.
Data integrity and Database integrity Testing
Data integrity and database integrity test techniques verify that data is being
stored by the system in a manner where the data is not compromised by updating,
restoration, or retrieval processing.
Data-Driven Testing
Testing in which the action of a test case is parameterized by externally
defined data values, maintained in a file or spreadsheet. It is a common
technique in Automated Testing.
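A minimal sketch with pytest's parametrize marker (the add function is
invented; in practice the rows would be read from the external file or
spreadsheet rather than inlined):

    import pytest

    def add(a, b):               # trivial function under test
        return a + b

    DATA_ROWS = [                # stand-in for rows loaded from a data file
        (1, 2, 3),
        (-1, 1, 0),
        (0, 0, 0),
    ]

    @pytest.mark.parametrize("a,b,expected", DATA_ROWS)
    def test_add(a, b, expected):
        assert add(a, b) == expected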
Decision condition testing
A white box test design technique in which test cases are designed to execute
condition outcomes and decision outcomes.
Decision table testing
A black box test design technique in which test cases are designed to execute
the combinations of inputs and/or stimuli (causes) shown in a decision table.
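As a brief illustration (the login rules are invented), each column of the
table below is a rule combining condition values with the expected action, and
one test case is derived per column:

    Conditions            Rule 1   Rule 2   Rule 3   Rule 4
    Valid username?       Y        Y        N        N
    Valid password?       Y        N        Y        N
    Action: grant login   X        -        -        -
    Action: show error    -        X        X        X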
Decision testing
A white box test design technique in which test cases are designed to execute
decision outcomes.
Defect
An anomaly, or flaw, in a delivered work product. Examples include such things
as omissions and imperfections found during early lifecycle phases and symptoms
of faults contained in software sufficiently mature for test or operation. A
defect can be any kind of issue you want tracked and resolved.
Defect density
The number of defects identified in a component or system divided by the size
of the component or system (expressed in standard measurement terms, e.g.
lines of code, number of classes, or function points).
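For example (numbers invented for illustration), a component of 15,000 lines
of code in which 30 defects were identified has a defect density of
30 / 15 = 2 defects per KLOC.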
Dependency Testing
Examines an application's requirements for pre-existing software, initial
states and configuration in order to maintain proper functionality.
Depth Testing
A test that exercises a feature of a product in full detail.
Design based Testing
Designing tests based on objectives derived from the architectural or detailed
design of the software (e.g., tests that execute specific invocation paths or
probe the worst case behaviour of algorithms).
Development testing
Formal or informal testing conducted during the implementation of a component
or system, usually in the development environment by developers.
Documentation Testing
Testing concerned with the accuracy of documentation.
Dynamic Testing
Testing software through executing it.
Efficiency testing
The process of testing to determine the efficiency of a software product.
End-to-end Testing
Test activity aimed at proving the correct implementation of a required
function at a level where the entire hardware/software chain involved in the
execution of the function is available.
Endurance Testing
Checks for memory leaks or other problems that may occur with prolonged
execution.
Equivalence Class
A portion of a component's input or output domains for which the component's
behaviour is assumed to be the same, based on the component's specification.
Equivalence Partitioning / Equivalence partition Testing
A test case design technique for a component in which test cases are designed
to execute representatives from equivalence classes.
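A hedged sketch (the shipping_fee function and its fee schedule are invented):
the input domain falls into three valid equivalence classes plus one invalid
class, and one representative value is tested from each.

    import unittest

    def shipping_fee(weight_kg):
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        if weight_kg < 5:
            return 4.99          # class 1: light parcels
        if weight_kg < 20:
            return 9.99          # class 2: medium parcels
        return 24.99             # class 3: heavy parcels

    class EquivalencePartitionTests(unittest.TestCase):
        def test_one_representative_per_valid_class(self):
            self.assertEqual(shipping_fee(2), 4.99)    # from [0, 5)
            self.assertEqual(shipping_fee(10), 9.99)   # from [5, 20)
            self.assertEqual(shipping_fee(50), 24.99)  # from [20, inf)

        def test_representative_of_invalid_class(self):
            with self.assertRaises(ValueError):
                shipping_fee(0)

    if __name__ == "__main__":
        unittest.main()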
Exhaustive Testing
Testing which covers all combinations of input values and preconditions for an
element of the software under test.
Exploratory Testing
This technique for testing computer software does not require significant
advance planning and is tolerant of limited documentation for the
target-of-test. Instead, the technique relies mainly on the skill and
knowledge of the tester to guide the testing, and uses an active feedback loop
to guide and calibrate the effort. It is also known as ad-hoc testing.
Failure
The inability of a system or component to perform its required functions
within specified performance requirements. A failure is characterized by the
observable symptoms of one or more defects that have a root cause in one or
more faults.
Fault
An accidental condition that causes the failure of a component in the
implementation model to perform its required behavior. A fault is the root
cause of one or more defects identified by observing one or more failures.
Fuzz Testing
Fuzz testing is a software testing technique. The basic idea is to attach the
inputs of a program to a source of random data. If the program fails (for example,
by crashing, or by failing built-in code assertions), then there are defects to
correct. The great advantage of fuzz testing is that the test design is
extremely simple, and free of preconceptions about system behavior.
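A minimal sketch of the idea, assuming a hypothetical parse_record function
that should survive arbitrary byte input:

    import random

    def parse_record(data):
        # Stand-in target: decode a length-prefixed record.
        length = data[0] if data else 0
        return data[1:1 + length].decode("utf-8", errors="replace")

    def fuzz(target, iterations=10000, max_len=64, seed=0):
        rng = random.Random(seed)        # seeded so failures are reproducible
        for i in range(iterations):
            blob = bytes(rng.randrange(256)
                         for _ in range(rng.randrange(max_len)))
            try:
                target(blob)             # any uncaught exception is a defect
            except Exception as exc:
                print(f"iteration {i}: input {blob!r} raised {exc!r}")

    if __name__ == "__main__":
        fuzz(parse_record)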
Gamma Testing
Gamma testing is a little-known informal phrase that refers derisively to the
release of "buggy" (defect-ridden) products. It is not a term of art
among testers, but rather an example of referential humor. Cynics have referred
to all software releases as "gamma testing" since defects are found
in almost all commercial, commodity and publicly available software eventually.
Gorilla Testing
Testing one particular module or piece of functionality heavily.
Grey Box Testing
The typical grey box tester is permitted to set up or manipulate the testing
environment, like seeding a database, and can view the state of the product
after their actions, like performing a SQL query on the database to be certain
of the values of columns. The term applies almost exclusively to client-server
testers or others who use a database as a repository of information, who have
to manipulate XML files (a DTD or an actual XML file) or configuration files
directly, or who know the internal workings or algorithm of the software under
test and can write tests specifically for the anticipated results.
GUI Testing
GUI testing is the process of testing a graphical user interface to ensure it
meets its written specifications.
Heuristic evaluations
Heuristic evaluation is one of the most informal methods of usability
inspection in the field of human-computer interaction. It helps identify
usability problems in a user interface (UI) design. It specifically involves
evaluators examining the interface and judging its compliance with recognized
usability principles (the "heuristics").
High Order Tests
Black-box tests conducted once the software has been integrated.
Incremental Testing
"Integration testing where system components are integrated into the
system one at a time until the entire system is integrated.
Installation Testing
Installation testing can simply be defined as any testing that occurs outside
of the development environment. Such testing will frequently occur on the
computer system the software product will eventually be installed on. While the
ideal installation might simply appear to be to run a setup program, the
generation of that setup program itself and its efficacy in a variety of
machine and operating system environments can require extensive testing before
it can be used with confidence.
Integration Testing
Integration testing is the phase of software testing in which individual
software modules are combined and tested as a group. It follows unit testing
and precedes system testing.
Interface Testing
Testing conducted to evaluate whether systems or components pass data and
control correctly to each other.
Interoperability testing
The process of testing to determine the interoperability of a software product.
Invalid testing
Testing using input values that should be rejected by the component or system.
Isolation Testing
Component testing of individual components in isolation from surrounding
components, with surrounding components being simulated by stubs.
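A hedged sketch using Python's unittest.mock (OrderService and its payment
gateway are invented): the component is tested in isolation, with the
surrounding component simulated by a stub.

    from unittest.mock import Mock
    import unittest

    class OrderService:
        def __init__(self, gateway):
            self.gateway = gateway

        def place_order(self, amount):
            # Delegates to the surrounding component we stub out in the test.
            return "OK" if self.gateway.charge(amount) else "DECLINED"

    class IsolationTest(unittest.TestCase):
        def test_order_with_stubbed_gateway(self):
            stub_gateway = Mock()
            stub_gateway.charge.return_value = True   # canned stub response
            service = OrderService(stub_gateway)
            self.assertEqual(service.place_order(25), "OK")
            stub_gateway.charge.assert_called_once_with(25)

    if __name__ == "__main__":
        unittest.main()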
Keyword driven Testing
A scripting technique that uses data files to contain not only test data and
expected results, but also keywords related to the application being tested.
The keywords are interpreted by special supporting scripts that are called by
the control script for the test.
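A minimal sketch of the mechanism (the keywords and application functions are
invented; the table would normally live in an external data file):

    def open_app():
        print("app opened")

    def enter_text(field, value):
        print(f"{field} <- {value}")

    def click(button):
        print(f"clicked {button}")

    KEYWORDS = {"open_app": open_app, "enter_text": enter_text, "click": click}

    TEST_TABLE = [                       # stand-in for spreadsheet rows
        ("open_app",),
        ("enter_text", "username", "alice"),
        ("enter_text", "password", "secret"),
        ("click", "login"),
    ]

    def run(table):
        # The control script interprets each keyword via its support function.
        for keyword, *args in table:
            KEYWORDS[keyword](*args)

    if __name__ == "__main__":
        run(TEST_TABLE)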
Load Testing
Load testing is the act of testing a system under load. It generally refers to
the practice of modeling the expected usage of a software program by simulating
multiple users accessing the program's services concurrently. This testing is
most relevant for multi-user systems, often ones built using a client/server
model, such as web servers.
Localization Testing
Testing of software that has been adapted (localized) for a specific locality
or locale.
Logic coverage Testing / Logic driven Testing / Structural test case design
Test case selection that is based on an analysis of the internal structure of
the component. Also known as white-box testing.
Loop Testing
A white box testing technique that exercises program loops.
Maintainability Testing / Serviceability Testing
Testing whether the system meets its specified objectives for maintainability.
Maintenance testing
Testing the changes to an operational system or the impact of a changed
environment to an operational system.
Model Based Testing
Model-based testing refers to software testing where test cases are derived in
whole or in part from a model that describes some (usually functional) aspects
of the system under test.
Monkey Testing
Testing a system or an application on the fly, i.e., just a few tests here and
there to ensure the system or application does not crash.
Mutation testing
A testing methodology in which two or more program mutations are executed using
the same test cases to evaluate the ability of the test cases to detect
differences in the mutations.
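A hedged sketch (the eligibility rule is invented): a mutant is generated by
changing one operator, and a boundary-aware test case distinguishes ("kills")
it.

    def eligible(age):              # original program
        return age >= 65

    def eligible_mutant(age):       # mutant: >= replaced by >
        return age > 65

    def suite_passes(fn):
        # The test at the boundary value 65 is what kills the mutant.
        return fn(65) is True and fn(64) is False

    if __name__ == "__main__":
        print("original passes suite:", suite_passes(eligible))     # True
        print("mutant killed:", not suite_passes(eligible_mutant))  # True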
N+1 Testing
A variation of Regression Testing. Testing conducted with multiple cycles in
which errors found in test cycle N are resolved and the solution is retested
in test cycle N+1. The cycles are typically repeated until the solution
reaches a steady state and there are no errors.
Negative Testing / Dirty Testing
Testing aimed at showing software does not work.
Operational Testing
Testing conducted to evaluate a system or component in its operational
environment.
Pair testing
Two testers work together to find defects. Typically, they share one computer
and trade control of it while testing.
Parallel Testing
The process of feeding test data into two systems, the modified system and an
alternative system (possibly the original system) and comparing results.
Path coverage
Metric applied to all path-testing strategies: in a hierarchy by path length,
where length is measured by the number of graph links traversed by the path or
path segment; e.g. coverage with respect to path segments two links long, three
links long, etc. Unqualified, this term usually means coverage with respect to
the set of entry/exit paths. It is often used erroneously as a synonym for
statement coverage.
Path Testing
Testing in which all paths in the program source code are tested at least once.
Penetration Testing
The portion of security testing in which the evaluators attempt to circumvent
the security features of a system
Performance Testing
Performance testing is performed to determine how fast some aspect of a system
performs under a particular workload. Performance testing can serve different
purposes. It can demonstrate that the system meets performance criteria. It
can compare two systems to find which performs better. Or it can measure what
parts of the system or workload cause the system to perform badly.
Playtest
A playtest is the process by which a game designer tests a new game for bugs
and improvements before bringing it to market.
Portability Testing
Testing aimed at demonstrating the software can be ported to specified hardware
or software platforms.
Post-conditions
Cleanup steps after the test case is run, to bring it back to a known state.
Precondition
Dependencies that are required for the test case to run.
Progressive Testing
Testing of new features after regression testing of previous features.
Quality Control
Quality control and quality engineering are involved in developing systems to
ensure products or services are designed and produced to meet or exceed
customer requirements and expectations.
Ramp Testing
Continuously raising an input signal until the system breaks down.
Random Testing
Testing a program or part of a program using test data that has been chosen at
random.
Recovery Testing
Confirms that the program recovers from expected or unexpected events without
loss of data or functionality. Events can include shortage of disk space,
unexpected loss of communication, or power-out conditions.
Regression Testing
Regression testing is any type of software testing which seeks to uncover bugs
that occur whenever software functionality that previously worked as desired
stops working or no longer works in the same way that was previously planned.
Release Candidate
A pre-release version, which contains the desired functionality of the final
version, but which needs to be tested for bugs.
Reliability Testing
Testing to determine whether the system/software meets the specified
reliability requirements.
Requirements based Testing
Designing tests based on objectives derived from requirements for the software
component.
Resource utilization testing
The process of testing to determine the resource utilization of a software
product.
Risk-based testing
Testing oriented towards exploring and providing information about product
risks.
Sanity Testing
Brief test of major functional elements of a piece of software to determine if
it is basically operational.
Scalability Testing
Performance testing focused on ensuring the application under test gracefully
handles increases in workload.
Scenario Testing
A scenario test is a test based on a hypothetical story used to help a person
think through a complex problem or system. They can be as simple as a diagram
for a testing environment or they could be a description written in prose.
Security Testing
Tests focused on ensuring the target-of-test data (or systems) are accessible
only to those actors for which they are intended.
Session-based Testing
Session-based testing is ideal when formal requirements are not present,
incomplete, or changing rapidly. It can be used to introduce measurement and
control to an immature test process, and can form a foundation for significant
improvements in productivity and error detection. It is closely related to
exploratory testing: a controlled and improved form of ad-hoc testing that
uses the knowledge gained as a basis for ongoing, sustained product
improvement.
Simulator
A device, computer program or system used during testing, which behaves or
operates like a given system when provided with a set of controlled inputs.
Smart testing
Tests that, based on theory or experience, are expected to have a high
probability of detecting specified classes of bugs; tests aimed at specific
bug types.
Smoke Testing
A sub-set of the black box test is the smoke test. A smoke test is a cursory
examination of all of the basic components of a software system to ensure that
they work. Typically, smoke testing is conducted immediately after a software
build is made. The term comes from electrical engineering, where in order to
test electronic equipment, power is applied and the tester ensures that the
product does not spark or smoke.
Soak Testing
Running a system at high load for a prolonged period of time. For example,
running several times more transactions in an entire day (or night) than would
be expected in a busy day, to identify any performance problems that appear
after a large number of transactions have been executed.
Soap-opera testing
A technique for defining test scenarios by reasoning about dramatic and
exaggerated usage scenarios. When defined in collaboration with experienced
users, soap operas help to test many functional aspects of a system quickly
and, because they are not related directly to either the system's formal
specifications or its features, they have a high rate of success in revealing
important yet often unanticipated problems.
Software Testing
Software testing is a process used to identify the correctness, completeness
and quality of developed computer software. Actually, testing can never
establish the correctness of computer software, as this can only be done by
formal verification (and only when there is no mistake in the formal
verification process). It can only find defects, not prove that there are none.
Stability Testing
Stability testing is an attempt to determine if an application will crash.
State Transition Testing
A test case design technique in which test cases are designed to execute state
transitions.
Statement Testing
Testing designed to execute each statement of a computer program.
Static Testing
Analysis of a program carried out without executing the program.
Statistical Testing
A test case design technique in which a model of the statistical distribution
of the input is used to construct representative test cases.
Storage Testing
Testing whether the system meets its specified storage objectives.
Stress Testing
Stress testing is a form of testing that is used to determine the stability of
a given system or entity. It involves testing beyond normal operational
capacity, often to a breaking point, in order to observe the results. Stress
testing is a subset of load testing.
Structural Testing
White box testing, glass box testing or structural testing is used to check
that the outputs of a program, given certain inputs, conform to the structural
specification of the program.
SUT
System Under Test
Syntax Testing
A test case design technique for a component or system in which test case
design is based upon the syntax of the input.
System Testing
System testing is testing conducted on a complete, integrated system to
evaluate the system's compliance with its specified requirements. System
testing falls within the scope of Black box testing.
Technical Requirements Testing
Testing of those requirements that do not relate to functionality, e.g.
performance, usability, etc.
Test Approach
The implementation of the test strategy for a specific project. It typically
includes the decisions made based on the (test) project's goal and the risk
assessment carried out: starting points regarding the test process, the test
design techniques to be applied, exit criteria, and the test types to be
performed.
Test Automation
Test automation is the use of software to control the execution of tests, the
comparison of actual outcomes to predicted outcomes, the setting up of test
preconditions, and other test control and test reporting functions.
Test Bed
An execution environment configured for testing. May consist of specific
hardware, OS, network topology, configuration of the product under test, other
application or system software, etc. Same as Test Environment.
Test Case
The specification (usually formal) of a set of test inputs, execution
conditions, and expected results, identified for the purpose of making an
evaluation of some particular aspect of a Target Test Item.
Test Cycle
A formal test cycle consists of all tests performed. In software development,
it can consist of, for example, the following tests: unit/component testing,
integration testing, system testing, user acceptance testing and the code
inspection.
Test Data
The definition (usually formal) of a collection of test input values that are
consumed during the execution of a test, and expected results referenced for
comparative purposes.
Test Driven Development
Test-driven development (TDD) is a Computer programming technique that involves
writing tests first and then implementing the code to make them pass. The goal
of test-driven development is to achieve rapid feedback; it implements the
"illustrate the main line" approach to constructing a program. This technique
is heavily emphasized in Extreme Programming.
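A hedged sketch of the red-green rhythm for an invented slugify helper: each
test is written first and fails, then just enough code is added to make it
pass.

    import unittest

    def slugify(title):
        # Written incrementally, driven by the tests below.
        return title.strip().lower().replace(" ", "-")

    class SlugifyTest(unittest.TestCase):
        def test_spaces_become_hyphens(self):
            # Written first; failed (red) until slugify existed.
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_surrounding_whitespace_is_trimmed(self):
            # The next failing test drove the next small change (green).
            self.assertEqual(slugify("  Hi  "), "hi")

    if __name__ == "__main__":
        unittest.main()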
Test Driver
A program or test tool used to execute a test. Also known as a Test Harness.
Test Environment
The hardware and software environment in which tests will be run, and any other
software with which the software under test interacts when under test including
stubs and test drivers.
Test Harness
In software testing, a test harness is a collection of software tools and test
data configured to test a program unit by running it under varying conditions
and monitoring its behavior and outputs.
Test Idea
A brief statement identifying a test that is potentially useful to conduct. The
test idea typically represents an aspect of a given test: an input, an execution
condition or an expected result, but often only addresses a single aspect of a
test.
Test Log
A collection of raw output captured during a unique execution of one or more
tests, usually representing the output resulting from the execution of a Test
Suite for a single test cycle run.
Test Plan
A document describing the scope, approach, resources, and schedule of intended
testing activities. It identifies test items, the features to be tested, the
testing tasks, who will do each task, and any risks requiring contingency
planning.
Test Procedure
The procedural aspect of a given test, usually a set of detailed instructions
for the setup and step-by-step execution of one or more given test cases. The
test procedure is captured in both test scenarios and test scripts.
Test Report
A document that summarizes the outcome of testing in terms of items tested,
summary of results, effectiveness of testing, and lessons learned.
Test Scenario
A sequence of actions (execution conditions) that identifies behaviors of
interest in the context of test execution.
Test Script
A collection of step-by-step instructions that realize a test, enabling its
execution. Test scripts may take the form of either documented textual
instructions that are executed manually or computer readable instructions that
enable automated test execution.
Test Specification
A document specifying the test approach for a software feature or combination
of features and the inputs, predicted results and execution conditions for the
associated tests.
Test Strategy
Defines the strategic plan for how the test effort will be conducted against
one or more aspects of the target system.
Test Suite
A package-like artifact used to group collections of test scripts, both to
sequence the execution of the tests and to provide a useful and related set of
Test Log information from which Test Results can be determined.
Test Tools
Computer programs used in the testing of a system, a component of the system,
or its documentation.
Testability
The degree to which a system or component facilitates the establishment of
test criteria and the performance of tests to determine whether those criteria
have been met.
Thread Testing
A variation of top-down testing where the progressive integration of components
follows the implementation of subsets of the requirements, as opposed to the
integration of components by successively lower levels.
Top-down testing
An incremental approach to integration testing where the component at the top
of the component hierarchy is tested first, with lower level components being
simulated by stubs. Tested components are then used to test lower level
components. The process is repeated until the lowest level components have been
tested.
Traceability Matrix
A document showing the relationship between Test Requirements and Test Cases.
Unit Testing
A unit test is a procedure used to verify that a particular module of source
code is working properly.
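A minimal, self-contained sketch (the is_leap_year function is invented for
illustration):

    import unittest

    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    class LeapYearTest(unittest.TestCase):
        def test_known_values(self):
            self.assertTrue(is_leap_year(2000))    # divisible by 400
            self.assertFalse(is_leap_year(1900))   # century year, not by 400
            self.assertTrue(is_leap_year(2024))
            self.assertFalse(is_leap_year(2023))

    if __name__ == "__main__":
        unittest.main()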
Usability Testing
Usability testing is a means for measuring how well people can use some
human-made object (such as a web page, a computer interface, a document, or a
device) for its intended purpose, i.e. usability testing measures the usability
of the object. If usability testing uncovers difficulties, such as people
having difficulty understanding instructions, manipulating parts, or
interpreting feedback, then developers should improve the design and test it
again.
Use case testing
A black box test design technique in which test cases are designed to execute
user scenarios.
Validation
The word validation has several related meanings. In general, validation is
the process of checking whether something satisfies a certain criterion.
Examples would be: checking if a statement is true, if an appliance works as
intended, if a computer system is secure, or if computer data is compliant
with an open standard. This should not be confused with verification.
Verification
In the context of hardware and software systems, formal verification is the
act of proving or disproving the correctness of a system with respect to a
certain formal specification or property, using formal methods.
Volume Testing
Testing which confirms that any values that may become large over time (such as
accumulated counts, logs, and data files), can be accommodated by the program
and will not cause the program to stop working or degrade its operation in any
manner.
White Box testing / Glass box Testing
White box testing, glass box testing or structural testing is used to check
that the outputs of a program, given certain inputs, conform to the structural
specification of the program. It uses information about the structure of the
program to check that it behaves as intended.
Workflow Testing
Scripted end-to-end testing which duplicates specific workflows which are
expected to be utilized by the end-user.
What is 'Software Quality Assurance'?
Software QA involves the entire software development PROCESS - monitoring and
improving the process, making sure that any agreed-upon standards and
procedures are followed, and ensuring that problems are found and dealt with.
What is 'Software Testing'?
Testing involves operation of a system or application under controlled
conditions and evaluating the results. Testing should intentionally attempt to
make things go wrong to determine if things happen when they shouldn't or
things don't happen when they should.
Does every software project need testers?
It depends on the size and context of the project, the risks, the development
methodology, the skill and experience of the developers. If the project is a
short-term, small, low risk project, with highly experienced programmers
utilizing thorough unit testing or test-first development, then test engineers
may not be required for the project to succeed. For non-trivial-size projects
or projects with non-trivial risks, a testing staff is usually necessary. The
use of personnel with specialized skills enhances an organization's ability to
be successful in large, complex, or difficult tasks. It allows for both a)
deeper and stronger skills and b) the contribution of differing perspectives.
What is Regression testing?
Retesting of a previously tested program following modification to ensure that
faults have not been introduced or uncovered as a result of the changes made.
Why does software have bugs?
Some of the reasons are:
Miscommunication or no communication.
Programming errors
Changing requirements
Time pressures
How can new Software QA processes be introduced in an existing organization?
It depends on the size of the organization and the risks involved. For small
groups or projects, a more ad-hoc process may be appropriate, depending on the
type of customers and projects. New processes can also be introduced through
incremental, self-managed team approaches.
What is verification? Validation?
Verification typically involves reviews and meetings to evaluate documents,
plans, code, requirements, and specifications. This can be done with
checklists, issues lists, walkthroughs, and inspection meetings. Validation
typically involves actual testing and takes place after verifications are
completed.
What is a 'walkthrough'? What's an 'inspection'?
A 'walkthrough' is an informal meeting for evaluation or informational
purposes. Little or no preparation is usually required. An inspection is more
formalized than a 'walkthrough', typically with 3-8 people including a
moderator, reader, and a recorder to take notes. The subject of the inspection
is typically a document such as a requirements spec or a test plan, and the
purpose is to find problems and see what's missing, not to fix anything.
What kinds of testing should be considered?
Some of the basic kinds of testing include:
Blackbox testing, Whitebox testing, Integration testing, Functional testing,
smoke testing, Acceptance testing, Load testing, Performance testing, User
acceptance testing.
What are 5 common problems in the software development process?
Poor requirements
Unrealistic Schedule
Inadequate testing
Changing requirements
Miscommunication
What are 5 common solutions to software development problems?
Solid requirements
Realistic Schedule
Adequate testing
Clarity of requirements
Good communication among the Project team
What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget,
meets requirements and/or expectations, and is maintainable.
What are some recent major computer system failures caused by software bugs?
Trading on a major Asian stock exchange was brought to a halt in November of
2005, reportedly due to an error in a system software upgrade. A May 2005
newspaper article reported that a major hybrid car manufacturer had to install
a software fix on 20,000 vehicles due to problems with invalid engine warning
lights and occasional stalling. Media reports in January of 2005 detailed
severe problems with a $170 million high-profile U.S. government IT systems
project. Software testing was one of the five major problem areas according to
a report of the commission reviewing the project.
What is 'good code'? What is 'good design'?
'Good code' is code that works, is bug free, and is readable and maintainable.
Good internal design is indicated by software code whose overall structure is
clear, understandable, easily modifiable, and maintainable; is robust with
sufficient error-handling and status logging capability; and works correctly
when implemented. Good functional design is indicated by an application whose
functionality can be traced back to customer and end-user requirements.
What is SEI? CMM? CMMI? ISO? Will it help?
These are process maturity models and standards that gauge effectiveness in
delivering quality software. They help organizations identify best practices
useful in increasing the maturity of their processes.
What steps are needed to develop and run software tests?
Obtain requirements, functional design, and internal design
specifications and other necessary documents.
Obtain budget and schedule requirements.
Determine Project context.
Identify risks.
Determine testing approaches, methods, test environment,
test data.
Set schedules and prepare testing documents.
Perform tests.
Perform reviews and evaluations.
Maintain and update documents.
What's a 'test plan'? What's a 'test case'?
A software project test plan is a document that describes the objectives,
scope, approach, and focus of a software testing effort. A test case is a
document that describes an input, action, or event and an expected response, to
determine if a feature of an application is working correctly.
What should be done after a bug is found?
The bug needs to be communicated and assigned to developers who can fix it.
After the problem is resolved, fixes should be re-tested, and determinations
made regarding requirements for regression testing to check that the fixes
didn't create problems elsewhere.
Will automated testing tools make testing easier?
It depends on the Project size. For small projects, the time needed to learn
and implement them may not be worth it unless personnel are already familiar
with the tools. For larger projects, or on-going long-term projects they can be
valuable.
What's the best way to choose a test automation tool?
Some of the points to consider before choosing a tool:
Analyze the non-automated testing situation to determine the testing activity
that is being performed.
Identify testing procedures that are time-consuming and repetitive.
Weigh the cost/budget of the tool, and training and implementation factors.
Evaluate the chosen tool to explore the benefits.
How can it be determined if a test environment is appropriate?
The test environment should match, as closely as possible, the hardware,
software, network, data, and usage characteristics of the expected live
environments in which the software will be used.
What's the best approach to software test estimation?
The 'best approach' is highly dependent on the particular organization and
project and the experience of the personnel involved.
Approaches to consider include:
Implicit Risk Context Approach
Metrics-Based Approach
Test Work Breakdown Approach
Iterative Approach
Percentage-of-Development Approach
What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of
reporting whatever bugs or blocking-type problems initially show up, with the
focus being on critical bugs.
How can it be known when to stop testing?
Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a
specified point
Bug rate falls below a certain level
Beta or alpha testing period ends
What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be
focused.
Determine the important functionalities to be tested.
Determine the high risk aspects of the project.
Prioritize the kinds of testing that need to be performed.
Determine the tests that will have the best
high-risk-coverage to time-required ratio.
What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. The tester
might then do ad hoc testing, or write up a limited test plan based on the risk
analysis.
How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple
dependencies among clients, data communications, hardware, and servers,
especially in multi-tier systems. Load/stress/performance testing may be useful
in determining client/server application limitations and capabilities.
How can World Wide Web sites be tested?
Some of the considerations might include:
Testing the expected loads on the server.
Testing the performance expected on the client side.
Testing that the required security measures are implemented and verified.
Testing HTML specification compliance, external and internal links, CGI
programs, applets, JavaScript, ActiveX components, etc., and ensuring they
are maintained, tracked, and controlled.
How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to
internal design to functional design to requirements. If the application was
well-designed this can simplify test design.
What is Extreme Programming and what's it got to do with
testing?
Extreme Programming (XP) is a software development approach for small teams on
risk-prone projects with unstable requirements. For testing ('extreme testing'),
programmers are expected to write unit and functional test code first - before
writing the application code. Customers are expected to be an integral part of
the project team and to help develop scenarios for acceptance/black box
testing.
What makes a good Software Test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the
point of view of the customer, a strong desire for quality, and an attention to
detail. Tact and diplomacy are useful in maintaining a cooperative relationship
with developers, and an ability to communicate with both technical (developers)
and non-technical (customers, management) people is useful.
What makes a good Software QA engineer?
They must be able to understand the entire software development process and how
it can fit into the business approach and goals of the organization.
Communication skills and the ability to understand various sides of issues are
important. In organizations in the early stages of implementing QA processes,
patience and diplomacy are especially needed. An ability to find problems as
well as to see 'what's missing' is important for inspections and reviews.
What's the role of documentation in QA?
QA practices should be documented such that they are repeatable. Specifications,
designs, business rules, inspection reports, configurations, code changes, test
plans, test cases, bug reports, user manuals, etc. should all be documented.
Change management for documentation should be used.
What is a test strategy? What is the purpose of a test strategy?
It is a plan for conducting the test effort against one or more aspects of the
target system.
A test strategy needs to be able to convince management and other stakeholders
that the approach is sound and achievable, and it also needs to be appropriate
both in terms of the software product to be tested and the skills of the test
team.
What information does a test strategy capture?
It captures an explanation of the general approach that will be used and the
specific types, techniques, and styles of testing.
What is test data?
It is a collection of test input values that are consumed during the execution
of a test, and expected results referenced for comparative purposes during the
execution of a test.
What is Unit testing?
It is implemented against the smallest testable element (units) of the
software, and involves testing the internal structure such as logic and
dataflow, and the unit's function and observable behaviors.
How can the test results be used in testing?
Test Results are used to record the detailed findings of the test effort and to
subsequently calculate the different key measures of testing.
What is Developer testing?
Developer testing denotes the aspects of test design and implementation most
appropriate for the team of developers to undertake.
What is independent testing?
Independent testing denotes the test design and implementation most
appropriately performed by someone who is independent from the team of
developers.
What is Integration testing?
Integration testing is performed to ensure that the components in the
implementation model operate properly when combined to execute a use case.
What is System testing?
A series of tests designed to ensure that the modified program interacts
correctly with other system components. These test procedures typically are
performed by the system maintenance staff in their development library.
What is Acceptance testing?
User acceptance testing is the final test action taken before deploying the
software. The goal of acceptance testing is to verify that the software is
ready, and that it can be used by end users to perform those functions and
tasks for which the software was built.
What is the role of a Test Manager?
The Test Manager role is tasked with the overall responsibility for the test
effort's success. The role involves quality and test advocacy, resource
planning and management, and resolution of issues that impede the test effort.
What is the role of a Test Analyst?
The Test Analyst role is responsible for identifying and defining the required
tests, monitoring detailed testing progress and results in each test cycle and
evaluating the overall quality experienced as a result of testing activities.
The role typically carries the responsibility for appropriately representing
the needs of stakeholders that do not have direct or regular representation on
the project.
What is the role of a Test Designer?
The Test Designer role is responsible for defining the test approach and
ensuring its successful implementation. The role involves identifying the
appropriate techniques, tools and guidelines to implement the required tests,
and giving guidance on the corresponding resource requirements for the test
effort.
What are the roles and responsibilities of a Tester?
The Tester role is responsible for the core activities of the test effort,
which involves conducting the necessary tests and logging the outcomes of that
testing. The tester is responsible for identifying the most appropriate
implementation approach for a given test, implementing individual tests,
setting up and executing the tests, logging outcomes and verifying test
execution, analyzing and recovering from execution errors.
What are the skills required to be a good tester?
A tester should have knowledge of testing approaches and techniques, diagnostic
and problem-solving skills, knowledge of the system or application being
tested, and knowledge of networking and system architecture.
What is test coverage?
Test coverage is the measurement of testing completeness, and it's based on the
coverage of testing expressed by the coverage of test requirements and test
cases or by the coverage of executed code.
What is a test script?
The step-by-step instructions that realize a test, enabling its execution. Test
Scripts may take the form of either documented textual instructions that are
executed manually or computer readable instructions that enable automated test
execution.