Test management is the term given to the process of managing the resources, materials and artifacts involved in testing a product or system under development. Good test management depends on implementing and following a reliable, well thought out process. With an effective test management process in place, a development team can be confident in delivering high quality product releases to clients.
There are a number of core principles associated with managing test cases. They come down to the following major test management principles:
1. Tracking
details about the product
2. Developing a repository of reusable test cases
3. Grouping test cases in some way to generate runs
4. Dividing the testing up into logical parts
5. Recording outcomes against a run
Tracking details of the product or system under test means recording aspects of your system such as the requirements it is intended to meet, the components that make up the system and the different versions of the system created. In tracking these aspects the overall aim is to build up a picture of the requirements covered, the components of the system covered and the versions of the system that the test cases were executed against.
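As a minimal illustration, the sketch below models these tracked details as hypothetical Python dataclasses; the names (Requirement, Component, ProductVersion, TestCase) are assumptions for illustration, not any particular tool's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str          # e.g. "REQ-101"
    description: str

@dataclass
class Component:
    name: str            # e.g. "login-service"

@dataclass
class ProductVersion:
    label: str           # e.g. "2.3.0"

@dataclass
class TestCase:
    case_id: str
    title: str
    requirements: list[Requirement] = field(default_factory=list)
    components: list[Component] = field(default_factory=list)

# A test case linked to what it covers, so results can later be
# reported against requirements, components and versions.
login_req = Requirement("REQ-101", "Users can log in with valid credentials")
login_case = TestCase("TC-001", "Valid login",
                      requirements=[login_req],
                      components=[Component("login-service")])
```

The point of the structure is simply that every case carries links to the requirements and components it covers, so coverage can be reported later without re-examining the case itself.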
While many products can be tracked simply in terms of a single version, complexities can arise here. For example, where the end product is a collection of sub products it may be necessary to track the versions of all the sub products. In this situation the test management process needs to define how results are logged against the versions of these numerous sub products. The simplest approach is usually to have a single overall version that then references the versions of each sub product. Although this tracking of version numbers is significant to the test management process, it really depends on a high-quality configuration management process.
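One way to picture this, purely as a sketch, is an overall release version that references the versions of its sub products; the structure and names below are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class SubProductVersion:
    product: str
    version: str

@dataclass
class ReleaseVersion:
    # Single overall version that results are logged against...
    label: str
    # ...which in turn references the exact sub product versions it was built from.
    sub_versions: list[SubProductVersion] = field(default_factory=list)

release_2_3 = ReleaseVersion(
    label="2.3.0",
    sub_versions=[
        SubProductVersion("billing-service", "1.4.2"),
        SubProductVersion("web-frontend", "5.0.1"),
    ],
)
```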
In tracking the requirements that the tests cover you can build up a requirements traceability matrix that lets you see which requirements have failed results logged against them and which requirements are fully tested before a release. The same goes for tracking against the components of a product, in that you can see which components have passed or failed test cases logged against them. The rationale behind tracking versions and/or builds is so that individual results can be logged against a precise version of the product being tested; clearly, different versions of the product may pass or fail different tests when they are executed.
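A requirements traceability matrix can be derived directly from logged results. The sketch below, with assumed field names and sample data, groups results by requirement to show whether each is fully tested, has failures, or has not yet been fully executed.

```python
from collections import defaultdict

# Hypothetical logged results: (requirement id, test case id, outcome).
results = [
    ("REQ-101", "TC-001", "pass"),
    ("REQ-101", "TC-002", "fail"),
    ("REQ-102", "TC-003", "pass"),
    ("REQ-103", "TC-004", None),     # not yet executed
]

matrix = defaultdict(list)
for req_id, case_id, outcome in results:
    matrix[req_id].append((case_id, outcome))

# Summarise coverage per requirement before a release.
for req_id, cases in matrix.items():
    outcomes = [outcome for _, outcome in cases]
    if any(o == "fail" for o in outcomes):
        status = "has failures"
    elif all(o == "pass" for o in outcomes):
        status = "fully tested"
    else:
        status = "not fully executed"
    print(f"{req_id}: {status} ({len(cases)} case(s))")
```

The same grouping applied to component ids instead of requirement ids gives the component view described above.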
In building a repository of test cases the goal of the test management process is to allow tests to be reused on a scheduled basis against different versions of the system. In fact this ability to reuse cases is the feature of good test management that allows testers to run an efficient and effective test process. Being able to identify cases for reuse against different versions of the system meets the need for comprehensive regression tests to be run against every version of the system.
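To illustrate the reuse, a small sketch (again with hypothetical names and data) selects tagged regression cases from a repository and plans the same cases against each version of the system.

```python
# Hypothetical repository of reusable cases, tagged for selection.
repository = {
    "TC-001": {"title": "Valid login", "tags": {"regression", "auth"}},
    "TC-002": {"title": "Password reset", "tags": {"auth"}},
    "TC-003": {"title": "Checkout total", "tags": {"regression", "billing"}},
}

def select_cases(tag):
    """Return the ids of all cases in the repository carrying the given tag."""
    return sorted(cid for cid, case in repository.items() if tag in case["tags"])

# The same regression cases are reused against every version of the system.
for version in ["2.2.0", "2.3.0"]:
    planned_run = {"version": version, "cases": select_cases("regression")}
    print(planned_run)
```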
With a repository of test cases created it is common for these cases to be grouped into logical sets so that the group can be executed in one go. This grouping may be based on similar types of tests, a range of disparate tests in the case of a regression run, or tests aimed at covering a specific requirement or component of the product. In testing these groups may be referred to as a suite, a script or a run. Terminology differs, but the end result is the same: a group of related cases that are expected to be run together.
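As a sketch of this grouping, the snippet below (an assumed structure, not a specific tool's model) builds named runs from case ids so a whole group can be executed in one go.

```python
from dataclasses import dataclass, field

@dataclass
class TestRun:
    name: str                                              # e.g. "Regression - release 2.3.0"
    case_ids: list[str] = field(default_factory=list)
    results: dict[str, str] = field(default_factory=dict)  # case id -> outcome

# Group cases by purpose: a regression suite and a requirement-focused run.
regression_run = TestRun("Regression - release 2.3.0", ["TC-001", "TC-003"])
login_run = TestRun("REQ-101 login coverage", ["TC-001", "TC-002"])

for run in (regression_run, login_run):
    print(f"{run.name}: {len(run.case_ids)} case(s) planned")
```

Note that the same case id can appear in more than one run; the cases stay in the repository and the runs simply reference them.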
To support the process, testing is usually divided up into logical areas. For example, functional, non-functional (e.g. usability), performance and load testing are all common names given to different types of testing. Separating the cases in this way helps to organize the test management process and helps with aspects like reporting and allocation. A particular category, say performance, may be given to one team lead to handle, and each category can then be reported on separately. This allows anyone interested in the test management process to view the status of each area of testing, and from this status information resources can be allocated as required to the different team leads.
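A per-category status report might look like the following sketch, which simply tallies outcomes by category from hypothetical result records.

```python
from collections import Counter, defaultdict

# Hypothetical results: (category, test case id, outcome).
results = [
    ("functional", "TC-001", "pass"),
    ("functional", "TC-002", "fail"),
    ("performance", "TC-010", "pass"),
    ("load", "TC-020", "not run"),
]

status = defaultdict(Counter)
for category, _case_id, outcome in results:
    status[category][outcome] += 1

# One line of status per testing area, e.g. for allocating resources.
for category, counts in status.items():
    summary = ", ".join(f"{n} {o}" for o, n in counts.items())
    print(f"{category}: {summary}")
```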
A group of test cases can then be executed in sequence and the results recorded. In recording the results against a particular version of the product the objective is to find defects in the product. Tests that fail will usually result in a defect or issue record being raised in a defect tracking tool. This is the point where the test management process links with the defect management process. Providing traceability between test cases and defects is invaluable in many aspects of the development process, not least the process of using a test case to retest a fixed defect.
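The sketch below shows one hypothetical way to record outcomes for a run against a version and raise a defect record for each failure, keeping a trace back to the originating test case; a real defect tracking tool would do this through its own API.

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class Defect:
    defect_id: str
    case_id: str          # traceability back to the test case that found it
    version: str
    summary: str

defect_ids = count(1)
defects = []

def record_result(run_results, case_id, version, outcome, note=""):
    """Log an outcome against a run and raise a defect if the case failed."""
    run_results[case_id] = outcome
    if outcome == "fail":
        defects.append(Defect(f"DEF-{next(defect_ids)}", case_id, version, note))

run_results = {}
record_result(run_results, "TC-001", "2.3.0", "pass")
record_result(run_results, "TC-003", "2.3.0", "fail", "Checkout total off by one cent")

print(run_results)          # outcomes logged against the run
for d in defects:           # defects traceable to the case and version that found them
    print(d)
```

Because each defect carries the id of the case that found it, retesting a fixed defect is simply a matter of rerunning that case against the new version.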
In brief, the test management function is central to the success of a product or system release. The ability to develop reusable test cases makes consistent regression runs possible. Grouping these tests then allows runs to be executed as sets of related test cases. Recording the results against a run ultimately allows a development team to evaluate the quality of a system before release. Connecting all these aspects together with a good test management process helps ensure high quality system releases.