Continuing with FAQs on QA Testing, Part III.
What can be done if requirements are changing continuously?
Work with management early on to understand how requirements might
change, so that alternate test plans and strategies can be worked out in
advance. It is helpful if the application’s initial design allows for some
adaptability, so that later changes do not require redoing the application from
scratch. Additionally, try to:
- Ensure the code is well commented and well documented; this makes changes easier for the developers.
- Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
- In the project’s initial schedule, allow some extra time commensurate with probable changes.
- Move new requirements to a ‘Phase 2’ version of an application and use the original requirements for the ‘Phase 1’ version.
- Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements into future versions of the application.
- Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that’s their job.
- Balance the effort put into setting up automated testing against the expected effort required to redo the tests to deal with changes.
- Design some flexibility into automated test scripts (see the sketch after this list);
- Focus initial automated testing on application aspects that are most likely to remain unchanged;
- Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs;
- Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans;
- Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails.
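One way to build that flexibility is a data-driven test, sketched below as a rough illustration: the expected behaviour lives in a single table, so when requirements change the data is updated rather than many detailed scripts. The validate_discount() function and its discount rules are hypothetical stand-ins for whatever is actually under test; the checks are written in pytest style.

```python
# Hypothetical data-driven test: expected behaviour lives in one table,
# so a requirements change means updating rows of data, not rewriting
# many detailed test scripts.

def validate_discount(order_total: float) -> float:
    """Placeholder for the function under test (discount rules may change)."""
    if order_total >= 100:
        return 0.10
    if order_total >= 50:
        return 0.05
    return 0.0

# Each row: (order_total, expected_discount). Edit this table when the
# requirements change; the test logic below stays the same.
DISCOUNT_CASES = [
    (25.00, 0.0),
    (50.00, 0.05),
    (150.00, 0.10),
]

def test_discount_rules():
    for order_total, expected in DISCOUNT_CASES:
        assert validate_discount(order_total) == expected, (
            f"order_total={order_total}: expected {expected}"
        )
```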
What if the application has functionality that wasn’t in the requirements?
It may take serious effort to determine whether an application has significant
unexpected or hidden functionality, and such functionality would indicate deeper problems in
the software development process. If the functionality isn’t necessary to the
purpose of the application, it should be removed, as it may have unknown
impacts or dependencies that were not taken into account by the designer or the
customer.
If not removed, design information will be needed to determine added
testing needs or regression testing needs. Management should be made aware of
any significant added risks as a result of the unexpected functionality. If the
functionality only affects minor areas, such as improvements in the user
interface, it may not be a significant risk.
How can software QA processes be implemented without stifling productivity?
Implement QA processes slowly over time. Use consensus to reach agreement
on processes and adjust and experiment as an organization grows and matures.
Productivity will be improved instead of stifled. Problem prevention will
lessen the need for problem detection. Panics and burnout will decrease and
there will be improved focus and less wasted effort. At the same time, attempts
should be made to keep processes simple and efficient, minimize paperwork,
promote computer-based processes and automated tracking and reporting, minimize
time required in meetings and promote training as part of the QA process.
However, no one, especially talented technical types, likes bureaucracy, and in
the short run things may slow down a bit. A typical scenario would be that more
days of planning and development will be needed, but less time will be required
for late-night bug fixing and calming of irate customers.
What is parallel/audit testing?
Parallel/audit testing is testing where the user reconciles the output of
the new system to the output of the current system to verify the new system
performs the operations correctly. Suppose, for example, the current software is a mainframe system that calculates interest rates, and the company wants to replace it with a web-based application. While testing the new web-based application, we need to verify that it calculates the same interest rate as the mainframe. This is parallel testing.
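A minimal sketch of the idea, with hypothetical legacy_interest() and web_interest() functions standing in for whatever mechanism actually retrieves each system’s result, might look like this (pytest style):

```python
# Hypothetical parallel/audit test: feed the same inputs to the legacy
# system and the new system and reconcile the outputs.

def legacy_interest(principal: float, rate: float, years: int) -> float:
    """Placeholder for the mainframe calculation (simple interest here)."""
    return principal * rate * years

def web_interest(principal: float, rate: float, years: int) -> float:
    """Placeholder for the new web-based calculation."""
    return principal * rate * years

def test_web_interest_matches_legacy():
    cases = [
        (1000.00, 0.05, 1),
        (2500.00, 0.045, 3),
        (99999.99, 0.07, 10),
    ]
    for principal, rate, years in cases:
        old = legacy_interest(principal, rate, years)
        new = web_interest(principal, rate, years)
        # Allow a small tolerance for rounding differences between systems.
        assert abs(old - new) < 0.01, (
            f"Mismatch for {principal}, {rate}, {years}: {old} vs {new}"
        )
```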
What is system testing?
System testing is black box testing, performed by the Test Team, and at
the start of the system testing the complete system is configured in a
controlled environment. The purpose of system testing is to validate an
application’s accuracy and completeness in performing the functions as
designed. System testing simulates real-life scenarios in a controlled test
environment and exercises all functions of the system that are required in real
life. System testing is deemed complete when actual
results and expected results are either in line or differences are explainable
or acceptable, based on client input.
Upon completion of integration testing, system testing is started. Before
system testing, all unit and integration test results are reviewed by Software
QA to ensure all problems have been resolved. For a higher level of testing it
is important to understand unresolved problems that originate at unit and
integration test levels.
What is end-to-end testing?
End-to-end testing is similar to system testing; it is the *macro* end of the test scale, testing a complete application in a situation that mimics real-world use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems.
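As an illustration only, the sketch below walks one complete user flow (create a record through the API, then read it back) against a running test deployment. The base URL, endpoints, payload and response fields are made up for the example, and the ‘requests’ library is assumed to be available.

```python
# Hypothetical end-to-end test: exercise a full user flow against a running
# deployment (web service plus its database), not isolated units.
import requests

BASE_URL = "http://test-env.example.com/api"  # assumed test environment

def test_create_and_fetch_order():
    # Step 1: create an order through the public API (writes to the database).
    payload = {"item": "widget", "quantity": 3}
    create = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)
    assert create.status_code == 201
    order_id = create.json()["id"]

    # Step 2: read it back through the API, confirming the whole round trip
    # (network, application, database) behaves as a real user would see it.
    fetch = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    assert fetch.status_code == 200
    assert fetch.json()["quantity"] == 3
```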
What is security/penetration testing?
Security/penetration testing is testing how well the system is protected
against unauthorized internal or external access, or willful damage. This type
of testing usually requires sophisticated testing techniques.
What is recovery/error testing?
Recovery/error testing is testing how well a system recovers from
crashes, hardware failures, or other catastrophic problems.
What is compatibility testing?
Compatibility testing is testing how well software performs in a
particular hardware, software, operating system, or network environment.
What is comparison testing?
Comparison testing is testing that compares software weaknesses
and strengths to those of competitors’ products.