Tuesday 16 July 2013

FAQs on QA Testing Part-II

Continuing the FAQs on QA Testing from Part I.
What will you do if the developer does not accept the bug?
If the developer does not accept the defect, he or she will reject it, and the rejected defect comes back to the tester. The tester then asks the developer for clarification on why the defect was rejected. Since everything is based on the requirement documents, both the tester and the developer must look at the requirement document, validate the reported behavior against it, and then either reopen the defect or close it.
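In most bug-tracking tools, this reject-and-clarify flow is an explicit state machine. Below is a minimal sketch of such a defect lifecycle in Python; the states and allowed transitions are illustrative assumptions, not any particular tool's workflow.

```python
from enum import Enum

class BugState(Enum):
    NEW = "new"
    ASSIGNED = "assigned"
    REJECTED = "rejected"
    REOPENED = "reopened"
    CLOSED = "closed"

# Allowed transitions in the reject/clarify/reopen flow described above.
TRANSITIONS = {
    BugState.NEW: {BugState.ASSIGNED},
    BugState.ASSIGNED: {BugState.REJECTED, BugState.CLOSED},
    BugState.REJECTED: {BugState.REOPENED, BugState.CLOSED},
    BugState.REOPENED: {BugState.ASSIGNED},
}

def move(current: BugState, target: BugState) -> BugState:
    """Validate a state change against the workflow; raise on an illegal move."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current.name} -> {target.name}")
    return target

# After checking the requirement document, the tester reopens the rejected defect:
state = move(BugState.REJECTED, BugState.REOPENED)
```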

What are the different tests that can be done for a client-server application versus a web-based application? Give details.
Testing is largely the same for client-server and web-based applications, with one key difference: a web-based application must also be tested in different browsers, for example Internet Explorer (in multiple versions such as IE 5.0, IE 6.0, and IE 7.0), Firefox, and Safari (for Mac), whereas a client-server application does not require browser testing.
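For the browser dimension, one common approach is to run the same automated check against every target browser. The sketch below assumes Selenium WebDriver with Chrome and Firefox drivers available; the URL and the title check are placeholders.

```python
from selenium import webdriver

# One driver factory per browser under test; extend with Edge, Safari, etc.
BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
}

def cross_browser_smoke_test(url: str) -> None:
    """Run the same minimal check in every configured browser."""
    for name, factory in BROWSERS.items():
        driver = factory()
        try:
            driver.get(url)
            assert "Example" in driver.title, f"{name}: unexpected title {driver.title!r}"
            print(f"{name}: OK")
        finally:
            driver.quit()

cross_browser_smoke_test("https://example.com")
```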

What is an inspection?
An inspection is a formal meeting, more formalized than a walkthrough, and typically consists of 3-10 people including a moderator, a reader, a recorder (to take notes), and the author of the work being reviewed. The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare by reading through the document before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but it is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost-effective than bug detection.

Give me five common problems that occur during software development.
Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway, and poor communication. Requirements are poorly written when they are unclear, incomplete, too general, or not testable, and such requirements inevitably cause problems. A schedule is unrealistic if too much work is crammed into too little time. Testing is inadequate if no one knows whether the software is any good until customers complain or the system crashes. It is extremely common for new features to be added after development is underway. Miscommunication means either that the developers don't know what is needed or that customers have unrealistic expectations; in either case, problems are guaranteed.

What is the role of documentation in QA?
Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document contains a particular piece of information. Use documentation change management if possible.

What if the software is so buggy it can’t be tested at all?
In this situation the best bet is to have test engineers report whatever bugs or problems initially show up, with the focus on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing, insufficient integration testing, poor design, or improper build or release procedures), managers should be notified and provided with documentation as evidence of the problem.

How do you know when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop include the following (a sketch of combining them into explicit exit criteria follows the list):
Deadlines, e.g. release deadlines, testing deadlines;
Test cases completed with certain percentage passed;
Test budget has been depleted;
Coverage of code, functionality, or requirements reaches a specified point;
Bug rate falls below a certain level; or
Beta or alpha testing period ends.
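These factors are most useful when they are agreed on up front and encoded as explicit exit criteria. Below is a minimal sketch; every threshold is an illustrative assumption that a real project would negotiate.

```python
from dataclasses import dataclass

@dataclass
class TestStatus:
    pass_rate: float              # fraction of executed test cases that passed
    requirement_coverage: float   # fraction of requirements with at least one test
    open_critical_bugs: int
    new_bugs_last_week: int       # proxy for the bug-arrival rate

def ready_to_stop(s: TestStatus) -> bool:
    """Apply the exit criteria; each threshold is an illustrative assumption."""
    return (
        s.pass_rate >= 0.95
        and s.requirement_coverage >= 0.90
        and s.open_critical_bugs == 0
        and s.new_bugs_last_week <= 2   # bug rate has fallen below the agreed level
    )

print(ready_to_stop(TestStatus(0.97, 0.92, 0, 1)))  # True: criteria met
```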

What if there isn’t enough time for thorough testing?
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. Use risk analysis to determine where testing should be focused; this requires judgment, common sense, and experience. The checklist should include answers to the following questions (a sketch of scoring the answers to prioritize testing follows the list):
• Which functionality is most important to the project’s intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex and thus most subject to errors?
• Which parts of the application were developed in a rush or in panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
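One simple way to act on these answers is to score each area by likelihood and impact of failure, then spend limited testing time on the highest-risk areas first. The sketch below is illustrative; the feature names and scores are hypothetical.

```python
# Risk-based prioritization: rank features by likelihood x impact.
features = {
    # feature: (likelihood of failure 1-5, impact of failure 1-5)
    "checkout": (3, 5),
    "search": (4, 3),
    "profile_settings": (2, 2),
    "reporting": (5, 4),
}

ranked = sorted(features.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name}: risk score {likelihood * impact}")
# Test 'reporting' (20) and 'checkout' (15) first; 'profile_settings' (4) last.
```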

What can be done if requirements are changing continuously?
Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application’s initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to…
• Ensure the code is well commented and well documented; this makes changes easier for the developers.
• Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
• In the project's initial schedule, allow extra time commensurate with probable changes.
• Move new requirements to a 'Phase 2' version of an application and use the original requirements for the 'Phase 1' version.
• Negotiate to allow only easily implemented new requirements into the project; move more difficult new requirements into future versions of the application.
• Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that’s their job.
• Balance the effort put into setting up automated testing against the expected effort required to redo the tests to deal with changes.
• Design some flexibility into automated test scripts (see the sketch after this list);
• Focus initial automated testing on application aspects that are most likely to remain unchanged;
• Devote appropriate effort to risk analysis of changes in order to minimize regression-testing needs;
• Design some flexibility into test cases; this is not easily done, so the best bet is to minimize the detail in the test cases or set up only higher-level, generic test plans;
• Focus less on detailed test plans and test cases and more on ad hoc testing, with an understanding of the added risk this entails.
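One common way to build the flexibility mentioned above into automated test scripts is the Page Object pattern: all locators live in one class, so a UI change means one edit instead of changes to every test. Below is a minimal sketch assuming Selenium; the URL and element IDs are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Page Object: locators are kept here, isolated from the test scripts."""
    URL = "https://example.com/login"   # placeholder URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, username: str, password: str):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()
        return self

# Test scripts call intent-level methods and never touch locators directly:
driver = webdriver.Firefox()
try:
    LoginPage(driver).open().login("tester", "secret")
finally:
    driver.quit()
```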
