
Software Quality Assurance (SQA)


WHAT IS QUALITY ASSURANCE?

In developing products and services, quality assurance is any systematic process of checking whether a product or service being developed meets its specified requirements. Many companies have a separate department devoted to quality assurance. A quality assurance system is said to increase customer confidence and a company's credibility, to improve work processes and efficiency, and to enable a company to compete better with others.

Software quality assurance (SQA)
Though controversial, software testing is a part of the software quality assurance (SQA) process.

In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and some companies have no SQA function at all.
Quality Assurance makes sure the project will be completed according to the previously agreed specifications, standards and required functionality, without defects or other problems. It monitors and tries to improve the development process from the beginning of the project to ensure this. It is oriented toward prevention.

WHAT DO YOU BASE YOUR TEST CASES ON? (RE: How do you determine what to test?)

I determine what to test in the following ways:
1) By learning as much about the product as I possibly can.
2) By asking a lot of questions and gaining insight from:
   - clients
   - customers
   - members of the business sector
   - marketing
   - engineers
   - software developers
3) By using any information which identifies our company's intent for the product, as long as it can be verified with a true or false (pass or fail) result (see the sketch after this list). I find this information in:
   - Software Requirements Specification
   - Functional Requirements Document
   - Business Requirements Document
   - Work-flow diagrams
   - Wireframes
   - Mock-ups
   - Claims made in marketing materials and brochures
   - End-User License Agreement (EULA)
   - Information found in Help text
   - Customer Support documentation and knowledge base
4) By conducting risk-based analysis to determine my test priorities. I consider:
   - things that have changed
   - core functionalities
   - common usage
   - high-impact areas
   - most wanted areas
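As a concrete illustration of point 3, here is a minimal sketch (Python with pytest) of how a statement from a requirements document can become a check with a strict pass/fail result; the create_account function and the password rule are hypothetical stand-ins, not taken from any real product:

```python
import pytest

# Hypothetical function under test; a stand-in for real product code.
def create_account(username: str, password: str) -> bool:
    """Toy rule: accept only passwords of at least 8 characters."""
    return len(password) >= 8

# Assumed requirement, for illustration only: "The system shall reject
# passwords shorter than 8 characters."  Each case yields a strict
# pass/fail verdict against that statement.
@pytest.mark.parametrize("password,expected", [
    ("abcd1234", True),   # exactly the minimum length -> accepted
    ("abc", False),       # too short -> rejected
    ("", False),          # empty -> rejected
])
def test_password_length_requirement(password, expected):
    assert create_account("alice", password) is expected
```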

The testable items are based purely on the SRS (Software Requirements Specification), and a test plan containing the scope and the items to be tested will be provided.
How do you decide you have tested enough?
We close testing when roughly 90% of the requirements are covered, the major defects have been fixed (some low-level defects may remain open), the customer is satisfied with the product, and little time is left; or when the maximum number of test cases have been executed and passed and the deadlines have been reached.

To my knowledge, the following parameters should be considered when deciding whether software has been tested enough: 1) bug rate, 2) schedule, 3) budget, 4) test case coverage, 5) risk analysis (i.e., whether all risk areas have been covered). No single criterion above serves the purpose on its own; based on company priorities, we have to take an optimal decision by considering all of these parameters together.
You have tested enough when the company's Exit Criteria have been met. This typically includes (not an exhaustive list):

1) All test cases have been run or are otherwise accounted for.
2) There are no blocked or partially completed test cases.
3) All test cases have passed, or any failures have been reported as defects.
4) There are no 'Open' defects.
5) All showstopping defects are 'Closed'.
6) The number of 'Unresolved' defects, by severity and priority, meets the thresholds agreed upon by all relevant stakeholders in the client and provider organizations.

There are various points we can consider to decide that we have tested enough: 1. When the system is highly stable and no new errors are found. 2. When a certain number of test cases have been run and no further errors are found. 3. When the probability of finding errors is low. 4. When all the test cases have been run. We should not count depleted deadlines or budget here, because the question is about when we think we have tested enough, not about when to stop testing.
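As a rough illustration only, exit criteria like these can be checked mechanically. The sketch below (Python) uses invented field names and threshold values; real thresholds come from the project's own exit criteria document:

```python
from dataclasses import dataclass

@dataclass
class TestCycleStatus:
    """Snapshot of a test cycle; the field names are illustrative only."""
    total_cases: int
    executed_cases: int
    passed_cases: int
    open_defects: int
    open_showstoppers: int

def exit_criteria_met(status: TestCycleStatus,
                      min_execution: float = 1.0,
                      min_pass_rate: float = 0.95,
                      max_open_defects: int = 0) -> bool:
    """Return True only if every agreed threshold is satisfied."""
    execution = status.executed_cases / status.total_cases
    pass_rate = status.passed_cases / status.executed_cases
    return (execution >= min_execution
            and pass_rate >= min_pass_rate
            and status.open_defects <= max_open_defects
            and status.open_showstoppers == 0)

# Example: 200 planned cases, all executed, 196 passed, 3 minor defects open.
print(exit_criteria_met(TestCycleStatus(200, 200, 196, 3, 0)))  # -> False (open defects)
```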
What if there isn't enough time for thorough testing?

http://www.softwareqatest.com/qatfaq2.html#FAQ2_12

Use risk analysis, along with discussion with project stakeholders, to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?
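One very small way to turn such a risk analysis into test priorities is a likelihood-times-impact score, sketched below in Python; the feature names and the 1-5 ratings are invented for illustration, not a prescribed method:

```python
# Order test areas by a simple risk score (likelihood x impact), so the
# limited testing time goes to the riskiest areas first.
features = [
    # (feature, likelihood of failure 1-5, impact of failure 1-5) -- example ratings
    ("payment processing", 3, 5),
    ("user login",         2, 5),
    ("report export",      4, 2),
    ("profile page theme", 3, 1),
]

ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name:20s} risk score = {likelihood * impact}")
```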

What is Black Box Testing?
Refer to the following links and page 107:
http://www.guru99.com/black-box-testing.html
http://www.softwaretestinghelp.com/black-boxtesting/

WHAT IS WHITE BOX TESTING?
http://www.softwaretestinghelp.com/white-box-testing/

What is the Bug Life Cycle?
Links:
http://www.guru99.com/defect-life-cycle.html
http://www.softwaretestinghelp.com/bug-life-cycle/
http://www.exforsys.com/tutorials/testing/bug-life-cycle-guidelines.html
What are the different types of bugs we normally see in projects, and what are their severities? On what basis do we assign priority and severity to a bug? Provide an example of a high-priority/low-severity bug and of a high-severity/low-priority bug.

http://www.exforsys.com/tutorials/testing/bug-life-cycle-guidelines.html

What are the contents of a Bug Report form?
http://www.guru99.com/software-defect.html

What is User Acceptance Testing?

http://www.exforsys.com/tutorials/testing/what-is-user-acceptance-testing.html

What is the software testing life cycle? Explain each of its phases.
http://www.learn.geekinterview.com/it/sdlc/systems-development-life-cycle.html
http://commoninterview.com/Testing_Interview_Questions/what-is-sdlc-software-development-lifecycle/
http://searchsoftwarequality.techtarget.com/definition/systems-development-life-cycle

What is Negative testing?


http://smartbear.com/support/articles/testcomplete/negative-testing-with-testcomplete/
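In short, negative testing feeds the application invalid or unexpected input and checks that it fails gracefully rather than accepting it. A minimal sketch (Python with pytest; parse_age and its validation rule are hypothetical, not from the linked tool):

```python
import pytest

def parse_age(value: str) -> int:
    """Hypothetical input parser: accepts whole numbers from 0 to 130."""
    age = int(value)            # raises ValueError for non-numeric input
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

# Negative tests: invalid input must be rejected, not silently accepted.
@pytest.mark.parametrize("bad_input", ["-1", "abc", "999", ""])
def test_invalid_age_is_rejected(bad_input):
    with pytest.raises(ValueError):
        parse_age(bad_input)
```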

What is the Waterfall Development Method and do you agree with all the steps?
http://searchsoftwarequality.techtarget.com/definition/waterfall-model

What is the V-Model Development Method and do you agree with this model?
The V-model gives equal importance to both testing and development. It says that testing need not wait until the code is delivered; instead, it can start early, with test criteria being prepared alongside the requirements.

What did you include in a test plan?


http://www.vietnamesetestingboard.org/zbxe/?document_srl=417262&mid=Testing_Question_Answer&listStyle=&cpage=
http://www.softwaretestinghelp.com/test-plan-sample-softwaretesting-and-quality-assurancetemplates/

Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams. How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.
What is the difference between a Test Plan, a Test Strategy, a Test Scenario, and a Test Case? What's their order of succession in the STLC?

http://www.coolinterview.com/interview/10736/
http://www.allinterview.com/showanswers/36034.html
Test Scenario: A test scenario is like laying out the plans for testing the product: the environmental conditions, the number of team members required, the test plans and test cases to be prepared, and which features of the product are to be tested. A test scenario depends very much on the product to be tested and is prepared before the actual testing starts.

Test Case: A test case is a document which provides the steps to be executed, as planned earlier. It also depends on the type of product to be tested; the number of test cases is not fixed for any product.
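For a flavour of the classic fields a test case carries (ID, title, preconditions, steps, expected result), here is a small Python sketch that records one as structured data; the field names and the login example are assumptions for illustration only:

```python
# A test case captured as a structured record.  The content below is an
# invented example, not from any particular project.
test_case = {
    "id": "TC-001",
    "title": "Login with valid credentials",
    "preconditions": ["User account exists", "Application is reachable"],
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the 'Sign in' button",
    ],
    "expected_result": "User is redirected to the dashboard",
    "status": "Not Run",   # set to Pass / Fail once executed
}

for number, step in enumerate(test_case["steps"], start=1):
    print(f"Step {number}: {step}")
print("Expected:", test_case["expected_result"])
```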

Test Plan:

A test plan is a systematic approach to testing a piece of software. The plan typically contains a detailed understanding of what the eventual workflow will be. A test plan gives detailed information about an upcoming testing effort, including: scope of testing, schedule, test deliverables, release criteria, and risks and contingencies. The completed document helps people outside the test group understand the 'why' and 'how' of product validation. It is the document that drives all future testing activities: all the testing activities are planned at the beginning and documented.

What do you include in a test plan?


The content of a test plan varies widely between industries, companies and projects. Nevertheless, I often include the following:
1) Introduction
   - Purpose
   - Product description
   - System architecture
2) Scope
   - Test coverage
   - Features to be tested (in scope)
   - Features not to be tested (out of scope)
   - Compatibility matrix
   - Assumptions
   - Risks
3) Approach
   - Test objectives
   - Test strategies
   - Test environment
   - System dependencies
   - Test tools
4) Process
   - Test process and guidelines
   - Priority and severity definitions
   - Pass/fail criteria
   - Test entry/exit criteria
   - Test resources
   - Deliverables required for testing
   - Roles and responsibilities
   - Approvals and sign-off

Difference Between Test Case and Scenario http://softwaretestingexpertise.blogspot.com/2009/10/difference-between-test-case-and-test.html

What is a defect?

How do you perform regression testing?


http://www.softwaretestinghelp.com/regression-testing-tools-and-methods/
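As a hedged illustration of the basic idea (re-running the existing suite after a change to catch regressions), the sketch below assumes a pytest-based suite and a team-defined marker named "regression"; neither is implied by the link above:

```python
# Re-run the tests marked as regression checks, e.g. after a bug fix or a
# new build.  The "regression" marker is an assumed team convention.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pytest", "-m", "regression", "-q"],
    capture_output=True,
    text=True,
)
print(result.stdout)
print("Regression suite", "PASSED" if result.returncode == 0 else "FAILED")
```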

Performance, Load, and Stress Testing

http://www.softwaretestinghelp.com/what-is-performance-testing-load-testing-stress-testing/
(A small load-test sketch appears after the remaining questions below.)

What is a Test Case?
http://hateinterview.com/software-testing/manual-testing/whats-a-test-case-2/6532.html

What is a traceability matrix?
http://hateinterview.com/software-testing/what-is-tracebility-matrix/517.html

What are test deliverables?
http://hateinterview.com/software-testing/manual-testing/what-is-test-deliverables/6712.html
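For a rough sense of what a load test measures, the sketch below fires a number of concurrent requests at a URL and reports response times (Python standard library only; the target URL and the choice of 20 concurrent requests are arbitrary assumptions, and a stress test would push this load well beyond expected capacity):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://example.com/"   # stand-in target; point at a system you own
CONCURRENT_REQUESTS = 20      # arbitrary load level for illustration

def timed_request(_):
    """Fetch the URL once and return the elapsed time in seconds."""
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_REQUESTS) as pool:
    timings = list(pool.map(timed_request, range(CONCURRENT_REQUESTS)))

print(f"requests: {len(timings)}")
print(f"average:  {sum(timings) / len(timings):.3f}s")
print(f"slowest:  {max(timings):.3f}s")
```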
