
1

If you don't have the patience to test the system, the system will surely test your patience.

2

Testing Without Requirements—Impossible?  QA teams sometimes have to perform their job without the slightest clue of what the application under test is supposed to do. Inadequate documentation can be blamed on changing user requirements, on-the-fly development, legacy systems whose documentation has been lost or destroyed, and extremely large or expansive systems. "A large system may not be documented well enough for the tester to form complete test scenarios."

3

The QA job depends on the accuracy of the available documents. "Asking a tester to test with no documentation is the same as asking them to guess."  Even with a lack of documents, there are certain things that can help us test efficiently. This doesn't substitute for having documents available, but it at least helps in shipping a product with a reasonable level of confidence.

4

Paths to Do… The UI team prepares screenshots of the expected user interface screens. At the same time, the development team prepares technical design documents based on the requirements documents. The test and QA teams are given the requirements documents and screenshots and asked to develop test artifacts: a test plan, test cases, and other documents. A detailed report is prepared at project completion.

5

Better Approach: Coordination in the Team (Daily Meetings)  In the daily meetings, each team member explains what's been done since the last meeting, what (if anything) has impeded their progress, and what they plan to do until the next meeting. The daily meetings are used in place of documentation.  In the meetings, you will get feature names and maybe a short description. Find out what you are supposed to test, figure out the exact requirements, and prioritize them. To do thorough testing, you will need more information than is initially supplied to you.

6

Develop test ideas and tactics by asking questions
Product
– What is this product?
– What can I control and observe?
– What should I test?
Tests
– What would constitute a diversified and practical test strategy?
– How can I improve my understanding of how well or poorly this product works?
– If there were an important problem here, how would I uncover it?
– What document to load? Which button to push? What number to enter?
– How powerful is this test?
– What have I learned from this test that helps me perform powerful new tests?
– What just happened? How do I examine that more closely?
Problems
– What quality criteria matter?
– What kinds of problems might I find in this product?
– Is what I see here a problem? If so, why?
– How important is this problem? Why should it be fixed?

7

Exploratory Testing: Exploratory testing is not a way of testing; it's a way of thinking about testing. Any testing technique can be used in an exploratory way. The exploratory approach is heuristic.

8

Many factors contribute to a tester’s choice of exploration style

9

Different styles of exploration
– "Hunches" (past bug experiences, recent changes)
– Invariance (tests that change things that should have no impact on the application)
– Interference (finding ways to interrupt or divert the program's path)
– Error handling (checking that errors are handled correctly)
– Troubleshooting (bug analysis, such as simplifying, clarifying, or strengthening a bug report, and test variance when checking that a bug was fixed)
– Group insights (brainstorming, group discussions of related components, paired testing)
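The "invariance" style above can be turned into executable checks: vary something that should not matter and assert the result is unchanged. A minimal Python sketch, where `parse_amount` is a hypothetical stand-in for the application under test:

```python
def parse_amount(text: str) -> float:
    """Hypothetical system under test: parse a currency amount string."""
    return float(text.strip().lstrip("$").replace(",", ""))

def test_invariance_surface_variations():
    # Whitespace, a currency symbol, and thousands separators should
    # have no impact on the parsed value.
    variants = ["1234.50", " 1234.50 ", "$1234.50", "$1,234.50"]
    results = {parse_amount(v) for v in variants}
    assert results == {1234.50}, results

test_invariance_surface_variations()
print("invariance holds across all variants")
```

If any variant produced a different value, the set would contain more than one element and the assertion would expose exactly which inputs disagree.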

10

Exploratory Testing: Simultaneously
– Learn about the product
– Learn about the market
– Learn about the ways the product could fail
– Learn about the weaknesses of the product
– Learn about how to test the product
– Test the product
– Report the problems
– Advocate for repairs
– Develop new tests based on what you have learned so far

11

Exploratory Testing:

12

Challenges:
– Learning (How do we get to know the program?)
– Visibility (How do we see below the surface?)
– Control (How do we set internal data values?)
– Risk / selection (Which are the best tests to run?)
– Execution (What's the most efficient way to run the tests?)
– Logistics (What environment is needed to support test execution?)
– The oracle problem (How do we tell if a test result is correct?)
– Reporting (How can we replicate a failure and report it effectively?)
– Documentation (What test documentation do we need?)
– Measurement (What metrics are appropriate?)
– Stopping (How do we decide when to stop testing?)
– Training and supervision (How do we help testers become effective, and how do we tell whether they are?)
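One common partial answer to the oracle problem is to compare the implementation under test against a trusted reference on many inputs. A minimal sketch, with a hypothetical `my_sort` as the code under test and Python's built-in `sorted()` serving as the oracle:

```python
import random

def my_sort(xs):
    """Hypothetical implementation under test (a simple insertion sort)."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

# Oracle check: on random inputs, the result must agree with sorted().
random.seed(0)
for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert my_sort(data) == sorted(data), data
print("oracle agrees on 100 random inputs")
```

A reference oracle only works when one exists; weaker oracles (checking properties such as "output is ordered" or "output is a permutation of the input") are the usual fallback.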

13

How to Proceed?
– Find out what to test
– Figure out how much to test
– Know whom to ask

14

Myth 1: Testing starts at the end (requirements come first)
– Testing can start right at the start
– Thinking about testing early improves the requirement specifications early
– You don't have to wait to get the benefits of a tester's view

15

What We (SAM) Do..

16

Myth 2: Can't test till it's there (testing the system needs to have the system)
– Testing is more than test execution, and starts before execution
– Misconception: testing = test execution

17

What We (SAM) Do..

18

Myth 3: Requirements to test is a one-way street (testing uses requirements, not vice versa)
– Thinking about testing raises questions about the requirements
– Test design can lead to improved requirements
– Example: boundary value analysis
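Boundary value analysis, named above as a test-design technique that feeds back into requirements, can be shown in a small executable sketch. The `accept_quantity` rule and its 1..100 range are hypothetical, chosen only to illustrate the at / just-inside / just-outside pattern:

```python
def accept_quantity(qty: int) -> bool:
    """Hypothetical rule under test: quantity must be between 1 and 100."""
    return 1 <= qty <= 100

# Boundary value analysis: exercise each boundary exactly, just inside,
# and just outside. Designing these cases forces the question "is 100
# itself allowed?", which sharpens the requirement.
cases = {
    0: False,    # just below the lower boundary
    1: True,     # lower boundary
    2: True,     # just above the lower boundary
    99: True,    # just below the upper boundary
    100: True,   # upper boundary
    101: False,  # just above the upper boundary
}
for qty, expected in cases.items():
    assert accept_quantity(qty) == expected, qty
print("all boundary cases pass")
```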

19

What We (SAM) Do..

20

Myth 4: Tests are for testers only (writing good tests is purely a testing concern)
– Ambiguous specifications are not testable
– Non-functional quality attributes, e.g. "user friendly", "very reliable"
– If you don't know how to test it, how can you know how to build it?

21

What We (SAM) Do..

22

Myth 5: Minor changes are minor (minor requirements changes don't matter)
– Impact on implementation (e.g. database, checking)
– Impact on testing
– What unexpected side effects?
– Size of change ≠ size of testing effort

23

What We (SAM) Do..

24

Myth 6: Can't test without requirements (testers MUST HAVE requirements)
– Not an excuse to avoid testing
– More responsibility on the tester
– Exploratory testing is designed for severe time pressure and poor or non-existent requirements

25

What We (SAM) Do..

26

Myth 8: Follow the elephant (the mainstream is more important)
– Yes, normal use is important
– But exceptions must also work correctly

27

What We (SAM) Do..

28

Heuristics: A strategy for causing the best change in a poorly understood or uncertain situation within the available resources. Although difficult to define, a heuristic has four signatures that make it easy to recognize:
– A heuristic does not guarantee a solution;
– It may contradict other heuristics;
– It reduces the search time in solving a problem; and
– Its acceptance depends on the immediate context instead of on an absolute standard.

29

Suggestions and Feedback
