Manual Testing Imp
In software development, the terms "error", "defect", "bug", and "failure" are
closely related and describe different stages in the software testing process:
Error
A mistake made by a developer in the code, such as a syntax or logical error.
Defect
A broader issue with the software's design or functionality that is found during the development phase.
Bug
A coding error that is found during the testing phase and results in unexpected behavior.
Failure
When the software doesn't perform as intended or deliver the expected results during execution.
Errors can lead to defects, and defects that go undetected can lead to failure. For example, a
developer might make a mistake in the code, which could result in a defect in the software. If
the defect goes undetected and is executed, the system might fail to do what it's supposed to
do.
Some common causes of errors include: Carelessness, Miscommunication, Inexperience, and
Complex algorithms.
Software defects, also known as bugs, can have many causes, including:
Human error: Developers may make mistakes like typing the wrong letter, confusing
variables, or using the wrong comparison operator.
Misunderstanding: Developers may not understand how the software should
behave.
Inadequate testing: Testing may not be comprehensive or rigorous enough to detect
defects.
Poor quality control: Quality standards, processes, or tools may not be followed or
updated properly.
Lack of skills or knowledge: Developers or testers may not have the necessary skills
or experience.
Ever-changing requirements: Software requirements may change due to dynamic
business environments or evolving market needs.
Miscommunication: There may be a lack of communication or miscommunication
between team members.
Software complexity: Complex architectures, large codebases, and intricate integrations are harder to reason about and to test thoroughly.
Time pressures: Tight deadlines can push teams to take shortcuts in design, coding, and testing.
To prevent software defects, you can try: Test-driven development,
Behavior-driven development, and Continuous integration and continuous
testing.
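As a brief illustration of test-driven development, here is a minimal sketch in Python using the standard unittest module. The apply_discount function and its rules are hypothetical assumptions for illustration; in TDD, the tests below would be written first, run to confirm they fail, and only then would the function be implemented to make them pass.

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (hypothetical example function)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # In TDD, these tests exist before apply_discount is implemented.
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()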
To fix defects, you can try Root Cause Corrective Action (RCCA), which
involves identifying the root cause of the problem and taking action to
eliminate it.
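As a hypothetical illustration of RCCA: a login failure found in production is traced back through repeated "why" questions - the change was not covered by a test, because the test environment lacked realistic data, because a data migration was delayed - so the corrective action targets the migration process, not just the single bug.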
The cost of a defect depends on its impact and on when it is found: the earlier a defect is found, the lower the cost of fixing it. For example, if an error is found in the requirement specifications during requirements gathering and analysis, it is relatively cheap to fix: the requirement specification can simply be corrected and re-issued.
Testing and quality assurance (QA) are related but distinct processes that
work together to ensure the quality of a product or service:
Testing
A specific type of quality assurance that verifies whether code works as
expected. Testing involves finding bugs and assessing quality based on criteria.
Quality assurance
A larger process that involves tracking and evaluating software development to
ensure it meets quality criteria. QA includes activities like requirement analysis, test
organizing, defect tracking, and report writing. QA's goal is to eliminate bugs and
ensure the product performs as expected.
Defect clustering
Defects tend to concentrate in specific areas of the software, so testing should
focus on those areas.
Pesticide paradox
Repeatedly using the same set of test cases can cause them to fail to find new
issues. To overcome this, regularly review and revise the tests.
Testing is context dependent
The methods and types of testing can depend on the context of the software or
systems.
Absence-of-errors fallacy
The belief that the absence of errors means the software is working properly is
misguided.
These principles help ensure that software is effective, accurate, usable,
and meets business and customer requirements. They apply to all types of
software testing, including manual and automated testing.
The Economics of Testing
Software testing has a definite economic impact. One economic impact is from the cost of defects. This is a very real and very tangible cost. Another economic impact is from the way we perform testing. It is possible to have very good motivations and testing goals while testing in a very inefficient way. In this section, we will examine the economic impact of defects and ways to economize testing.
Where Defects Originate
To understand the dynamics and costs of defects, we need to know some things about them.
One of the most commonly understood facts about defects is that most defects originate in the requirements definition phase of a project. The next runner-up is the design phase.
Some problems in getting accurate, clear, and testable requirements are:
• Many people do not have a solid requirements-gathering process
• Few people have been trained in or understand the dynamics of requirements
• Projects, people, and the world around us change very quickly
• The English language is ambiguous and even what we consider clear language can be
interpreted differently by different people.
Understanding test requirements
It's important to understand the requirements for the test and when testing can be started.
Using different types of testing
There are different types of testing, such as beta testing, end-to-end testing, and
white box testing.
Using automated tools
Automated tools can help track defects, measure their impact, and uncover related
issues.
Using reporting and analytics
Reporting and analytics can help teams share status, goals, and test results.
How Is Testing Conducted?
Software testing has evolved from manual processes to automated ones,
and from a single phase to an integral part of the software development life
cycle:
Past
In the early days of software testing, debugging was the main method. Testers had
to rely on their skills and knowledge to manually test code and find bugs. One
technique was equivalence class partitioning (ECP), where testers divided inputs
into data classes and tested data from each class.
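As a rough sketch of equivalence class partitioning, consider a hypothetical age field that accepts values from 18 to 60. The classes and representative values below are illustrative assumptions, not from the original text.

# Equivalence class partitioning sketch for a hypothetical age validator.
# Valid class: 18-60; invalid classes: below 18 and above 60.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Test one representative value from each equivalence class
# instead of testing every possible input.
representatives = {
    "valid (18-60)": (35, True),
    "invalid (< 18)": (12, False),
    "invalid (> 60)": (75, False),
}

for label, (value, expected) in representatives.items():
    result = is_valid_age(value)
    assert result == expected, f"{label}: got {result}, expected {expected}"
    print(f"{label}: age {value} -> {result}")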
Present
Software testing is now more sophisticated and automated. Organizations can use
tools, best practices, and emerging technologies like AI and machine learning to
improve their testing capabilities. Some software testing methodologies include:
Waterfall model
Agile model
Iterative model
DevOps approach and continuous testing
Static analysis
Unit testing
Integration testing
System testing
We have discussed the definition of risk and how to calculate risk levels. If you haven't read our article on Risk Definition, I would suggest you read that first before you jump into this one. In this article, we will talk about Project Risk and Product Risk with some practical examples, so you get a solid understanding of this topic.
Before we answer this question, we need to see what risks can occur in a project. It's crucial that you understand these risks and how they can affect testing.
Project Issues:
Delays in Delivery and Task Completion - Heard this story before? The testing team was supposed to get stories on Monday, and it's already Friday! This is the most common project risk: there is a delay in completing the development task for a story, and therefore a delay in delivering the story to the testing team.
Cost Challenges - Project funds and resources might get allocated to other high-priority projects in the organization. There could also be cost-cutting across the organization, which could lead to reduced funds and resources for the project. This impacts testing resources as well. Can you relate to the risk that six folks now have to do work that was supposed to be done by ten people? Of course, the timeline always remains the same!
Inaccurate Estimates - The estimate for a website's Home Page was 20 days of development and 7 days of testing. When actual work started, the team figured out that they would need 35 days of development and 12 days of testing. Can you relate to this? When a project begins, high-level estimation happens, and resources and funds are allocated accordingly. These estimates often turn out to be inaccurate once actual work starts, which can lead to delays, quality issues, or cost overruns for both development and testing teams.
Organizational Issues:
Resource Skillset Issues - Imagine you are on an automation project that requires 10 QA resources skilled in Selenium. You end up with 3 resources who know Selenium and 7 others who got three days of Selenium training. Sounds familiar? This is a vital issue: the skill set of the resources doesn't match the project's needs, and the training is not sufficient to bridge that gap. Quite evidently, this will lead to quality and on-time delivery issues.
Personnel Issues - These could stem from the HR and people-oriented policies of the organization. They also include any workplace discrimination that may affect people.
Resources Availability - Often, business users, subject matter experts,
or key developers/testers may not be available due to personal issues or
conflicting business priorities. It has a cascading impact on all the teams.
Political Issues:
Team Skills - What will happen if developers and testers don't talk to each other? There will be back and forth on defects, wasting everybody's time. It often happens that Dev Managers and Test Managers don't get along well. This cascades down to the teams, and such an environment hampers effective test execution.
Lack Of Appreciation - What if you do all the hard work, but only
development efforts get appreciation in team meetings?
Technical Issues:
Poor Requirements - Poorly defined requirements lead to different interpretations by the clients and the development/testing teams, which results in additional defects and quality issues.
Tech Feasibility - The requirements may not be feasible to implement. There could be technical constraints due to which some of the client's requirements may not be met as expected.
Environment Readiness - If the Test Environment is not ready in time,
then it would lead to delays in testing. Moreover, at times, the testing
team might need to test in a dev environment, which could lead to data
issues.
Data Issues - If there is a delay in data conversion and data migration, it affects the testing team's ability to test with real data. E.g., if we are moving our website from one platform to another, that move needs data migration; if this activity is delayed in the testing environment, testing is affected.
Development Process - Weaknesses in the development process could impact the quality of deliverables. This could be due to a lack of skills on the architect's part, or the Scrum Master not setting up the right processes.
Defect Management - Poor defect management by the Scrum Master or Project Managers could also lead to accumulated defects. Sometimes the team picks up lower-priority defects, which are easy to fix; it helps them show good numbers, while the high-priority defects pile up. This leads to a lot of regression issues, and defects may be reopened.
Supplier Issues:
Delivery Constraints - A third party that is required to supply services or infrastructure is not able to deliver on time, which could lead to delays and quality issues. E.g., for an eCommerce website, a third party provides images. The site is all ready and tested, but cannot go live because the images are not ready!
Contractual Issues - Contractual issues with suppliers could also affect deliverables. E.g., a supplier is contracted to fix any defect in 7 days, but the project team needs P1 defects fixed in 2 days. It is a classic example where the contract does not match project needs, leading to delays in delivering the software.
These were the broad categories of risks that come under project risk. I hope you got a good understanding of them. Next, we will discuss another kind of risk: product risk.
Let's discuss some practical cases of product risk, so you get a better understanding of it.
As you can see, product risks are essentially defects that can occur in production.
Creating a Code of Ethics for testers is crucial to ensure professionalism, integrity, and
accountability in the software testing field. Here's a sample outline for a Code of Ethics
tailored for software testers:
2. Quality Focus
3. Confidentiality
4. Professional Competence
5. Objectivity
Base testing decisions on objective evidence rather than personal feelings or opinions.
Ensure that all testing results are reported accurately, without manipulation or bias.
7. Accountability
Take responsibility for your actions and decisions in the testing process.
Be open to feedback and willing to learn from mistakes.
Advocate for the end-user experience and the importance of quality in software products.
Communicate testing results effectively to highlight risks and issues.
Engage with the testing community to share knowledge and best practices.
Contribute to the advancement of the profession through mentorship and collaboration.
Conclusion
Adhering to this Code of Ethics will not only enhance the credibility of testers but also
contribute to the overall quality and success of software products. By committing to these
principles, testers can ensure they perform their roles responsibly and effectively.
Limitations of Software Testing
Software testing is a crucial aspect of the software development process, but it has its
limitations. Here are some key limitations of software testing:
1. Incomplete Testing
Finite Resources: Time, budget, and manpower constraints can lead to incomplete testing
coverage.
Complexity: The complexity of software can result in scenarios that go untested.
2. No Guarantee of Defect-Free Software
Testing cannot guarantee the absence of defects. Even after rigorous testing, some bugs may remain.
3. Changing Requirements
4. Environmental Limitations
5. Human Error
Testers may overlook scenarios or make mistakes in test case design, execution, or
reporting.
Subjectivity in interpreting results can affect the outcome.
6. Tool Limitations
Testing tools may not cover all aspects of the application or may have limitations in
functionality.
Reliance on automation can lead to missed issues that require human intuition.
7. Performance Limitations
Testing may not adequately assess how software behaves under extreme loads or during
peak usage times.
Performance bottlenecks may only be identified after deployment.
8. Non-Functional Testing Gaps
Areas like usability, security, and compatibility often require specialized testing that may be overlooked.
Non-functional requirements can be harder to quantify and test effectively.
9. Complexity of Integration Testing
Issues may arise from the interaction between different components or systems that are
difficult to test in isolation.
Integration testing can be particularly challenging in distributed systems.
10. Over-Reliance on Testing
Teams may place too much trust in testing results, leading to complacency in other quality assurance practices.
This can result in critical issues being missed during development.
Conclusion
While software testing is essential for quality assurance, it's important to recognize its
limitations. Understanding these limitations can help teams adopt complementary practices,
such as code reviews, continuous integration, and user feedback, to enhance overall software
quality.
1. Software Developer:
A Software Developer, as the name suggests, is the person responsible for writing and maintaining the source code of a computer program. The software allows users to perform particular tasks on computing devices, and the developer also maintains and updates the program. He/she designs each piece of the software and then plans how these pieces will work together.
2. Software Tester:
A Software Tester, as the name suggests, is the person responsible for verifying the correctness, completeness, and quality of developed software. This role is very important because it ensures that the product developed and delivered is error-free, which guarantees the quality of the software. Testing also reduces maintenance costs and provides better usability and improved functionality.
Difference between Software Developer and Software Tester:
Aim - A developer's main aim is to make software that is free from errors and bugs; a tester's aim is to find bugs and errors in the software application, if present.
Skills - A developer needs proficiency at writing code and time management skills, among others; a tester needs good knowledge of the system being developed, good communication skills, and critical thinking.
Focus - Developers mainly focus on the user's requirements while developing software; testers mainly focus on the behavior of the end user while testing the software application.
Test Levels
The most common test levels are unit testing, integration testing, system testing, and acceptance testing. Unit tests focus on individual components, such as methods and functions, while integration tests check whether these components work together properly (PPT ELF). A short sketch contrasting the two follows the list below.
Component Testing
Integration Testing
System Testing
Acceptance Testing (PPT ELF)
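As a minimal sketch in Python (all function names here are hypothetical, not from the original text), a unit test exercises one function in isolation, while an integration test checks that two components work together:

# Unit vs. integration testing sketch; all names are illustrative.

def parse_amount(text: str) -> float:
    """Parse a user-entered amount string such as '19.99'."""
    return float(text.strip())

def add_tax(amount: float, rate: float = 0.1) -> float:
    """Add a flat tax rate to an amount."""
    return round(amount * (1 + rate), 2)

# Unit test: parse_amount in isolation.
def test_parse_amount_unit():
    assert parse_amount(" 19.99 ") == 19.99

# Integration test: the two components working together.
def test_parse_then_tax_integration():
    assert add_tax(parse_amount("100")) == 110.0

if __name__ == "__main__":
    test_parse_amount_unit()
    test_parse_then_tax_integration()
    print("all tests passed")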
Functional Testing
Non-functional Testing
Here’s a concise comparison highlighting the differences between functional testing
and non-functional testing:
Aspect: Functional Testing / Non-Functional Testing
Focus: What the system does (features and functions) / How the system performs under various conditions (quality attributes).
Outcome: Validates that the software meets functional criteria / Validates that the software meets quality and performance standards.
User Perspective: Mimics user actions to validate features / Evaluates user experience and satisfaction with the software's performance.
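To make the contrast concrete, here is a hedged Python sketch: one functional check (does the feature return the right result?) and one simple non-functional check (does it stay within a time budget?). The greet function, the loop count, and the one-second threshold are illustrative assumptions.

import time

def greet(name: str) -> str:
    """Hypothetical feature under test."""
    return f"Hello, {name}!"

# Functional test: verifies WHAT the system does.
def test_greet_functional():
    assert greet("Ada") == "Hello, Ada!"

# Non-functional test: verifies HOW it performs (a crude latency budget).
def test_greet_performance():
    start = time.perf_counter()
    for _ in range(10_000):
        greet("Ada")
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0, f"too slow: {elapsed:.3f}s"  # illustrative threshold

if __name__ == "__main__":
    test_greet_functional()
    test_greet_performance()
    print("functional and non-functional checks passed")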
Conclusion
While both functional and non-functional testing are crucial for ensuring software
quality, they focus on different aspects of the application. Functional testing ensures
that the software performs its intended tasks, while non-functional testing evaluates
how well the software performs those tasks under various conditions. A
comprehensive testing strategy includes both types to ensure a robust and user-
friendly product.
White-box Testing
Smoke & Sanity Testing (ELF PPT)
Key Objectives
1. Verify Stability: Ensure that existing features continue to work correctly after changes.
2. Identify Side Effects: Detect unintended consequences of code modifications that may affect
other parts of the application.
3. Ensure Quality: Maintain overall software quality by validating that new changes align with
quality standards.
Types of Change-Related Testing
1. Regression Testing:
o Definition: Focuses specifically on verifying that previously developed and tested software still performs correctly after a change.
o Approach: Involves re-running a selection of test cases that cover affected areas of
the application.
2. Impact Testing:
o Definition: Evaluates the impact of specific changes on related functionalities.
o Approach: Involves analyzing the code changes and assessing which parts of the
application could be affected.
3. Smoke Testing:
o Definition: A preliminary test to check whether the basic functions of the software are working after changes.
o Approach: Involves running a subset of test cases to verify critical functionalities, as sketched below.
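As a hedged illustration of how smoke and regression subsets are often selected, here is a small pytest sketch. The marker names (smoke, regression) and the placeholder assertions are assumptions for illustration; in a real project these markers would be registered in pytest.ini.

import pytest

@pytest.mark.smoke
def test_app_starts():
    # Critical-path check: fails fast if a change breaks startup logic.
    assert 1 + 1 == 2  # placeholder for a real startup assertion

@pytest.mark.regression
def test_previously_fixed_bug_stays_fixed():
    # Re-run after every change to catch reintroduced defects.
    assert "Hello".lower() == "hello"

# Run only the quick smoke subset after a new build:
#   pytest -m smoke
# Run the fuller regression suite before a release:
#   pytest -m regression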
Conclusion