Manual Testing Imp

Uploaded by Sindhura S

Error-Failure-Defect:

In software development, the terms "error", "defect", "bug", and "failure" are
closely related and describe different stages in the software testing process:
 Error
A mistake made by a developer in the code, such as a syntax or logical error
 Defect
A broader issue with the software's design or functionality that's found during the
development phase
 Bug
A coding error that's found during the testing phase and results in unexpected behavior
 Failure
When the software doesn't perform as intended or deliver the expected results during
execution
Errors can lead to defects, and defects that go undetected can lead to failure. For example, a
developer might make a mistake in the code, which could result in a defect in the software. If
the defect goes undetected and is executed, the system might fail to do what it's supposed to
do.
Some common causes of errors include: Carelessness, Miscommunication, Inexperience, and
Complex algorithms.
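The error → defect → failure chain can be sketched in code. This is a hypothetical illustration (the function, values, and names are invented, not from any real system):

```python
# Hypothetical illustration of the error -> defect -> failure chain.
def apply_discount(price, percent):
    """Intended behavior: return price reduced by `percent` percent.
    The developer's mistake (the error) is multiplying by the raw percent
    instead of percent / 100, which leaves a defect in the code."""
    return price - price * percent  # defect: should be price * percent / 100

# The defect stays hidden until the code is executed with real data:
result = apply_discount(100.0, 10)  # intended result: 90.0
failure = result != 90.0            # the observable failure at runtime
```

Here the error is the developer's mistake, the defect is the wrong expression sitting in the code, and the failure is the incorrect result observed only when the code actually runs.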

Causes of Software Defects

Software defects, also known as bugs, can have many causes, including:
 Human error: Developers may make mistakes like typing the wrong letter, confusing
variables, or using the wrong comparison operator.
 Misunderstanding: Developers may not understand how the software should
behave.
 Inadequate testing: Testing may not be comprehensive or rigorous enough to detect
defects.
 Poor quality control: Quality standards, processes, or tools may not be followed or
updated properly.
 Lack of skills or knowledge: Developers or testers may not have the necessary skills
or experience.
 Ever-changing requirements: Software requirements may change due to dynamic
business environments or evolving market needs.
 Miscommunication: There may be a lack of communication or miscommunication
between team members.
 Software complexity: The software may be complex.
 Time pressures: Time pressures may contribute to defects.
To prevent software defects, you can try: Test-driven development,
Behavior-driven development, and Continuous integration and continuous
testing.
To fix defects, you can try Root Cause Corrective Action (RCCA), which
involves identifying the root cause of the problem and taking action to
eliminate it.
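As a sketch of the prevention practices above (test-driven development in particular), the test below is written first and the function is implemented to satisfy it. The function and test names are illustrative, not from any specific project:

```python
import unittest

def add_to_cart(cart, item):
    """Implemented to satisfy the test written first (TDD style):
    return a new cart list with `item` appended."""
    return cart + [item]

class TestCart(unittest.TestCase):
    def test_add_to_cart_appends_item(self):
        self.assertEqual(add_to_cart([], "book"), ["book"])
        self.assertEqual(add_to_cart(["pen"], "book"), ["pen", "book"])

# Run the test suite programmatically:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCart)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a TDD workflow this test would initially fail, and the implementation is written only to make it pass, catching defects at the moment they are introduced.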

Cost of Software Defects

The cost of defects can be measured by their impact and by when we find
them. The earlier a defect is found, the lower the cost of fixing it. For
example, if an error is found in the requirement specification during
requirements gathering and analysis, it is relatively cheap to fix: the
specification can be corrected and then re-issued.

Similarly, if a requirement specification error is made and the consequent
defect is found in the design phase, then the design can be corrected and
reissued with relatively little expense.
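The escalating cost can be sketched with phase-based multipliers. The specific figures below are assumptions based on commonly cited industry rules of thumb, not measured data:

```python
# Assumed relative cost multipliers for fixing a defect, by the phase
# in which it is found (illustrative rule of thumb, not measured data).
RELATIVE_FIX_COST = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "production": 100,
}

def fix_cost(base_cost, phase_found):
    """Estimated fix cost, given the phase in which the defect is found."""
    return base_cost * RELATIVE_FIX_COST[phase_found]

early = fix_cost(100, "requirements")  # cheap to fix during requirements
late = fix_cost(100, "production")     # far more expensive after release
```

Under these assumed multipliers, a defect costing 100 units to fix in requirements costs 10,000 units if it escapes to production.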

 What does Software Testing reveal?


Software testing is a process that evaluates a software product to ensure it
meets its expected requirements and is free of defects. It can reveal a
number of things, including:
 Bugs: Software testing can help identify and fix bugs and errors in the software.
 Performance: Software testing can help improve the performance of the
software. For example, performance testing can help determine how a website will
react to a large number of users.
 Usability: Software testing can help evaluate how easy a software function is to
use.
 Security: Software testing can help identify vulnerabilities in the software, such as
authentication issues.
 Stability: Software testing can help ensure that modules work together cohesively to
create a stable software system.
 Efficiency: Software testing can help improve the efficiency of the software.
 Accuracy: Software testing can help measure the accuracy of the software.

Importance of Software Testing

Software testing is a crucial activity in the software development life
cycle that aims to evaluate and improve the quality of software products.
Thorough testing is essential to ensure software systems function
correctly, are secure, meet stakeholders’ needs, and ultimately provide
value to end users. The importance of software testing stems from multiple
key factors:

 Risk Mitigation – Testing helps identify defects and failures early
in the development process when they are less expensive to fix. This
reduces project risks related to quality, security, performance, etc.
 Confidence – Executing a well–planned software test strategy
provides confidence that the software works as intended before
its release.
 Compliance – Testing can ensure that software adheres to
standards, regulations, and compliance requirements. This is
especially critical for safety–critical systems.
 User Satisfaction – Rigorous testing from a user perspective can
verify usability, functionality, and compatibility. This increases
customer/user satisfaction and reduces negative impacts to an
organization’s reputation or finances from poor quality products.
 Optimization – Testing provides vital feedback that can be used to
continuously improve software quality, user experience, security,
performance and other product attributes.
 Cost Savings – Investing in testing activities reduces downstream
costs related to defects found post–release. It is much cheaper to
find and fix bugs earlier in the development cycle

 Testing and quality assurance

Testing and quality assurance (QA) are related but distinct processes that
work together to ensure the quality of a product or service:
 Testing
A specific type of quality assurance that verifies whether code works as
expected. Testing involves finding bugs and assessing quality based on criteria.
 Quality assurance
A larger process that involves tracking and evaluating software development to
ensure it meets quality criteria. QA includes activities like requirement analysis, test
organizing, defect tracking, and report writing. QA's goal is to eliminate bugs and
ensure the product performs as expected.

Quality Perception Testing


Quality perception testing, also known as perceptual evaluation or perceptual
assessment, is the process of evaluating the quality of images or videos by human
operators:
 Perceptual evaluation
A rigorous methodology is used to ensure unbiased results that are comparable to those
obtained through objective testing methods.
 Perceptual assessment
A variety of tests and tools are used to evaluate an individual's visual perception and
cognition.
 Perceptual quality
The subjective assessment of the quality of images or videos by viewers. Factors that
influence visual perception are taken into account.

Here are some examples of quality perception testing:


 Image quality testing: Perceptual evaluation is used to evaluate image quality attributes.
 Perceptual assessment: The Motor Free Visual Perception Test and Picture Completion are
examples of tools used to evaluate visual perception and cognition.
 Perceptual evaluation of video quality: An algorithm and other components are used to
evaluate video quality.
 Taste perception assessment: Formal testing of taste perception can be used to assess the
ability to perceive different tastes.

The seven principles of software testing are:


 Testing shows the presence of defects
Testing identifies errors in software, indicating that it isn't working as expected.
 Exhaustive testing is impossible
Testing every possible scenario is impossible, so it's important to prioritize and
focus on the most critical functionalities.
 Early testing
 Initiating testing at the beginning of the development life cycle allows for the early identification and
resolution of defects, preventing issues from escalating.

 Defect clustering
Defects tend to concentrate in specific areas of the software, so testing should
focus on those areas.
 Pesticide paradox
Repeatedly using the same set of test cases can cause them to fail to find new
issues. To overcome this, regularly review and revise the tests.
 Testing is context dependent
The methods and types of testing can depend on the context of the software or
systems.
 Absence-of-errors fallacy
The belief that the absence of errors means the software is working properly is
misguided.
These principles help ensure that software is effective, accurate, usable,
and meets business and customer requirements. They apply to all types of
software testing, including manual and automated testing.
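A quick back-of-the-envelope calculation shows why exhaustive testing is impossible. For example, a single 10-character input field restricted to lowercase letters already has 26^10 possible values:

```python
# Number of possible inputs for one 10-character, lowercase-only field:
combinations = 26 ** 10

# Even at a (generous) one million test executions per second, testing
# every input of just this one field would take years:
seconds = combinations / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
```

And this is one field in isolation; combining multiple fields multiplies the counts, which is why risk analysis and prioritization are used instead of exhaustive testing.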

The Economics of Testing Software testing has a definite economic impact. One economic
impact is from the cost of defects. This is a very real and very tangible cost. Another
economic impact is from the way we perform testing. It is possible to have very good
motivations and testing goals while testing in a very inefficient way. In this section, we will
examine the economic impact of defects and ways to economize testing
Where Defects Originate To understand the dynamics and costs of defects, we need to
know some things about them.
One of the most commonly understood facts about defects is that most defects originate in
the requirements definition phase of a project. The next runner-up is the design phase.
Some problems in getting accurate, clear, and testable requirements are:
• Many people do not have a solid requirements-gathering process
• Few people have been trained in or understand the dynamics of requirements
• Projects, people, and the world around us change very quickly
• The English language is ambiguous and even what we consider clear language can be
interpreted differently by different people.

How is Testing Conducted?

Software testing is a complex process that involves a series of steps to
ensure that a program or application is accurate and precise:
 Test planning: Create a document that outlines the test's objectives, approach, and
scope. This document also defines the risks and resources needed for testing, and
schedules tasks for evaluation and design.
 Test case design and development: Design and develop test cases.
 Test environment setup: Set up the test environment.
 Test execution: Execute the tests.
 Test closure: Close the test.
 Understanding requirements

It's important to understand the requirements for the test and when it can be
started.
 Using different types of testing
There are different types of testing, such as beta testing, end-to-end testing, and
white box testing.
 Using automated tools
Automated tools can help track defects, measure their impact, and uncover related
issues.
 Using reporting and analytics
Reporting and analytics can help teams share status, goals, and test results.
How Testing Has Evolved
Software testing has evolved from manual processes to automated ones,
and from a single phase to an integral part of the software development life
cycle:
 Past
In the early days of software testing, debugging was the main method. Testers had
to rely on their skills and knowledge to manually test code and find bugs. One
technique was equivalence class partitioning (ECP), where testers divided inputs
into data classes and tested data from each class.
 Present
Software testing is now more sophisticated and automated. Organizations can use
tools, best practices, and emerging technologies like AI and machine learning to
improve their testing capabilities. Some software testing methodologies include:
 Waterfall model
 Agile model
 Iterative model
 DevOps approach and continuous testing
 Static analysis
 Unit testing
 Integration testing
 System testing
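The equivalence class partitioning (ECP) technique mentioned under "Past" can be sketched in a few lines. The age field and its 18-60 valid range below are assumptions for illustration:

```python
def is_valid_age(age):
    """System under test (assumed spec): accept ages 18-60 inclusive."""
    return 18 <= age <= 60

# Three equivalence classes; one representative value is tested per class,
# on the assumption that all values within a class behave the same:
partitions = {
    "below_range": (10, False),   # any value < 18 should be rejected
    "in_range":    (35, True),    # any value 18-60 should be accepted
    "above_range": (70, False),   # any value > 60 should be rejected
}

results = {name: is_valid_age(value) == expected
           for name, (value, expected) in partitions.items()}
```

Instead of testing every possible age, the tester covers each class once, drastically reducing the number of test cases while preserving coverage of distinct behaviors.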

 Scope of Software Testing

 What is the software scope?


Software scope is a well-defined boundary, encompassing all the activities to develop and
deliver the software product.
 The software scope clearly defines all functionalities and artefacts to be delivered as a part
of the software.
The scope identifies what the product will do and what it will not do, what the end product
will contain and what it will not contain.

Factors Influencing the Scope of Testing

You can use various factors, such as the system complexity, risk level,
quality standards, testing budget, testing schedule, testing environment,
and testing team, to determine the most suitable and effective testing
strategy and approach for your system.

Risk-based testing (RBT)


Risk-based testing (RBT) is a software testing method that prioritizes tests
based on the risk of failure and the impact of that failure on the user or
system. RBT is a strategic and dynamic approach that helps identify critical
areas early in the testing cycle.

Here are some key aspects of RBT:


 Risk identification: Identify potential risks that could negatively impact the product's
usability, functionality, or performance.
 Risk analysis: Analyze the identified risks to determine their likelihood and impact.
 Prioritization: Plot risks on a matrix to visually assess their relative importance and
prioritize testing efforts.
 Efficient use of resources: Focus on the riskiest areas to efficiently use limited
resources and time.
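The aspects above can be sketched as a simple likelihood × impact scoring, a common RBT convention. The 1-5 rating scales and the feature names below are assumptions for illustration:

```python
# Hypothetical product areas with assumed likelihood/impact ratings (1-5):
risks = [
    {"area": "payment",  "likelihood": 4, "impact": 5},
    {"area": "search",   "likelihood": 3, "impact": 2},
    {"area": "login",    "likelihood": 2, "impact": 5},
    {"area": "wishlist", "likelihood": 2, "impact": 1},
]

# Risk score = likelihood x impact; highest-scoring areas are tested first:
for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

test_order = [r["area"]
              for r in sorted(risks, key=lambda r: r["score"], reverse=True)]
```

With these assumed ratings, payment (score 20) is tested first and wishlist (score 2) last, so limited testing time goes where failure would hurt most.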

We have discussed the definition of risk and how to calculate the risk
levels. If you haven't read our article on Risk Definition, then I would
suggest you read that first before you jump on to this one. In this article,
we will talk about Project Risk and Product Risk with some practical
examples, so you get a solid understanding of this topic.

 What is Project Risk?


 What is Product Risk?
 Who should identify Product/Project risk?

What is Project Risk?


Project risks are uncertain situations that can impact the project's ability
to achieve its objectives. What are these objectives? Every software
project has an objective. It could be building a new eCommerce website
with a defined set of acceptance criteria, covering the functional and
non-functional characteristics of the software. Any event that may put
these objectives at risk classifies as a Project Risk.

There is often confusion about whether a Test Manager should involve
himself in Project Risks, or limit himself to testing risks.

Testing is a part of the project, like development or product management.
Any risk that will impact development could have an impact on testing as
well. As such, the QA Manager must be aware of all the project risks that
can have an impact on testing. Who identifies these risks?

Before we answer this question, we need to see what all risks can occur in
a project. It's crucial that you understand these risks and how they can
affect testing.

Project Issues:
 Delays in Delivery and Task completion - Heard this story before?
The testing team was supposed to get stories on Monday, and it's already
Friday! It's the most common project risk: there is a delay in
completing the development task for a story, and therefore a delay in
delivering the story to the testing team.
 Cost Challenges - Project funds and resources might get allocated to
other high-priority projects in the organization. There could also be
cost-cutting across the organization, which could lead to reduced funds
and resources for the project. It will impact testing resources as well.
Can you relate to the risk of six folks now doing work that was supposed
to be done by ten people? Of course - the timeline always remains the same!
 Inaccurate Estimates - The estimate for Home Page development for a
website was 20 days of development and 7 days of testing. When actual
work started, the team figured out that they would need 35 days of
development and 12 days of testing. Can you relate to this? When a
project begins, high-level estimation happens, according to which
resources and funds are allocated. These estimates often turn out to be
inaccurate when actual work starts, which can lead to delays, quality
issues, or cost overruns for both development and testing teams.

Organizational Issues:
 Resource Skillset Issues - Imagine you are on an automation project
which requires 10 QA resources skilled in Selenium. You end up with 3
resources who know Selenium and 7 others who only got three days of
Selenium training. Sounds familiar? It is a vital issue where the skill
set of resources doesn't match project needs, and the training is not
sufficient to bridge that gap. Quite evidently, this will lead to quality
and on-time delivery issues.
 Personnel issues - These could be HR and people-oriented policies of the
organization. They also include any workplace discrimination that may
affect people.
 Resources Availability - Often, business users, subject matter experts,
or key developers/testers may not be available due to personal issues or
conflicting business priorities. It has a cascading impact on all the teams.

Political Issues:
 Team Skills - What will happen if developers and testers don't talk to
each other? There will be back and forth on defects, wasting everybody's
time. It often happens that Dev Managers and Test Managers don't get
along well. This cascades down to the team, and such an environment
hampers effective test execution.
 Lack Of Appreciation - What if you do all the hard work, but only
development efforts get appreciation in team meetings?

Technical Issues:
 Poor Requirements - Poor definition of the requirements leads to
different interpretations by the clients and development/testing teams. It
leads to additional defects and quality issues.
 Tech Feasibility - The requirements may not be feasible for
implementation. There could be technical constraints due to which some
of the client requirements may not be met as expected.
 Environment Readiness - If the Test Environment is not ready in time,
then it would lead to delays in testing. Moreover, at times, the testing
team might need to test in a dev environment, which could lead to data
issues.
 Data Issues - If there is a delay in data conversion and data migration,
it would affect the testing team's ability to test with real data. E.g.,
if we are moving our website from one platform to another, it would need
data migration. If that activity is delayed in the testing environment,
testing is affected.
 Development Process - Weakness in the development process could impact
the quality of deliverables. It could be due to a lack of skills in the
architect. It could also be due to the Scrum Master not setting up the
right processes.
 Defect Management - Poor defect management by the Scrum Master or
Project Managers could also lead to accumulated defects. Sometimes, the
team picks up lower-priority defects, which are easy to fix; it helps
them show good numbers. However, the high-priority defects pile up,
leading to a lot of regression issues, and defect reopens might also occur.

Supplier Issues:
 Delivery Constraints - A third party that is required to supply services or
infrastructure is not able to deliver in time. It could lead to delays and
quality issues. E.g. For an eCommerce website, a third party provides
images. The site is all ready and tested, but cannot go live as images are
not ready!
 Contractual Issues - Contractual issues with suppliers could also affect
the deliverable. E.g., a supplier is contracted to fix any defect in 7
days, but the project team needs P1 defects fixed in 2 days. It is a
classic example where the contract is not aligned with project needs,
leading to delays in delivering the software.

These were the broad set of risks that come under project risk. I hope you
got a good understanding of these. Subsequently, we will discuss yet
another risk called product risk.

What is Product Risk?


Product risks result from problems with the delivered product. Product
risks are associated with specific quality characteristics of the product,
which is why they are also known as Quality Risks. These characteristics
are:

 Functionality as per client requirements


 Reliability of software
 Performance Efficiency
 Usability
 Security of software
 Compatibility
 Ease of Maintenance
 Portability

Examples of Product Risks

Let's discuss some practical cases of product risk, so you get a better
understanding of it.

 Software doesn't perform some functions as per specification. E.g., when
you place an order on an eCommerce website, the order confirmation email
triggers, but the SMS functionality does not work.
 Software does not function as per user expectations. E.g., a user intends
to buy a product and adds it to his Cart. During checkout, the product
goes out of stock and is removed from the Cart. However, the user is not
shown any message to tell him what went wrong.
 Computation Errors - These are common issues where a developer has not
written the correct computational logic in code. E.g., when a discount
applies to a product, the discount gets applied to shipping costs as
well.
 A loop structure is coded wrong - E.g., a loop that was to run nine times
runs ten times because the developer used the condition <=10 instead of
<10.
 Response times are high for critical functions. E.g., you are placing an
order on a website. The order is successful, and your money is deducted.
However, the order confirmation screen takes a minute to load. This
response time is too high for a customer.
 User Experience of the product doesn't meet customer expectations. E.g.,
when a user is searching for a product on a website, he wants to filter
the results, e.g., Size = Medium, Brand = Nike, Color = Blue. However, he
cannot select all these filters in one go. He selects Size, and the page
refreshes; he then selects Brand, and the page refreshes again with
updated results. It works as per the functional requirements, but it
leads to a poor user experience.

As you can see, product risks are nothing but defects that can occur in
production.
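The wrongly coded loop from the examples above (<=10 instead of <10) is easy to reproduce; this sketch assumes counting starts at 1:

```python
def buggy_iterations():
    """Defect: <= 10 makes a loop intended to run nine times run ten."""
    count = 0
    i = 1
    while i <= 10:  # should be i < 10 for nine iterations
        count += 1
        i += 1
    return count

def fixed_iterations():
    """Corrected condition: the loop runs exactly nine times."""
    count = 0
    i = 1
    while i < 10:
        count += 1
        i += 1
    return count
```

The buggy version runs ten times and the fixed version nine; off-by-one defects like this are a classic target for boundary-value test cases.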

Independent testing is important because it provides an unbiased
perspective on a product, which can lead to better quality and fewer
issues:
 Objectivity
Independent testers are not part of the development team, so they can provide an impartial
evaluation of the software.
 Specialized skills
Independent testers often have specialized skills and experience in testing methodologies,
tools, and best practices.
 Quality of testing
Independent testing can lead to more thorough testing and find more defects than testing
performed by the project team.
 Time to market
Independent testers can bring tried and tested testing processes to the project.
 Total cost of ownership
Independent testing can reduce the total cost of ownership of the product by eliminating the
need to set up hardware and software.
 Less management effort
Independent testing eliminates the need to recruit and prepare testers.
 Advantages of Independent Software Testing Service

The fundamental test process includes the following activities:


 Test planning and control
The first and most important stage, where the objectives, goals, scopes, and risks
of the testing are determined.
 Test analysis and design
Test cases are created using the information and output from the planning stage.
 Test implementation and execution
The actual work is done, where test cases with test data are executed. This can be
done manually or with an automated test tool.

 Evaluating exit criteria and reporting


This phase ensures that testing activities are well-managed and decisions
regarding test completion are based on predefined criteria and project
considerations.
 Exploratory testing
This is a significant part of every software testing process, where the software is
judged based on the user's outlook

 Test Planning and Control.


 Test Analysis and Design.
 Test Implementation and Execution.
 Evaluating Exit Criteria and Reporting.
 Test Closure.

Some attributes of a good tester include:


 Attention to detail
Testers need to be able to identify small errors, such as how a character renders in
a scene.
 Communication
Testers need to be able to communicate clearly and concisely with developers and
business leaders. They also need to be able to read body language and tone of
voice.
 Curiosity
Testers need to be curious about the software and be able to think outside the box
to find problems.
 Problem-solving
Testers need to be able to use problem-solving skills to determine what is causing
an error and how to fix it.
 Time management
Testers need to be able to analyze and prioritize tasks, especially when working
under tight deadlines.
 Adaptability
Testers need to be able to adapt to different subjects and types of testing.
Other attributes of a good tester include:
Creativity, Analytical skills, Patience, Persistence, Passion for quality, and
Keen learning.
Psychology of Testing

The psychology of testing in software testing is a key factor in the
success of a project. It can help improve the process, create a better
work environment, and enhance test performance.

Here are some aspects of the psychology of testing in software:


 Mindset: The mindset of the testers and developers, and the quality of
communication between them, are important factors.
 Optimism bias: Test engineers may be biased towards expecting the code to work,
rather than looking for issues.
 Communication: Testers need to be able to communicate the bad news in a
constructive manner.
 Patience: Testers need to be patient and prepared to retest software for new errors.
 Flexibility: Testers need to be flexible and open to new strategies.
 Organization: Testers need to be excellent organizers, with checklists, facts, and
figures to support their findings.
 Well-being: Understanding the psychology of testing can help organizations create
an environment that supports testers' well-being.
Other aspects of the psychology of testing include: test anxiety, motivation,
self-efficacy, cognitive processes, and stereotype threat.

Creating a Code of Ethics for testers is crucial to ensure professionalism, integrity, and
accountability in the software testing field. Here's a sample outline for a Code of Ethics
tailored for software testers:

Code of Ethics for Software Testers


1. Integrity

 Act with honesty and integrity in all testing activities.


 Avoid conflicts of interest and disclose any potential biases.

2. Quality Focus

 Commit to delivering high-quality results.


 Continuously seek to improve testing processes and methodologies.

3. Confidentiality

 Protect sensitive information and data encountered during testing.


 Ensure that findings are communicated responsibly and shared only with authorized
stakeholders.

4. Professional Competence

 Maintain and improve professional skills through continuous learning.


 Stay updated on industry best practices, tools, and technologies.

5. Objectivity

 Base testing decisions on objective evidence rather than personal feelings or opinions.
 Ensure that all testing results are reported accurately, without manipulation or bias.

6. Respect and Collaboration

 Treat all colleagues, stakeholders, and team members with respect.


 Promote teamwork and collaboration across disciplines to enhance testing effectiveness.

7. Accountability

 Take responsibility for your actions and decisions in the testing process.
 Be open to feedback and willing to learn from mistakes.

8. Advocacy for Quality

 Advocate for the end-user experience and the importance of quality in software products.
 Communicate testing results effectively to highlight risks and issues.

9. Legal and Ethical Compliance

 Adhere to all applicable laws and regulations regarding software testing.


 Respect intellectual property rights and avoid unauthorized use of proprietary materials.

10. Community Engagement

 Engage with the testing community to share knowledge and best practices.
 Contribute to the advancement of the profession through mentorship and collaboration.

Conclusion

Adhering to this Code of Ethics will not only enhance the credibility of testers but also
contribute to the overall quality and success of software products. By committing to these
principles, testers can ensure they perform their roles responsibly and effectively.
Limitations of Software Testing

Software testing is a crucial aspect of the software development process, but it has its
limitations. Here are some key limitations of software testing:

1. Incomplete Testing

 Finite Resources: Time, budget, and manpower constraints can lead to incomplete testing
coverage.
 Complexity: The complexity of software can result in scenarios that go untested.

2. Assumption of Defect-Free Code

 Testing cannot guarantee the absence of defects. Even after rigorous testing, some bugs may
remain.

3. Changing Requirements

 Frequent changes in requirements can render previous tests obsolete or
inadequate.
 This can lead to increased maintenance efforts and the potential for
missed defects.

4. Environmental Limitations

 Differences between testing and production environments can lead to
discrepancies in behavior.
 Some issues only manifest under specific conditions that may not be
replicated in testing.

5. Human Error

 Testers may overlook scenarios or make mistakes in test case design, execution, or
reporting.
 Subjectivity in interpreting results can affect the outcome.

6. Tool Limitations

 Testing tools may not cover all aspects of the application or may have limitations in
functionality.
 Reliance on automation can lead to missed issues that require human intuition.

7. Performance Limitations

 Testing may not adequately assess how software behaves under extreme loads or during
peak usage times.
 Performance bottlenecks may only be identified after deployment.

8. Non-Functional Testing Gaps

 Areas like usability, security, and compatibility often require specialized testing that may be
overlooked.
 Non-functional requirements can be harder to quantify and test effectively.
9. Complexity of Integration Testing

 Issues may arise from the interaction between different components or systems that are
difficult to test in isolation.
 Integration testing can be particularly challenging in distributed systems.

10. Overconfidence in Testing

 Teams may place too much trust in testing results, leading to complacency in other quality
assurance practices.
 This can result in critical issues being missed during development.

Conclusion

While software testing is essential for quality assurance, it's important to recognize its
limitations. Understanding these limitations can help teams adopt complementary practices,
such as code reviews, continuous integration, and user feedback, to enhance overall software
quality.

Software Development Lifecycle Models (SDLC) from ELF PPT

Software Development and Software Testing

Testing Throughout the Software Development Lifecycle

Software Development and Software Testing


Difference between Software
Developer and Software Tester


1. Software Developer:
A Software Developer, as the name suggests, is the person responsible for
writing and maintaining the source code of a computer program, which
allows users to perform particular tasks on computing devices. Developers
also maintain and update the software. They design each piece of the
software and then plan how those pieces will work together.
2. Software Tester:
A Software Tester, as the name suggests, is the person responsible for
verifying the correctness, completeness, and quality of the developed
software. This role is very important because it ensures that the
delivered product is error-free, which guarantees the quality of the
software. It also reduces maintenance costs and provides better usability
and improved functionality.
Difference between Software Developer and Software Tester :

Software Developer | Software Tester
Responsible for creating software. | Responsible for evaluating software.
Develops software through successive phases in an orderly way. | Evaluates the functionality of the software application.
Generally writes code to develop software. | Generally tests whether or not the code runs as expected.
Responsible for helping the business become more efficient and for producing a system that can be sold on the open market. | Responsible for the quality of software development and deployment.
Main aim is to make software that is free from errors and bugs. | Aim is to find bugs and errors in the software application, if present.
Understands problems as early as possible, improves the quality of the business, reduces costs, increases flexibility, etc. | Reports problems as early as possible, which helps save money, provides security, and guarantees software quality.
Not only develops the best software applications but also provides suggestions to improve them. | Not only finds bugs but also finds their root cause so they can be resolved permanently.
Should have programming skills, proficiency at writing code, time management skills, etc. | Should have deep knowledge of the system being developed, good communication skills, critical thinking, etc.
Generally develops new software products to meet users' requirements. | Generally works with new software so that errors can be reduced or repaired, if present.
Mainly focuses on users' requirements while developing software. | Mainly focuses on the behavior of the end user while testing the software application.

Test Levels
The most common test levels are unit testing, integration testing, system testing, and acceptance testing. Unit tests focus on individual components, such as methods and functions, while integration tests check whether these components work together properly. (PPT ELF)
 Component Testing
 Integration Testing
 System Testing
 Acceptance Testing [PPT ELF]
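The distinction between the first two levels can be sketched with Python's standard `unittest` module. The components below (`apply_discount`, `Cart`) are hypothetical examples, not from the source: the unit test exercises one function in isolation, while the integration test verifies that the two components work together.

```python
import unittest

def apply_discount(price, percent):
    """Component 1: compute a discounted price."""
    return round(price * (1 - percent / 100), 2)

class Cart:
    """Component 2: a shopping cart that uses apply_discount."""
    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self, discount_percent=0):
        # Integration point: delegates to apply_discount
        return apply_discount(sum(self.items), discount_percent)

class UnitTests(unittest.TestCase):
    """Unit level: one component, tested in isolation."""
    def test_apply_discount(self):
        self.assertEqual(apply_discount(100, 10), 90.0)

class IntegrationTests(unittest.TestCase):
    """Integration level: the components working together."""
    def test_cart_total_with_discount(self):
        cart = Cart()
        cart.add(50)
        cart.add(50)
        self.assertEqual(cart.total(discount_percent=10), 90.0)
```

Running `python -m unittest` against a module like this executes both levels; system and acceptance testing then exercise the fully assembled product against end-to-end and business requirements.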

 Functional Testing
 Non-functional Testing
 Here’s a concise comparison highlighting the differences between functional testing
and non-functional testing:

Aspect | Functional Testing | Non-Functional Testing
Definition | Tests the functionality of the software against defined requirements. | Tests attributes such as performance, usability, and security that are not tied to specific functions.
Objective | Ensure that the application behaves as expected and meets functional requirements. | Assess the software's performance, reliability, and user experience.
Focus | What the system does (features and functions). | How the system performs under various conditions (quality attributes).
Testing Basis | Based on requirements and specifications. | Based on criteria such as performance benchmarks and user experience standards.
Examples of Testing | Unit Testing, Integration Testing, System Testing, User Acceptance Testing (UAT). | Performance Testing, Security Testing, Usability Testing, Compatibility Testing.
Tools | Selenium, QTP, TestComplete, JUnit. | JMeter, LoadRunner, SoapUI, Gatling.
Outcome | Validates that the software meets functional criteria. | Validates that the software meets quality and performance standards.
Responsibility | Typically performed by testers focused on functionality. | Often requires specialized skills (e.g., performance engineers, security analysts).
User Perspective | Mimics user actions to validate features. | Evaluates user experience and satisfaction with the software's performance.

 Conclusion

 While both functional and non-functional testing are crucial for ensuring software
quality, they focus on different aspects of the application. Functional testing ensures
that the software performs its intended tasks, while non-functional testing evaluates
how well the software performs those tasks under various conditions. A
comprehensive testing strategy includes both types to ensure a robust and user-
friendly product.
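The contrast above can be shown on one piece of code: a functional check asserts *what* the feature returns, while a non-functional check asserts *how well* it performs. The `search` function and the latency budget below are hypothetical assumptions for illustration.

```python
import time

def search(catalog, term):
    """Hypothetical feature under test: case-insensitive catalog search."""
    return [item for item in catalog if term.lower() in item.lower()]

catalog = ["Red Shirt", "Blue Jeans", "Red Scarf"] * 1000

# Functional check: does the feature return the correct results?
results = search(catalog, "red")
assert all("red" in item.lower() for item in results)

# Non-functional check: does it stay within an (assumed) latency budget?
start = time.perf_counter()
search(catalog, "red")
elapsed = time.perf_counter() - start
assert elapsed < 0.5, f"search too slow: {elapsed:.3f}s"
print(f"{len(results)} matches in {elapsed * 1000:.1f} ms")
```

Both assertions can pass or fail independently: a correct but slow implementation fails only the non-functional check, and a fast but wrong one fails only the functional check.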

 White-box Testing
 Smoke & Sanity Testing (ELF PPT)

What is Change-Related Testing?

Change-related testing involves assessing software after modifications, whether due to bug fixes, new features, enhancements, or updates, to ensure that these changes do not introduce new defects or disrupt existing functionality.

Key Objectives

1. Verify Stability: Ensure that existing features continue to work correctly after changes.
2. Identify Side Effects: Detect unintended consequences of code modifications that may affect
other parts of the application.
3. Ensure Quality: Maintain overall software quality by validating that new changes align with
quality standards.
Types of Change-Related Testing

1. Regression Testing:
o Definition: Focuses specifically on verifying that previously developed and tested
software still performs correctly after a change.
o Approach: Involves re-running a selection of test cases that cover affected areas of
the application.

2. Impact Testing:
o Definition: Evaluates the impact of specific changes on related functionalities.
o Approach: Involves analyzing the code changes and assessing which parts of the
application could be affected.

3. Smoke Testing:
o Definition: A preliminary test to check whether the basic functions of the software
are working after changes.
o Approach: Involves running a subset of test cases to verify critical functionalities.
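The regression-testing idea above can be reduced to a tiny harness: record a baseline of input/expected-output pairs, then re-run them after every change and flag any deviation. The `slugify` function and the recorded cases are hypothetical examples.

```python
def slugify(title):
    """Function under test (hypothetical): turn a title into a URL slug."""
    return "-".join(title.lower().split())

# Baseline captured before the change; any deviation is a regression.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("Testing   Levels", "testing-levels"),
    ("SDLC", "sdlc"),
]

def run_regression_suite():
    """Re-check every recorded expectation; return the failures."""
    failures = []
    for raw, expected in REGRESSION_CASES:
        actual = slugify(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures

failures = run_regression_suite()
print("PASS" if not failures else f"FAIL: {failures}")
```

In practice, frameworks such as pytest or JUnit manage the case inventory and reporting, and CI pipelines run the suite automatically on every commit, but the underlying mechanism is the same re-check-the-baseline loop.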

Best Practices for Change-Related Testing

1. Automation: Utilize automated testing tools to efficiently run regression tests,
especially in continuous integration/continuous deployment (CI/CD) environments.
2. Prioritization: Focus on high-risk areas or critical functionalities that are more likely
to be affected by changes.
3. Maintain Test Cases: Regularly update and refine test cases to ensure they remain
relevant and effective for the current version of the software.
4. Collaborate with Developers: Engage with developers to understand the scope and
implications of changes, aiding in identifying relevant test cases.
5. Document Changes: Keep detailed records of changes made to the codebase to
facilitate targeted testing and understanding of potential impacts.

Tools for Change-Related Testing

 Selenium: For automated functional testing.
 JUnit/NUnit: For unit testing in Java and .NET environments.
 Postman: For API testing.
 LoadRunner: For performance and regression testing.

Conclusion

Change-related testing is essential for maintaining software quality in dynamic development environments. By systematically verifying that modifications do not negatively impact existing functionality, teams can ensure a stable and reliable software product while facilitating ongoing development and enhancements.
