
SVVT Paper Unit 1


Q 1 - What does the term 'software evolution' refer to? What reasons
make software evolution necessary? (8)

A - Software Evolution
Software evolution refers to the process of developing and modifying software over time to
adapt to changing requirements, correct faults, improve performance, or enhance other
attributes. This process is continuous and is necessary to ensure that the software remains
useful, relevant, and efficient in a dynamic environment.

Reasons for Software Evolution

1. Changing User Requirements
- Adaptation to Business Needs: software must change as business processes and goals change.
- Customization: users demand features tailored to their specific workflows.

2. Technological Advancements
- Hardware Upgrades: new hardware platforms require the software to be ported or re-optimized.
- Emerging Technologies: new languages, frameworks, and platforms make older implementations obsolete.

3. Bug Fixes and Maintenance
- Correction of Faults: defects discovered after release must be fixed.
- Security Enhancements: newly discovered vulnerabilities must be patched.

4. Performance Improvements
- Optimization: code is refined to run faster and use fewer resources.
- Scalability: the software must handle growing numbers of users and data.

5. Regulatory Compliance
- Legal and Regulatory Changes: new laws and standards force updates to stay compliant.

6. Competition in the Market
- Market Dynamics: competing products push continuous feature and quality improvements.

Q 2 - Describe the steps included in SDLC framework. (8)

A - Steps in the Software Development Life Cycle (SDLC) Framework

The Software Development Life Cycle (SDLC) is a systematic process used in software
engineering to design, develop, test, and deploy high-quality software. It includes a series of
well-defined steps that help in the structured and efficient development of software
projects.

1. Requirement Gathering and Analysis


- Objective: The first step in the SDLC is to gather and analyze the requirements from
stakeholders, including clients, end-users, and business analysts.
- Key Activities:
- Identifying user needs
- Documenting requirements
- Feasibility analysis to determine if the project is achievable
- Outcome: A detailed requirement specification document that serves as a guideline for
the next phases.

2. System Design
- Objective: To create a blueprint of the system that meets the specified requirements.
- Key Activities:
- Designing the system architecture
- Developing data models, process models, and entity-relationship diagrams
- Designing user interfaces and system interfaces
- Outcome: Design documents, including a system architecture blueprint and detailed
design specifications.

3. Implementation (Coding)
- Objective: To transform the design into a functional software product.
- Key Activities:
- Writing code in the appropriate programming language
- Integrating different modules according to the design specifications
- Conducting preliminary testing by developers
- Outcome: Executable software components or modules.

4. Testing
- Objective: To identify and fix defects in the software to ensure it meets the specified
requirements.
- Key Activities:
- Performing various levels of testing such as unit testing, integration testing, system
testing, and acceptance testing
- Reporting and resolving bugs and issues
- Verifying that the software functions as intended under different scenarios
- Outcome: A thoroughly tested software product ready for deployment.

5. Deployment
- Objective: To release the software to the production environment where it can be used
by the end-users.
- Key Activities:
- Installing the software on the production servers
- Configuring the system for optimal performance
- Conducting a final round of testing in the live environment (if needed)
- Outcome: The software is live and accessible to the intended users.

6. Maintenance
- Objective: To ensure the software continues to function correctly and efficiently after
deployment.
- Key Activities:
- Monitoring the system for bugs, performance issues, and security vulnerabilities
- Providing updates and patches to fix issues or add new features
- Ensuring the software adapts to changing user needs and technological environments
- Outcome: An updated and smoothly functioning software product over time.

Q 3 - What is the difference between verification and validation? (5)

A - Difference Between Verification and Validation

Verification and validation are two critical processes in the software development lifecycle
that ensure the quality and correctness of the software product. While they are often
mentioned together, they serve different purposes and are conducted at different stages of
the development process. Below are six key differences between verification and validation:

1. Definition and Purpose


- Verification: Ensures that the software is being built correctly, according to the specified
design and requirements. It focuses on the process of development.
- Validation: Ensures that the right software is being built, meaning the final product meets
the user's needs and expectations. It focuses on the product itself.

2. Process Stage
- Verification: Occurs during the early stages of development, such as during design,
coding, and before actual testing begins. It is a static process.
- Validation: Occurs after verification, usually during or after the testing phase, when the
software is evaluated in a dynamic environment to check its functionality.

3. Activities Involved
- Verification: Involves activities such as reviews, inspections, walkthroughs, and desk-
checking. It does not involve actual code execution.
- Validation: Involves activities such as functional testing, system testing, user acceptance
testing (UAT), and beta testing, which require running the code.

4. Focus Area
- Verification: Focuses on internal aspects of the software, such as ensuring that the design
and code are correct and consistent with the requirements.
- Validation: Focuses on the external aspects of the software, ensuring that the final
product meets user needs and performs as expected in the real world.

5. Outcome
- Verification: The outcome of verification is a set of documents or artifacts that confirm
the software has been developed according to the specified requirements and design.
- Validation: The outcome of validation is a working software product that is confirmed to
meet the end-user's needs and is ready for deployment.

6. Nature of Process
- Verification: It is a preventive process that aims to catch defects early in the
development cycle, reducing the cost and effort required to fix them later.
- Validation: It is a corrective process that identifies defects in the final product and
ensures the software functions correctly before release.
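The contrast above can be sketched in code. This is a minimal, hypothetical illustration (the spec dictionary, `verify`, and `validate` names are invented for the example): verification inspects the artifact against its specification without executing it, while validation actually runs the product and checks it against the user's expectation.

```python
# Hypothetical sketch: verification is a static check of the artifact
# against the spec; validation is a dynamic check of the running product.

SPEC = {"function": "add", "params": 2}   # simplified design specification

def add(a, b):
    return a + b

def verify(func, spec):
    """Static check: is the code built according to the spec? (no execution)"""
    return (func.__name__ == spec["function"]
            and func.__code__.co_argcount == spec["params"])

def validate(func):
    """Dynamic check: does the running product do what the user needs?"""
    return func(2, 3) == 5                # the user expects 2 + 3 = 5

print(verify(add, SPEC))   # True — "are we building the product right?"
print(validate(add))       # True — "are we building the right product?"
```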

Q 4 - How is Software Testing defined? (5)


A - Software Testing is a critical process in the software development lifecycle that involves
evaluating and verifying that a software application or system meets the specified
requirements and functions correctly. It is designed to identify any errors, gaps, or missing
requirements in contrast to the actual requirements. The primary goal of software testing is
to ensure that the software is of high quality, reliable, and performs as expected.

Key Points in the Definition of Software Testing

1. Purpose of Software Testing


- Error Detection: Software testing aims to identify defects or bugs in the software that
could potentially cause it to fail or produce incorrect results.
- Quality Assurance: It ensures that the software meets the required standards and quality
benchmarks before it is released to the end-users.

2. Types of Software Testing


- Manual Testing: Involves human testers manually executing test cases without the use of
automation tools.
- Automated Testing: Utilizes automation tools to run tests on the software, which is
particularly useful for repetitive and regression testing.

3. Levels of Software Testing


- Unit Testing: Testing individual components or modules of the software to ensure they
function correctly.
- Integration Testing: Ensuring that different modules or components work together as
expected.
- System Testing: Testing the complete and integrated software system to verify that it
meets the specified requirements.
- Acceptance Testing: Conducted to determine whether the software is ready for
deployment, often involving end-users or stakeholders.
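The first two levels above can be illustrated with a small, hypothetical sketch using Python's `unittest` module (the `parse_price` and `total` functions are invented for the example): the unit test exercises one function in isolation, while the integration test exercises two modules working together.

```python
# Hypothetical sketch of two testing levels with Python's unittest.
import unittest

def parse_price(text):
    """Unit under test: converts a price string like "$4.50" to a float."""
    return float(text.strip("$"))

def total(prices):
    """Second module, which depends on parse_price."""
    return sum(parse_price(p) for p in prices)

class UnitLevel(unittest.TestCase):
    def test_parse_single_price(self):        # one function, in isolation
        self.assertEqual(parse_price("$4.50"), 4.5)

class IntegrationLevel(unittest.TestCase):
    def test_modules_work_together(self):     # both modules, combined
        self.assertEqual(total(["$1.00", "$2.50"]), 3.5)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)    # run both test classes
```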

4. Testing Techniques
- Black Box Testing: Focuses on testing the software's functionality without considering the
internal code structure.
- White Box Testing: Involves testing the internal structures or workings of an application,
as opposed to its functionality.
- Grey Box Testing: A combination of both black-box and white-box testing techniques.
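A minimal sketch of the first two techniques, using an invented `classify` function: the black-box tests choose inputs purely from the specification, while the white-box tests are chosen by reading the code and covering each branch, including the boundary and the error path.

```python
# Hypothetical sketch: the same function tested two ways.

def classify(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    return "minor" if age < 18 else "adult"

# Black-box: inputs drawn from the spec only; the code is a closed box.
assert classify(10) == "minor"
assert classify(30) == "adult"

# White-box: inputs chosen by reading the code, to cover every branch.
assert classify(18) == "adult"      # boundary between the two branches
try:
    classify(-1)                    # error branch exercised
except ValueError:
    pass
```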

5. Importance of Software Testing


- Risk Mitigation: By identifying and fixing defects early, software testing helps in reducing
the risks associated with software failures.
- Customer Satisfaction: Ensures that the final product meets user expectations, leading to
higher customer satisfaction and trust.

Q 5 - What is the significance of test cases and test oracles in software testing? (5)

A - Significance of Test Cases and Test Oracles in Software Testing

In software testing, test cases and test oracles are fundamental components that play a
crucial role in ensuring the effectiveness and accuracy of the testing process. Understanding
their significance is essential for producing reliable and high-quality software.

1. Test Cases

Definition
- A test case is a set of conditions or variables under which a tester determines whether a
software application is working as expected. It includes inputs, execution conditions, and
expected outcomes that help in verifying the software's functionality.

Significance
- Ensures Comprehensive Coverage: Test cases help in systematically testing all aspects of
the software, ensuring that all functional and non-functional requirements are met.
- Facilitates Consistency: By providing a clear set of instructions, test cases ensure that
testing is conducted in a consistent manner, regardless of who is performing the test.
- Documentation of Testing Process: Test cases serve as documentation for the testing
process, making it easier to understand what has been tested, how it was tested, and what
the outcomes were.
- Aids in Regression Testing: Test cases can be reused for regression testing, ensuring that
new changes do not negatively impact existing functionalities.
- Traceability: Test cases can be traced back to the original requirements, ensuring that all
requirements have been addressed during testing.

2. Test Oracles

Definition
- A test oracle is a mechanism or principle that determines whether the outcomes of a test
are correct. It provides the expected results against which the actual results of the software
are compared during testing.

Significance
- Verification of Correctness: Test oracles are essential for verifying the correctness of test
outcomes, ensuring that the software behaves as expected.
- Reduces Human Error: By providing a predefined expected outcome, test oracles reduce
the likelihood of human error in evaluating test results.
- Automated Testing: In automated testing, test oracles play a crucial role in automatically
determining the pass or fail status of a test case, enhancing the efficiency and reliability of
the testing process.
- Facilitates Early Bug Detection: With a clear expected outcome, test oracles help in
quickly identifying discrepancies between the actual and expected results, leading to early
detection of defects.
- Supports Complex Testing Scenarios: For complex testing scenarios where the expected
results are not straightforward, test oracles provide a reliable reference for validating the
outcomes.
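The relationship between the two concepts can be sketched as follows. This is a hypothetical example (the `discount` function and the test data are invented): each test case bundles an input and an expected outcome, and the oracle is the comparison that turns the actual result into a pass/fail verdict.

```python
# Hypothetical sketch: test cases supply inputs and expected outcomes;
# the oracle is the comparison deciding whether each case passes.

def discount(price, percent):
    """System under test: applies a percentage discount."""
    return round(price * (1 - percent / 100), 2)

test_cases = [
    # (inputs, expected outcome — the oracle's reference value)
    ((100.0, 10), 90.0),
    ((50.0, 0), 50.0),
    ((80.0, 25), 60.0),
]

for args, expected in test_cases:
    actual = discount(*args)
    verdict = "PASS" if actual == expected else "FAIL"   # oracle comparison
    print(args, "->", actual, verdict)
```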

Q 6 - List the popular SDLC models followed in industry and give a brief overview of each. (8)

A - Popular SDLC Models Followed in the Industry

The Software Development Life Cycle (SDLC) is a process used by the software industry to
design, develop, and test high-quality software. Various SDLC models are employed by
organizations based on the specific needs of the project, each offering different approaches
to software development. Below is a list of popular SDLC models followed in the industry,
along with a brief overview of each.

1. Waterfall Model

Overview
- Sequential Approach: The Waterfall model is one of the oldest and most traditional SDLC
models. It follows a linear and sequential approach where each phase must be completed
before the next one begins.
- Phases: The phases typically include Requirements Gathering, System Design,
Implementation, Integration and Testing, Deployment, and Maintenance.
- Characteristics: It is easy to manage due to its rigidity, as each phase has specific
deliverables. However, it is not flexible to accommodate changes after the initial stages.

2. V-Model (Validation and Verification Model)

Overview
- Simultaneous Testing and Development: The V-Model, also known as the Verification and
Validation model, is an extension of the Waterfall model. In this model, for every development
stage, there is a corresponding testing phase.
- Phases: The development phases include Requirements Analysis, System Design, and
Implementation, which are mirrored by corresponding testing phases such as Unit Testing,
Integration Testing, and System Testing.
- Characteristics: The V-Model emphasizes testing at each stage, making it ideal for projects
with well-defined requirements. However, like the Waterfall model, it is not well-suited for
projects where requirements are likely to change.

3. Iterative Model

Overview
- Incremental Development: The Iterative model focuses on building the software
incrementally, where the software is developed and refined through repeated cycles or
iterations.
- Phases: Each iteration involves Planning, Design, Implementation, Testing, and Evaluation.
- Characteristics: This model allows partial implementation of the system and refines it
through iterations based on feedback. It is useful when the requirements are not fully
understood at the beginning of the project.

4. Spiral Model

Overview
- Risk-Driven Approach: The Spiral model combines elements of both iterative and waterfall
models, with a strong emphasis on risk analysis. It is represented as a spiral with multiple
loops, each loop representing a phase in the process.
- Phases: The main phases include Planning, Risk Analysis, Engineering, and Evaluation. Each
loop of the spiral can be seen as a cycle that covers these phases.
- Characteristics: This model is particularly useful for large, complex, and high-risk projects.
It provides flexibility to incorporate changes and reduce risks through continuous refinement.

5. Agile Model
Overview
- Flexibility and Adaptability: The Agile model is a highly flexible and iterative approach that
promotes continuous development and testing. It emphasizes collaboration, customer
feedback, and small, frequent releases.
- Phases: Agile does not have fixed phases but involves repeated cycles of Planning,
Development, Testing, and Review within short time frames, known as sprints.
- Characteristics: Agile is ideal for projects where requirements are expected to change
frequently. It allows for adaptive planning and encourages rapid and flexible responses to
change.

6. DevOps Model

Overview
- Integration of Development and Operations: The DevOps model focuses on continuous
integration and continuous delivery (CI/CD) by combining development and operations
processes.
- Phases: The DevOps lifecycle includes Continuous Development, Continuous Testing,
Continuous Integration, Continuous Deployment, and Continuous Monitoring.
- Characteristics: DevOps emphasizes collaboration between development and operations
teams, automating the processes to enhance speed and efficiency. It is well-suited for projects
that require fast deployment and frequent updates.

Q 7 - Describe error, fault, and failure in the context of software testing. (8)

A - Description of Error, Fault, and Failure in the Context of Software Testing

In software testing, the terms "Error," "Fault," and "Failure" are often used to describe
different aspects of issues that can arise in software systems. Understanding these concepts
is essential for effectively identifying and addressing problems during the software
development process.

1. Error

Definition
- An error refers to a human mistake or oversight made during the software development
process. It typically occurs during activities such as requirement gathering, design, coding, or
documentation.

Significance in Software Testing


- Errors are the root cause of faults in the software. If an error is not detected and
corrected early in the development process, it can lead to faults in the software, which may
eventually cause failures.
- Example: A developer may incorrectly implement a calculation formula in the code,
leading to an error.

2. Fault (Bug/Defect)

Definition
- A fault, also known as a bug or defect, is a manifestation of an error in the software. It
represents a flaw in the software code or logic that can potentially cause the system to
behave incorrectly or produce incorrect results.

Significance in Software Testing


- Faults are identified during the testing phase. Detecting and fixing faults is crucial to
improving software quality and preventing failures in the final product.
- Example: The incorrect calculation formula from the previous error may result in a fault
that causes the software to produce incorrect outputs.

3. Failure

Definition
- A failure occurs when the software does not perform its intended function or produces
incorrect results during execution. A failure is the visible result of one or more underlying
faults in the software.

Significance in Software Testing


- Failures are what end-users experience when the software does not work as expected.
The goal of testing is to uncover faults before they lead to failures in the production
environment.
- Example: When the faulty calculation formula is executed in the software, it leads to a
failure where the user sees incorrect results.
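The calculation-formula example running through this answer can be traced in a few lines of code. This is a hypothetical sketch (the `average` function is invented): the error is the developer's mistake, the fault is the flawed line that results, and the failure is the wrong output the user observes when that line executes.

```python
# Hypothetical sketch tracing the error -> fault -> failure chain.

def average(a, b):
    # ERROR: the developer meant to divide by 2.
    # FAULT: the flawed line now present in the code.
    return (a + b) / 3

result = average(4, 8)   # executing the fault...
print(result)            # FAILURE: the user sees 4.0 instead of 6.0
```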
