
OOSE Important questions:

2 marks:
1. Define the Command pattern in software design.
2. What is the purpose of the Strategy pattern?
3. Explain the Observer pattern briefly.
4. Define the Proxy pattern and its usage.
5. What is a Facade in software architecture?
6. Explain Regression testing.
7. Define verification, validation, testing, debugging.
8. Compare black box testing and white box testing.
9. Define Unit Testing and its importance in software development.
10. Explain symbolic execution testing.
11. List out various factors affecting the project plan.
12. What are the main components of a Deployment Pipeline in DevOps?
13. Write about DevOps.
14. Explain PERT.
15. Milestones and deliverables.
16. Publish-Subscribe design pattern.
17. Different architectural designs.
Big questions:
1. Black box, integration, and system testing: where each is used, and the procedure with an example
2. Regression testing
3. Symbolic execution with example
4. Explain software configuration management
5. Deployment pipeline in DevOps
6. Model-View-Controller
7. Different architectural styles

BIG QUESTIONS:
1&2
Software Testing:
Software testing is the process of verifying the correctness of software by considering all of its attributes (reliability, scalability, portability, reusability, usability) and evaluating the execution of software components in order to find bugs, errors, or defects.
The different types of Software Testing:
The purpose of each testing type is to confirm the behavior of the AUT (Application Under Test).
Software testing is mainly divided into two parts, which are as follows:

Manual Testing:
Testing any software or an application according to the client's needs without using any automation tool
is known as manual testing.
Classification of Manual Testing
In software testing, manual testing can be further classified into three different types of testing, which
are as follows:
 White Box Testing
 Black Box Testing
 Grey Box Testing

White Box Testing


In white-box testing, the developer will inspect every line of code before handing it over to the testing
team or the concerned test engineers.

Black Box Testing


Another type of manual testing is black-box testing. In this testing, the test engineer analyzes the software against the requirements, identifies any defects or bugs, and sends them back to the development team. The developers then fix those defects, do one round of white box testing, and send the software back to the testing team.
The main objective of black box testing is to check the software against the business needs or the customer's requirements.
In other words, black box testing is the process of checking the functionality of an application as per the customer's requirements. The source code is not visible in this testing; that is why it is known as black-box testing.
Types of Black Box Testing
Black box testing is further categorized into two parts, discussed below:
 Functional Testing
 Non-functional Testing

Functional Testing
The systematic checking of all components against the requirement specifications is known as functional testing.
Functional testing is also known as Component testing.
In functional testing, all components are tested by giving input values, defining the expected output, and validating the actual output against the expected value.
Functional testing is a part of black-box testing, as its emphasis is on the application's requirements rather than the actual code. The test engineer tests only the program's behavior, not its internal code.
Types of Functional Testing
The diverse types of Functional Testing contain the following:
 Unit Testing
 Integration Testing
 System Testing

1. Unit Testing
Unit testing is the first level of functional testing. In it, the test engineer tests each module of an application independently; testing every module's functionality on its own is called unit testing.
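As an illustration (a hypothetical example, not from the syllabus material), here is a minimal unit test of a single function in isolation, using Python's built-in unittest module:

```python
import unittest

def add(a, b):
    """Unit under test: a tiny, self-contained function (hypothetical example)."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        # Give inputs, define the expected output, validate actual vs. expected.
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()
```
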
2. Integration Testing
Once unit testing has been completed successfully, we move on to integration testing.
It is the second level of functional testing; testing the data flow between dependent modules, or the interface between two features, is called integration testing.
Types of Integration Testing
Integration testing is also further divided into the following parts:
 Incremental Testing
 Non-Incremental Testing
Incremental Integration Testing
1. Whenever there is a clear relationship between modules, we go for incremental integration testing. Suppose we take two modules and analyze the data flow between them to check whether they are working correctly.
2. If these modules are working fine, then we can add one more module and test again.
3. And we can continue with the same process to get better results.
In other words, incrementally adding modules and testing the data flow between them is known as incremental integration testing.
Types of Incremental Integration Testing
Incremental integration testing can further classify into two parts, which are as follows:
 Top-down Incremental Integration Testing
 Bottom-up Incremental Integration Testing
1. Top-down Incremental Integration Testing
In this approach, we will add the modules step by step or incrementally and test the data flow between
them. We have to ensure that the modules we are adding are the child of the earlier ones.
2. Bottom-up Incremental Integration Testing
In the bottom-up approach, we will add the modules incrementally and check the data flow between
modules. And also, ensure that the module we are adding is the parent of the earlier ones.

Non-Incremental Integration Testing / Big Bang Method

Whenever the data flow is complex and it is very difficult to classify modules as parent and child, we go for the non-incremental integration approach. The non-incremental method is also known as the Big Bang method.

3. System Testing
Whenever we are done with unit and integration testing, we can proceed to system testing.
In system testing, the test environment mirrors the production environment. It is also known as end-to-end testing.

Non-functional Testing
The next part of black-box testing is non-functional testing. It provides detailed information about a software product's performance and the technologies it uses.
Non-functional testing helps us minimize production risk and the related costs of the software.
Non-functional testing is a combination of performance, load, stress, usability, and compatibility testing.

Types of Non-functional Testing


Non-functional testing is categorized into different types, which we discuss below:
 Performance Testing
 Usability Testing
 Compatibility Testing
1. Performance Testing
In this type of non-functional testing, the test engineer focuses on aspects such as the response time, load, scalability, and stability of the software or application.
Classification of Performance Testing
Performance testing includes the various types of testing, which are as follows:
 Load Testing
 Stress Testing
 Scalability Testing
 Stability Testing
Load Testing
While executing performance testing, we apply a load to the application to check its performance; this is known as load testing. Here, the load can be less than or equal to the desired load.
Stress Testing
It is used to analyze the robustness of the software beyond its normal functional limits.
Scalability Testing
Analyzing the application's performance while increasing or decreasing the load in particular increments is known as scalability testing.
Stability Testing
Stability testing is a procedure where we evaluate the application's performance by applying a load for a set period of time.
2. Usability Testing
Another type of non-functional testing is usability testing. In usability testing, we analyze the user-friendliness of an application and detect bugs in the software's end-user interface.
3. Compatibility Testing
In compatibility testing, we check the functionality of an application in specific hardware and software environments. Only once the application is functionally stable do we proceed to compatibility testing.
Grey Box Testing
Another part of manual testing is grey box testing. It is a combination of black box and white box testing.
Automation Testing
The most significant part of software testing is automation testing. It uses specialized tools to execute manually designed test cases without any human intervention.
Automation testing is the best way to enhance the efficiency, productivity, and coverage of software testing.
It is used to re-run, quickly and repeatedly, test scenarios that were originally executed manually.

Regression Testing:
Regression testing is a black box testing technique. It is used to verify that a code change in the software does not impact the existing functionality of the product. Regression testing makes sure that the product works fine with new functionality, bug fixes, or any change to an existing feature.
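As an illustration, a tiny regression suite in Python's unittest; the discounted_price function and its tests are hypothetical. The point is simply that the same previously passing tests are re-run after every change:

```python
import unittest

def discounted_price(price, rate):
    """Existing feature: apply a percentage discount (hypothetical example)."""
    return round(price * (1 - rate), 2)

class RegressionSuite(unittest.TestCase):
    # These tests passed before the latest code change; re-running them
    # after the change verifies that existing behaviour is unaffected.
    def test_standard_discount(self):
        self.assertEqual(discounted_price(100.0, 0.10), 90.0)

    def test_zero_discount(self):
        self.assertEqual(discounted_price(50.0, 0.0), 50.0)

if __name__ == "__main__":
    unittest.main()
```
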
Types of Regression Testing
The different types of Regression Testing are as follows:
1) Unit Regression Testing [URT]
In this, we test only the changed unit, not the impact area, even though the change may affect other components of the same module.
2) Regional Regression Testing [RRT]
In this, we test the modification along with the impact area or regions; this is called regional regression testing. We test the impact area because if there are dependent modules, the change will affect the other modules as well.
3) Full Regression Testing [FRT]
Suppose that during the second or third release of the product, the client asks to add 3-4 new features and also to fix some defects from the previous release. The testing team then performs an Impact Analysis and determines that these modifications require testing the entire product; this is called Full Regression testing.

Advantages of Regression Testing


 Regression Testing increases the product's quality.
 It ensures that bug fixes or changes do not impact the existing functionality of the product.
 Automation tools can be used for regression testing.
 It makes sure the issues fixed do not occur again.
Disadvantages of Regression Testing
Though there are several advantages of Regression Testing, there are disadvantages as well:
 Regression Testing must be done even for small changes in the code, because even a slight change can create issues in the existing functionality.
 If automation is not used in the project for testing, executing the tests again and again becomes a time-consuming and tedious task.
5)
What is a Deployment Pipeline and How to Build it?

A deployment pipeline is an essential DevOps testing strategy that automates the software delivery process, ensuring rapid and reliable
application deployments. In this article, we will explore the concept of a
deployment pipeline, its benefits, key components, main stages, how to
build one, and popular tools to implement it effectively.

What is a Deployment Pipeline?


A deployment pipeline is a crucial concept in the field of software
development and DevOps. It represents an automated and streamlined
process that facilitates the continuous integration, testing, and
deployment of code changes from development to production
environments. The primary goal of a deployment pipeline is to ensure that
software releases are efficient, reliable, and maintain consistent quality.

● At its core, a deployment pipeline is a series of automated stages that the code changes must pass through before being deployed to production.
● These stages typically include code compilation, automated testing, and
deployment to various environments, such as staging and production.
● Each stage is designed to verify the quality and functionality of the code,
and any issues identified during the process are addressed before
proceeding to the next stage.

Benefits of a Deployment Pipeline


The key benefits of a well-implemented deployment pipeline are listed
below:

● First and foremost, it enables faster time-to-market for new features and
bug fixes by automating and expediting the software delivery process.
● The pipeline allows for early detection and resolution of defects, reducing
the risk of deployment failures and costly rollbacks.
● It fosters collaboration between development, testing, and operations
teams, promoting a culture of shared responsibility and continuous
improvement.
● Automated processes reduce manual interventions, enabling faster and
more frequent releases, accelerating time-to-market for new features and
bug fixes.
● Automation testing at each stage ensures that bugs and issues are
identified early in the development cycle, minimizing the cost and effort
of fixing them.
● The pipeline ensures consistency across all environments, reducing the
chances of configuration errors that may cause discrepancies between
development, staging, and production environments.
● Deployment pipelines encourage collaboration between development,
testing, and operations teams, promoting a culture of shared responsibility
and faster feedback loops.
● Automated testing and validation processes help catch issues before they
reach production, leading to fewer deployment failures and rollbacks.
● Continuous integration and automated security checks ensure that code
meets security and compliance standards before deployment.

Key Components of a Deployment Pipeline


The key components of a deployment pipeline are essential building
blocks that enable the automation and smooth flow of code changes from
development to production. These components work together to ensure
the continuous integration, testing, and deployment of software, resulting
in efficient and reliable software releases. The main components of a
deployment pipeline include:-

1. Version Control System (VCS): It is a central repository that tracks and manages code changes made by developers. It allows multiple developers to collaborate on the same codebase, facilitates versioning, and provides a historical record of changes. Popular VCS options include Git, SVN, and Mercurial.
2. Build Server: It is responsible for automatically compiling and
packaging the code whenever changes are pushed to the VCS. It takes the
code from the repository and transforms it into executable artifacts that
are ready for testing and deployment. Continuous integration tools
like Jenkins, GitLab CI/CD, and Travis CI are often used to set up build
servers.
3. Automated Testing: It includes various types of tests, such as unit
tests, integration tests, functional tests, and performance tests. Automated
testing ensures that any defects or issues are caught early in the
development process, reducing the risk of introducing bugs into
production.
4. Artifact Repository: The artifact repository stores the build
artifacts generated by the build server. These artifacts are the build
process output and represent the compiled and packaged code ready for
deployment. A centralized artifact repository ensures consistent and
reliable deployments across different environments, such as staging and
production.
5. Deployment Automation: Deployment automation streamlines the
application deployment process to various environments. It involves
setting up automated deployment scripts and configuration
management tools to ensure a consistent deployment process. By
automating the deployment process, organizations can reduce manual
errors and maintain high consistency in their deployments.
By integrating these key components into a well-designed deployment
pipeline, software development teams can achieve faster, more reliable,
and higher-quality software releases.

Main Stages of a Deployment Pipeline


A deployment pipeline consists of a series of automated stages that code
changes must pass through before being deployed to production. Each
stage is designed to verify the quality, functionality, and compatibility of
the code, ensuring that the software release is efficient, and reliable, and
maintains consistent quality across different environments.
We will explore the main stages of a deployment pipeline and their
significance in the software development process.
1. Commit Stage:
The deployment pipeline starts with the Commit stage, triggered by code
commits to the version control system (VCS). In this stage, the code
changes are fetched from the VCS, and the build server automatically
compiles the code, running any pre-build tasks required. The code is then
subjected to static code analysis to identify potential issues, such as
coding standards violations or security vulnerabilities. If the code passes
these initial checks, the build artifacts are generated, which serve as the
foundation for subsequent stages.
2. Automated Testing Stage:
After the successful compilation and artifact generation in the Commit
stage, the next stage involves automated testing. Various tests are
executed in this stage to ensure the code’s functionality, reliability, and
performance.
● Unit tests, which validate individual code components, are run first,
followed by integration tests that verify interactions between different
components or modules.
● Functional tests check whether the application behaves as expected from
an end-user perspective.

3. Staging Deployment:
The application is deployed to a staging environment once the code
changes have successfully passed the automated testing stage. The
staging environment resembles the production environment, allowing for
thorough testing under conditions that simulate real-world usage. This
stage provides a final check before the application is promoted to
production.

4. Production Deployment:
The final stage of the deployment pipeline is the Production Deployment
stage. Once the application has passed all the previous stages and
received approval from stakeholders, it is deployed to the production
environment.

● This stage requires extra caution as any issues or bugs introduced into the
production environment can have significant consequences.
● To minimize risk, organizations often use deployment strategies such as
canary releases, blue-green deployments, or feature toggles to control the
release process and enable easy rollback in case of any problems.
● Continuous production environment monitoring is also essential to ensure
the application’s stability and performance.

How to Build a Deployment Pipeline?


Building a deployment pipeline is crucial in the software development
and DevOps process. It helps automate and streamline the process of
deploying code changes from development to production environments.

Here’s a step-by-step guide on how to build a deployment pipeline:


1. Define Your Pipeline Requirements

● Identify your project’s requirements, including target environments (development, testing, staging, production), technologies, and tools.
● Decide on the frequency of deployments (continuous deployment,
continuous integration, etc.).
2. Select a Version Control System (VCS)

● Use a version control system like Git to manage your source code and
track changes.
3. Choose a Build Automation Tool:

● Select a build tool like Jenkins, Travis CI, CircleCI, or GitLab CI/CD to
automate the build and testing process.
4. Setup Continuous Integration (CI)

● Configure your chosen CI tool to monitor your VCS repository for changes and trigger builds automatically.
● Set up build scripts or configuration files (e.g., Jenkinsfile, .travis.yml) to define build steps and dependencies.
5. Automate Testing

● Integrate unit, integration, and functional tests into your pipeline.


● Ensure that the pipeline fails if any tests do not pass, preventing code
with issues from progressing further.
6. Artifact Generation

● Create deployable artifacts (e.g., Docker images, JAR files) from your
codebase after successful testing.
7. Implement Continuous Deployment (CD):

● Set up deployment stages for different environments (e.g., staging, production).
● Use infrastructure as code tools.
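
To make the stages concrete, here is a toy Python sketch of a fail-fast pipeline driver. The stage names and echo commands are placeholders for a project's real build, test, and deploy steps, which in practice would live in the CI tool's own configuration (e.g., a Jenkinsfile):

```python
import subprocess
import sys

# Hypothetical stage commands; substitute your project's real build/test/deploy steps.
STAGES = [
    ("build",  ["echo", "compiling and packaging..."]),
    ("test",   ["echo", "running unit and integration tests..."]),
    ("deploy", ["echo", "deploying artifact to staging..."]),
]

def run_pipeline():
    for name, cmd in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # A failing stage stops the pipeline, mirroring fail-fast CI behaviour.
            print(f"stage '{name}' failed; aborting pipeline")
            sys.exit(result.returncode)
    print("pipeline succeeded")

if __name__ == "__main__":
    run_pipeline()
```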

Deployment Pipeline Tools


There are several popular deployment pipeline tools available that
facilitate the automation and orchestration of the software delivery
process. These tools help set up continuous integration, continuous
deployment, and continuous delivery pipelines, ensuring a streamlined
and efficient development workflow.
Here are some widely used deployment pipeline tools:
● Jenkins: Jenkins’s rich plugin ecosystem allows customization and
integration with various tools, enhancing its adaptability to diverse
development environments. The integration of Jenkins with
BrowserStack is easy and impactful.
● GitLab CI/CD: GitLab CI/CD automates software integration, testing,
and deployment within the GitLab platform, streamlining development
workflows. BrowserStack offers seamless integration with GitLab,
facilitating efficient cross-browser and device compatibility testing.
● Travis CI: Travis CI specializes in automating build, test, and
deployment workflows, ensuring continuous integration and delivery.
● CircleCI: It is a robust automation tool in the software development
realm. It excels in automating build, test, and deployment procedures,
fostering a seamless continuous integration and delivery environment.
● GitHub Actions: It stands as a potent force in automating software development workflows. Its core strength lies in automating build, test, and deployment operations, creating a smooth ecosystem for continuous integration and delivery.


6. Prototype Design Pattern
The Prototype Design Pattern is a creational pattern that enables the creation of new objects by copying an existing object. Prototype allows us to hide the complexity of making new instances from the client. The concept is to copy an existing object rather than create a new instance from scratch, something that may include costly operations. The existing object acts as a prototype and contains the state of the object.
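A minimal Python sketch using the standard copy module; the Shape class is a hypothetical example, not a prescribed implementation:

```python
import copy

class Shape:
    """Prototype: an object that can clone itself (illustrative sketch)."""
    def __init__(self, color, points):
        self.color = color
        self.points = points  # state that may be expensive to recompute

    def clone(self):
        # Deep copy so the clone's mutable state is independent of the original.
        return copy.deepcopy(self)

# Usage: copy an existing, already-configured object instead of rebuilding it.
prototype = Shape("red", [(0, 0), (1, 0), (1, 1)])
clone = prototype.clone()
clone.color = "blue"             # modifying the clone...
assert prototype.color == "red"  # ...leaves the prototype untouched
```
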
Builder Design Pattern
The Builder Design Pattern is a creational pattern used in software design to construct a complex object step by step. It allows the construction of a product in a step-by-step fashion, where the construction process can vary based on the type of product being built. The pattern separates the construction of a complex object from its representation, allowing the same construction process to create different representations.
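A minimal Python sketch; the Pizza product and PizzaBuilder are hypothetical examples:

```python
class Pizza:
    """The complex product being assembled (hypothetical example)."""
    def __init__(self):
        self.size = None
        self.toppings = []

    def __repr__(self):
        return f"Pizza(size={self.size}, toppings={self.toppings})"

class PizzaBuilder:
    """Builder: constructs the product step by step; each step returns self
    so the construction steps can be chained and varied per product."""
    def __init__(self):
        self._pizza = Pizza()

    def set_size(self, size):
        self._pizza.size = size
        return self

    def add_topping(self, topping):
        self._pizza.toppings.append(topping)
        return self

    def build(self):
        return self._pizza

# The same construction process can yield different representations.
veggie = PizzaBuilder().set_size("large").add_topping("olives").add_topping("peppers").build()
print(veggie)
```
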
Model-View-Controller
The Model-View-Controller (MVC) is an architectural pattern that separates an application into three main logical components: the model, the view, and the controller. Each of these components is built to handle a specific development aspect of an application. MVC is one of the most frequently used industry-standard web development patterns for creating scalable and extensible projects.

MVC Components
Following are the components of MVC:
Model
The Model component corresponds to all the data-related logic that the user works with. This can represent either the data that is being transferred between the View and Controller components or any other business-logic-related data. For example, a Customer object will retrieve the customer information from the database, manipulate it and update its data back to the database, or use it to render data.
View
The View component is used for all the UI logic of the application. For example, the Customer view will include all the UI components such as text boxes, dropdowns, etc. that the final user interacts with.
Controller
Controllers act as an interface between Model and View components to process all the
business
logic and incoming requests, manipulate data using the Model component and interact
with the
Views to render the final output. For example, the Customer controller will handle all
the
interactions and inputs from the Customer View and update the database using the
Customer
Model. The same controller will be used to view the Customer data.
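
A minimal Python sketch of the three components; the Customer classes are hypothetical, with an in-memory dict standing in for the database:

```python
class CustomerModel:
    """Model: data and business logic (a dict stands in for the database)."""
    def __init__(self):
        self._db = {1: "Alice"}

    def get(self, customer_id):
        return self._db.get(customer_id)

    def update(self, customer_id, name):
        self._db[customer_id] = name

class CustomerView:
    """View: UI logic only; here it simply renders to the console."""
    def render(self, name):
        print(f"Customer: {name}")

class CustomerController:
    """Controller: handles input, manipulates the Model, asks the View to render."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def show(self, customer_id):
        self.view.render(self.model.get(customer_id))

    def rename(self, customer_id, name):
        self.model.update(customer_id, name)

controller = CustomerController(CustomerModel(), CustomerView())
controller.show(1)           # Customer: Alice
controller.rename(1, "Bob")
controller.show(1)           # Customer: Bob
```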

2 MARKS:
1. **Define the Command pattern in software design.**
- The Command pattern is a behavioral design pattern that encapsulates a request as
an object, thereby allowing parameterization of clients with queues, requests, and
operations. It decouples the sender of a request from the receiver, which executes the
request. This pattern enables the separation of the request for a service from the
execution of that service, providing flexibility, extensibility, and support for undoable
operations.
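
A minimal sketch in Python, assuming a hypothetical Light receiver and RemoteControl invoker (illustrative only, not a prescribed implementation):

```python
class Light:
    """Receiver: knows how to perform the actual work (hypothetical example)."""
    def on(self):  print("light on")
    def off(self): print("light off")

class LightOnCommand:
    """Command: encapsulates a request as an object, with undo support."""
    def __init__(self, light):
        self.light = light
    def execute(self): self.light.on()
    def undo(self):    self.light.off()

class RemoteControl:
    """Invoker: triggers commands without knowing the receiver."""
    def __init__(self):
        self.history = []
    def press(self, command):
        command.execute()
        self.history.append(command)   # kept so operations can be undone
    def undo_last(self):
        if self.history:
            self.history.pop().undo()

remote = RemoteControl()
remote.press(LightOnCommand(Light()))  # light on
remote.undo_last()                     # light off
```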

2. **What is the purpose of the Strategy pattern?**


- The Strategy pattern is a behavioral design pattern that defines a family of
algorithms, encapsulates each one, and makes them interchangeable. The purpose of
this pattern is to allow the client to choose an algorithm from a family of algorithms
dynamically, without coupling the client to a specific implementation. It promotes
code reuse, allows for easier maintenance, and enables algorithms to be modified or
extended without affecting the client code.
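
A minimal Python sketch; the ascending/descending strategies and the Sorter context are hypothetical examples:

```python
# Each strategy encapsulates one interchangeable algorithm.
def ascending(data):  return sorted(data)
def descending(data): return sorted(data, reverse=True)

class Sorter:
    """Context: delegates to whichever strategy the client selects at runtime,
    without being coupled to any specific implementation."""
    def __init__(self, strategy):
        self.strategy = strategy

    def sort(self, data):
        return self.strategy(data)

print(Sorter(ascending).sort([3, 1, 2]))   # [1, 2, 3]
print(Sorter(descending).sort([3, 1, 2]))  # [3, 2, 1]
```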

3. **Explain the Observer pattern briefly.**


- The Observer pattern is a behavioral design pattern where an object, known as the
subject, maintains a list of its dependents, called observers, and notifies them of any
state changes, usually by calling one of their methods. This pattern establishes a one-
to-many dependency between objects, so that when the subject's state changes, all its
observers are notified and updated automatically. It is widely used to implement
distributed event handling systems, decoupling the sender and receiver.
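
A minimal Python sketch, with a hypothetical Subject and LoggingObserver:

```python
class Subject:
    """Maintains a list of observers and notifies them of state changes."""
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for observer in self._observers:
            observer.update(state)   # push the new state to every dependent

class LoggingObserver:
    def update(self, state):
        print(f"observer saw new state: {state}")

subject = Subject()
subject.attach(LoggingObserver())
subject.set_state(42)   # observer saw new state: 42
```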

4. **Define the Proxy pattern and its usage.**


- The Proxy pattern is a structural design pattern that provides a surrogate or
placeholder for another object to control access to it. It acts as an intermediary
between a client object and a target object, allowing the proxy to control access, add
functionality, or defer the creation of the target object until it is actually needed.
Proxies are commonly used for lazy initialization, access control, logging, caching,
and monitoring.
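
A minimal Python sketch of a lazy-initializing, logging proxy; the Database class is a hypothetical stand-in for an expensive real subject:

```python
class Database:
    """Real subject: expensive to create (hypothetical example)."""
    def __init__(self):
        print("opening connection (costly)...")
    def query(self, sql):
        return f"rows for: {sql}"

class DatabaseProxy:
    """Proxy: defers creation of the real object until first use (lazy
    initialization) and is a natural place for access control, logging,
    or caching."""
    def __init__(self):
        self._real = None
    def query(self, sql):
        if self._real is None:
            self._real = Database()   # created only when actually needed
        print(f"proxy log: {sql}")    # added functionality: logging
        return self._real.query(sql)

db = DatabaseProxy()            # no connection opened yet
print(db.query("SELECT 1"))     # connection opened on first query
```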

5. **What is a Facade in software architecture?**


- A Facade is a structural design pattern that provides a simplified interface to a
larger body of code, such as a complex subsystem or API. It hides the complexities of
the system and provides a unified interface to interact with it. The purpose of a Facade
is to improve readability, maintainability, and ease of use by providing a higher-level
interface that shields the client from the details of the subsystem.
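
A minimal Python sketch; the home-theater subsystem classes are hypothetical examples:

```python
# Complex subsystem classes (hypothetical examples).
class Amplifier:
    def on(self): print("amplifier on")
class Projector:
    def wide_screen(self): print("projector in widescreen mode")
class Player:
    def play(self, movie): print(f"playing {movie}")

class HomeTheaterFacade:
    """Facade: one simple method hides the subsystem's setup details."""
    def __init__(self):
        self.amp, self.projector, self.player = Amplifier(), Projector(), Player()

    def watch_movie(self, movie):
        self.amp.on()
        self.projector.wide_screen()
        self.player.play(movie)

HomeTheaterFacade().watch_movie("Inception")  # client makes a single call
```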

6. **Explain Regression testing.**


- Regression testing is a type of software testing that verifies that recent code
changes have not adversely affected existing features. It involves re-running tests that
have been previously executed to ensure that modifications to the software have not
introduced new bugs or caused existing functionalities to fail. The goal of regression
testing is to maintain the integrity of the software and to ensure that it continues to
function correctly after changes are made.

7. **Define verification, validation, testing, debugging.**


- **Verification**: The process of evaluating software to ensure that it meets the
specified requirements and standards. It involves activities such as reviews,
walkthroughs, and inspections.
- **Validation**: The process of evaluating software to ensure that it satisfies the
user's needs and requirements. It involves activities such as testing, user feedback, and
acceptance criteria.
- **Testing**: The process of executing a software system to identify differences
between expected and actual results, and to evaluate the quality of the software.
- **Debugging**: The process of identifying, isolating, and fixing errors, or bugs,
in software code.

8. **Compare blackbox testing and whitebox testing.**


- **Blackbox Testing**: Tests the functionality of a software system without
knowledge of its internal structure or implementation details. Testers focus on inputs
and outputs and do not have access to the source code. Examples include functional
testing, user acceptance testing, and system testing.
- **Whitebox Testing**: Tests the internal structure, logic, and code of a software
system. Testers have access to the source code and use techniques such as code
coverage, path testing, and branch testing to ensure that all paths are exercised.
Examples include unit testing, integration testing, and code review.

9. **Define Unit Testing and its importance in software development.**


- **Unit Testing**: A software testing technique where individual units or
components of a software system are tested in isolation to verify that they behave as
expected. It involves writing and executing tests for each unit, typically at the code
level, to validate its correctness.
- **Importance**: Unit testing is important because it helps identify defects early in
the development cycle, promotes code quality and maintainability, provides rapid
feedback to developers, and ensures that individual units function correctly in
isolation before they are integrated into larger systems.

10. **Explain symbolic execution testing.**


- **Symbolic Execution Testing**: Symbolic execution is a testing technique that
analyzes a program's code paths with symbolic rather than concrete input values. It
involves executing the program symbolically, representing input values as symbols
rather than actual data. This allows for the exploration of different execution paths,
the generation of test cases, and the detection of errors or vulnerabilities in the code.
Symbolic execution can be used for various testing purposes, including path coverage
analysis, automated test case generation, and vulnerability detection.
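
As a toy illustration (not a real symbolic-execution engine such as KLEE), consider hand-deriving the path conditions of a small Python function and one concrete input per feasible path:

```python
def classify(x, y):
    # Treating x and y as symbols, the feasible paths and their conditions are:
    #   Path 1: x > 0 and y > 0      -> "both positive"
    #   Path 2: x > 0 and not y > 0  -> "x positive"
    #   Path 3: not x > 0            -> "x non-positive"
    if x > 0:
        if y > 0:
            return "both positive"
        return "x positive"
    return "x non-positive"

# Solving each path condition yields one concrete test case per path,
# which is how symbolic execution generates tests for path coverage.
tests = [(1, 1), (1, -1), (-1, 0)]
for x, y in tests:
    print(classify(x, y))
```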

11. **List out various factors affecting project plan.**


- Various factors affecting project plans include:
- Scope: The size and complexity of the project's requirements.
- Time: The project's deadlines and time constraints.
- Budget: The financial resources available for the project.
- Resources: Human, material, and technological resources required for the
project.
- Risks: Potential threats to the project's success and how they will be managed.
- Stakeholders: The people or organizations involved or affected by the project.
- Dependencies: The relationships between project tasks and activities.
- Quality: The standards and expectations for the project's deliverables.
- Communication: The channels and frequency of communication among team
members and stakeholders.

12. **What are the main components of a Deployment Pipeline in DevOps?**


- The main components of a Deployment Pipeline in DevOps include:
- **Source Control**: Version control system (e.g., Git) to manage source code.
- **Build**: Automated build process to compile, test, and package the
application.
- **Test**: Automated testing, including unit tests, integration tests, and
acceptance tests.
- **Deploy**: Automated deployment to development, testing, and production
environments.
- **Release**: Release management to control the rollout of new features or
updates.
- **Monitor**: Continuous monitoring of application performance and health.

13. **Write about DevOps.**


- **DevOps** is a software development approach that emphasizes collaboration,
communication, integration, and automation between software development and IT
operations teams. It aims to streamline the software delivery process, enabling
organizations to deliver high-quality software products faster and more efficiently.
DevOps practices include continuous integration (CI), continuous delivery (CD),
automated testing, infrastructure as code (IaC), and continuous monitoring. By
breaking down silos between development and operations, DevOps fosters a culture
of shared responsibility, rapid feedback, and continuous improvement, ultimately
resulting in increased agility, reliability, and innovation.

14. **Explain PERT.**


- **PERT (Program Evaluation and Review Technique)** is a project management
technique used to analyze and represent the tasks involved in completing a project,
especially those with uncertain durations. PERT charts depict the sequence of tasks,
their dependencies,
and the estimated time required for each task. PERT allows project managers to
visualize the critical path, identify potential bottlenecks, and estimate the overall
duration of the project. It is particularly useful for large, complex projects where task
durations may vary.
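
For example, PERT commonly combines three time estimates per task, optimistic (O), most likely (M), and pessimistic (P), into an expected duration E = (O + 4M + P) / 6. A task estimated at O = 2, M = 4, and P = 12 days therefore has E = (2 + 16 + 12) / 6 = 5 days.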

15. **Milestone and Deliverables**


- **Milestone**: A significant event or point in time during the project's lifecycle
that marks the completion of a major phase or achievement. Milestones are used to
track progress, set goals, and evaluate performance.
- **Deliverables**: Tangible or intangible items produced as part of a project that
are delivered to stakeholders. Deliverables can include documents, reports, software,
prototypes, or any other output that meets a specific requirement or objective.

16. **Publish-Subscribe Design Pattern**


- **Publish-Subscribe Design Pattern**: Also known as the Observer pattern, it is a
behavioral design pattern where an object, known as the subject, maintains a list of its
dependents, called observers, and notifies them of any state changes, usually by
calling one of their methods. This pattern establishes a one-to-many dependency
between objects, so that when the subject's state changes, all its observers are notified
and updated automatically. It is widely used to implement distributed event handling
systems, decoupling the sender and receiver.
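
A minimal topic-based Python sketch (the Broker and topic names are hypothetical); note that publishers and subscribers here know only the broker, not each other:

```python
from collections import defaultdict

class Broker:
    """Event channel: decouples publishers from subscribers via named topics."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)   # notify every subscriber of this topic

broker = Broker()
broker.subscribe("orders", lambda m: print(f"billing got: {m}"))
broker.subscribe("orders", lambda m: print(f"shipping got: {m}"))
broker.publish("orders", "order #42 placed")   # both subscribers are notified
```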

17. **Different Architectural Design**


- **Architectural design** refers to the high-level structure of a software system,
including the arrangement of its components, their interactions, and the principles
guiding their design. Different architectural designs include:
- **Layered Architecture**: Organizes the system into layers, where each layer
provides a set of services to the layer above it. Examples include OSI model and
three-tier architecture.
- **Client-Server Architecture**: Divides the system into clients and servers,
where clients make requests to servers for resources or services. Examples include
web applications and database servers.
- **Microservices Architecture**: Decomposes the system into a set of small,
independent services, each running in its own process and communicating via
lightweight mechanisms. Promotes modularity, scalability, and flexibility.
- **Service-Oriented Architecture (SOA)**: Organizes the system as a collection
of services that can be combined to fulfill business requirements. Services are loosely
coupled and interoperable, promoting reusability and flexibility.
- **Event-Driven Architecture**: Focuses on events and their handlers, where
components communicate by generating and consuming events. Enables
asynchronous communication and scalability.
- **Component-Based Architecture**: Breaks down the system into reusable and
replaceable components, each encapsulating a set of functionalities. Promotes
reusability, maintainability, and flexibility.
