Enterprise Slides
A Retrospective
What is Enterprise computing?
- Business intelligence
● Backward Compatibility
● Hot swapping of hardware and software
○ Parts can be changed without downtime
● Reliability, availability, and serviceability (RAS)
● Scalability
○ The degree to which the IT organization can add capacity
without disruption to normal business processes or without
incurring excessive overhead (nonproductive processing) is
largely determined by the scalability of the particular
computing platform.
Goal-Line Technology
Example: DNS
Why shift to Client-Server?
• Increased performance
• Implements middleware
● Awesome processing power
● Cost efficient
● Expandability
● Availability
Distributed Computing
Resource Heterogeneity:
Resource Management:
Use Cases:
Architecture:
Communication:
Scalability:
Use Cases:
● On-Demand Service
● Pay-per-Use Model
● Scalability
● Virtualization
● Service-Level Agreements (SLAs): Utility computing providers often
offer service-level agreements that define the quality of service,
performance guarantees, availability, and support for the provided
resources.
Utility Computing vs Cloud Computing
● Resource Model:
○ Utility Computing: focuses on providing computing resources, such as
processing power, storage, and network bandwidth, as a service. It follows a
pay-per-use model where users are billed based on their actual resource
consumption.
○ Cloud Computing: encompasses a broader range of services, including
infrastructure, platforms, and software, delivered over the internet. It provides
a variety of services beyond computing resources, such as databases,
development tools, and applications. Cloud computing can include utility
computing as one of its components.
● Emphasis:
○ Utility Computing: Utility computing emphasizes resource consumption and the
ability to pay for resources based on actual usage. It provides a cost-effective
and flexible approach to accessing and utilizing computing resources.
○ Cloud Computing: Cloud computing emphasizes the delivery of various services
beyond computing resources. It focuses on providing a scalable, on-demand,
and self-service environment for users to access and consume a wide range of
services.
In 1997 the W3C published XML (Extensible Markup Language), with which one
could also write well-formed HTML, thereby driving browsers to support XML
in addition to HTML.
The web introduced a universal mechanism called URI (Uniform Resource
Identifier) for naming and locating resources. This allowed for easy
identification of web or network locations. A well-known example of a URI is the
URL (Uniform Resource Locator), which specifies a specific web address. URIs
can also be used to name other resources or abstractions.
The combination of URI and XML as a standardized message format enabled the
development of formal web service standards. XML-RPC, for instance, emulates
remote procedure calls over HTTP using XML for data transfer. However, XML-RPC
is limited to simple data types like integers and strings.
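As a rough illustration of the idea (not taken from the slides), the sketch below makes an XML-RPC call using Python's standard xmlrpc.client module; the endpoint URL and remote method name are hypothetical.

```python
# Minimal XML-RPC client sketch; endpoint and method are hypothetical.
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://example.com/rpc")  # placeholder endpoint
# The call is serialized as an XML <methodCall> document and sent over HTTP POST;
# only simple types (ints, strings, booleans, arrays, structs) are supported.
total = proxy.order.getTotal(42)  # hypothetical remote method taking an order ID
print(total)
```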
To support more complex and nested types (such as OO structures), the SOAP
(Simple Object Access Protocol) protocol was developed.
With SOAP, applications gained the ability to call web services published by
other applications over the internet, allowing for interoperability between
different systems and platforms.
During the same period when web services standards were being developed
and utilized by companies like Fed-Ex, Amazon, and eBay for tasks like order
placement and shipment tracking, large enterprises faced challenges in
integrating their diverse range of applications and architectures within their
data centers.
This situation highlights the need for robust integration solutions that can
bridge the gaps between different systems and enable seamless communication
between them.
The emerging application server architecture provided a seamless way to access
legacy systems, making it an ideal choice for building integration layers. Software
vendors developed products known as "enterprise service buses" (ESBs) on top of
application servers, which abstracted integration complexities.
The term "AJAX" (Asynchronous JavaScript and XML) emerged to describe this
style of user interfaces. AJAX-based mashups empowered users to
personalize their web experience by combining services of their choice to
create customized pages.
The rise of AJAX-based mashups marked a shift in the way applications were
integrated, emphasizing the importance of client-side interactions and
dynamic web experiences.
IOS means that everything needed to use software applications is
available as a service on the internet, including the software itself,
the tools to develop it, and the platform to run it.
Adopting Cloud Computing in Business
● No; many enterprises exist without the intervention of an enterprise
architect. However, there are many situations when it is better to
architect an enterprise than to leave the creation and evolution of its
architecture to chance.
● Every enterprise will always have an architecture; it’s not optional. But
we do have a choice about whether we manage its evolution and how well we
manage it.
ENTERPRISE DATA AND PROCESSES
1. Data level integration: This level involves the direct transfer of data between
applications. It can be achieved through batch programs that periodically exchange data or
through real-time data exchange using database triggers. For example, an inventory
management system may update the stock levels in a sales system database through batch
programs or trigger-based updates.
2. API level integration: Applications can publish API (Application Programming Interface)
libraries that allow other applications to access their data or functionality. APIs define a set
of rules and protocols for communication and data exchange. By leveraging APIs,
applications can interact with each other seamlessly. For instance, a payment gateway API
allows an e-commerce application to process online transactions securely.
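A minimal sketch of API-level integration, assuming a hypothetical REST payment-gateway endpoint and API key (the URL, fields, and credentials are placeholders, not a real provider's API):

```python
# Sketch: an e-commerce app calling a (hypothetical) payment gateway's REST API.
import requests

response = requests.post(
    "https://api.example-payments.com/v1/charges",   # placeholder endpoint
    headers={"Authorization": "Bearer <API_KEY>"},   # placeholder credential
    json={"amount": 1999, "currency": "USD", "order_id": "A-1001"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # the gateway's confirmation payload
```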
In the city, EAI and SOA work hand in hand. EAI helps the different neighborhoods
connect and share information by building bridges and communication channels.
SOA helps them follow a set of rules and communicate using their unique abilities
or services. It's like EAI building the roads and bridges, and SOA providing the
language and rules for them to interact.
ENTERPRISE TECHNICAL ARCHITECTURE
While the security of data in the cloud is often well-managed from a technical and physical
safety standpoint, there are still important considerations regarding regulatory restrictions
and network security. It's crucial to ensure that applications have strong network security
measures in place before deploying them to the cloud. Cloud providers offer features like
VPCs to help address these security concerns.
Implementation architectures and quick-wins
It is more likely that new technology, including cloud computing, will be first
deployed in peripheral arenas far removed from core IT systems.
DATA CENTER INFRASTRUCTURE: COPING
WITH COMPLEXITY
We have discussed many of the architectural issues and decisions involved while
considering applications for cloud deployment. But why consider cloud
deployment in the first place?
Let us consider a typical large enterprise, such as a major North American bank,
for instance. If one were to do an inventory of such an enterprise’s IT infrastructure
and applications, what should one expect?
1. Virtualization
2. Automation
Chapter 3 completed!
Cloud Concepts
Cloud Computing
● IaaS
● PaaS
● SaaS
Infrastructure as a Service (IaaS)
Examples: EC2, Microsoft Azure Virtual Machines, and Google Cloud Platform (GCP)
Compute Engine.
By leveraging IaaS, organizations can benefit from the flexibility, scalability, and
cost efficiency of cloud computing while reducing the burden of managing and
maintaining physical infrastructure. They can focus more on their applications and
business logic rather than worrying about hardware provisioning and maintenance.
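As a sketch of how IaaS resources are provisioned programmatically, the snippet below launches a virtual machine with boto3 (the AWS SDK for Python); the AMI ID, tag values, and region are placeholders.

```python
# Provision one EC2 instance (IaaS compute) with boto3; values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Project", "Value": "demo"}],
    }],
)
print(result["Instances"][0]["InstanceId"])
```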
Platform as a Service (PaaS)
It is a cloud computing service model that provides users with a platform and
environment for developing, deploying, and managing applications over the internet.
In a PaaS model, the cloud service provider manages the underlying infrastructure,
including servers, storage, and networking, while users have control over the
applications and services they develop and deploy on the platform.
Examples: AWS Elastic Beanstalk, Microsoft Azure App Service, Google App Engine
By leveraging PaaS, developers and organizations can focus more on building and
deploying applications, rather than managing the infrastructure and platform
components. PaaS offers a streamlined development experience, faster time to
market, and scalability benefits, making it an attractive choice for building and
running modern applications in the cloud.
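For a sense of what a PaaS actually runs, here is a minimal Flask web application of the kind that could be handed to a platform such as Elastic Beanstalk or App Engine, which supplies the servers, runtime, and scaling around it (a generic sketch, not a platform-specific deployment):

```python
# A tiny web app; on a PaaS the provider's runtime serves and scales it.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a PaaS-hosted application"

if __name__ == "__main__":
    # Locally this starts a development server; a PaaS replaces this step.
    app.run(host="0.0.0.0", port=8080)
```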
Software as a Service (SaaS)
It is a cloud computing service model that delivers software applications over the
internet. In the SaaS model, the cloud service provider hosts and manages the
underlying infrastructure, including servers, storage, and networking, as well as
the software applications, which are made accessible to users over the internet.
SaaS offers ease of use, scalability, and the ability to leverage the latest software
capabilities, making it an attractive option for businesses and individuals seeking
flexible and cost-effective software solutions.
Cloud Types (Deployment Models)
● Public Cloud
● Private Cloud
● Hybrid Cloud
● Community Cloud
Public Cloud
In a public cloud deployment, cloud computing resources are owned and operated
by a cloud service provider (CSP) and made available to the general public or
multiple organizations.
These resources, such as virtual machines, storage, and applications, are hosted in
the provider's data centers. Users can access and utilize these resources over the
internet on a pay-as-you-go basis.
Examples of public cloud providers include Amazon Web Services (AWS), Microsoft
Azure, and Google Cloud Platform (GCP).
Private Cloud
The private cloud can be physically located on-premises, within the organization's
own data center, or it can be hosted externally in a provider's data center. Private
clouds offer more control, security, and customization options, making them
suitable for organizations with specific compliance or data privacy requirements.
Hybrid Cloud
A hybrid cloud deployment combines elements of both public and private clouds. It
allows organizations to utilize a mix of on-premises infrastructure, private cloud
resources, and public cloud services.
The hybrid cloud model provides flexibility and scalability by enabling data and
applications to move between the private and public cloud environments.
This allows organizations to take advantage of the public cloud's elasticity while
retaining control over sensitive data in the private cloud.
Community Cloud
Data breaches, unauthorized access, and data loss are potential risks that must be
addressed through robust security measures, including encryption, access
controls, and regular security audits.
Moving applications and data to the cloud may lead to vendor lock-in, where an
organization becomes heavily dependent on a particular cloud provider's
proprietary technologies, APIs, or infrastructure.
This can limit flexibility and make it difficult to migrate to another cloud provider or
bring services back in-house. To mitigate this risk, organizations should consider
interoperability standards, open-source solutions, and strategies for multi-cloud or
hybrid cloud environments.
Cost Management
Cloud computing offers cost savings through pay-as-you-go pricing models, but it
can also introduce challenges in managing and optimizing costs. Organizations
need to carefully monitor and control their cloud usage to avoid unexpected
expenses.
Virtualization plays a crucial role in enabling the cloud computing paradigm. It forms
the foundation for the cloud by providing the necessary infrastructure and
capabilities to deliver the key features and benefits of cloud computing.
Roles of virtualization in enabling the cloud:
Virtualization allows multiple users or tenants to securely share the same physical
infrastructure while maintaining isolation and ensuring resource allocation
according to their needs.
Elasticity and Scalability:
Virtualization allows for elastic and scalable resource allocation in the cloud.
Virtual machines can be rapidly provisioned or decommissioned based on
workload requirements, enabling the cloud to scale up or down dynamically.
This elasticity ensures efficient resource utilization and the ability to handle
varying workloads and spikes in demand.
Hardware Independence and Abstraction:
Virtual storage allows for the creation of virtual disks or storage volumes that can
be dynamically allocated to virtual machines. These virtual networking and
storage capabilities enhance the agility, flexibility, and scalability of cloud
services.
Virtualization forms the building blocks of cloud computing, providing the
necessary abstraction, resource pooling, scalability, and management capabilities
to deliver on-demand, scalable, and cost-effective cloud services. It enables the
efficient utilization of resources, improves agility, and simplifies the deployment
and management of cloud infrastructure and services.
Application availability, performance, security
and disaster recovery
Application Availability
Redundancy and Replication: Cloud providers typically offer redundancy and replication across
multiple data centers or availability zones. By distributing application instances across these
zones, service availability is enhanced, and the risk of single-point failures is mitigated.
Load Balancing: Cloud platforms provide load balancing capabilities to distribute incoming
traffic across multiple instances of an application. This ensures efficient resource utilization
and improved availability by directing traffic away from overloaded instances.
Auto Scaling: Cloud services often include auto scaling functionality, allowing the
infrastructure to automatically adjust the number of application instances based on demand.
Scaling up or down in response to workload fluctuations helps maintain availability and
performance.
Performance
Content Delivery Networks: CDNs can be leveraged in the cloud to distribute static
content closer to end-users, reducing latency and improving application
performance. CDNs cache content in multiple geographical locations, enabling
faster content delivery.
Security
Identity and Access Management (IAM): Cloud providers offer IAM services to manage
user access, authentication, and authorization. Administrators can define fine-grained
access policies, ensuring that only authorized users can access applications and data.
Network Security: Cloud networks can be configured with firewalls, network access
control lists (ACLs), and security groups to control inbound and outbound traffic and
protect against unauthorized access.
Encryption and Key Management: Cloud providers offer encryption services to protect
data at rest and in transit. Customers can manage encryption keys or leverage cloud
provider's key management solutions for enhanced security.
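As a hedged sketch of the IAM controls described above, the snippet below defines a least-privilege, read-only policy for a single (placeholder) S3 bucket and registers it with boto3:

```python
# Create a read-only IAM policy for one bucket; names/ARNs are placeholders.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",      # placeholder bucket
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

iam = boto3.client("iam")
iam.create_policy(PolicyName="ReadOnlyExampleBucket",
                  PolicyDocument=json.dumps(policy_document))
```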
Disaster Recovery
Data Replication and Backup: Cloud providers often offer built-in data replication
and backup capabilities, ensuring that data is replicated across multiple
geographic locations. Regular backups of application data can be scheduled and
automated.
Data Centers
These are the physical facilities that house the servers and other hardware
necessary to run cloud services. Cloud providers often have multiple data centers
located in various geographic regions to ensure redundancy and minimize latency
for users in different parts of the world.
Availability Zones
Availability Zones (AZs) enhance the fault tolerance and high availability of a
cloud provider's infrastructure and services. AZs are distinct and isolated data center locations
within a geographic region that are connected by high-speed and low-latency
networks.
These zones are designed to be physically separate from each other to reduce the
risk of simultaneous failures due to natural disasters, power outages, or other
potential disruptions.
Region
A geographical area where a cloud service provider has established one or more
data centers. Each region is designed to be an isolated and independent part of
the cloud provider's infrastructure, and it typically consists of multiple availability
zones (AZs) within that specific geographic area.
When users access content, the CDN caches copies of that content in edge
locations. Subsequent requests for the same content can be served directly from
the nearest edge location, reducing latency and relieving the load on the origin
servers.
Regional Edge Cache
On the other hand, regional edge caches cover larger areas, such as whole
countries or specific regions within a country. They are strategically placed to
enhance content delivery within designated regions, reducing latency for users in
those areas.
Architecture Diagram
Deploying and operating in the Cloud
Deploying
➢ Provision infrastructure from code (Infrastructure as Code (IaC) and AWS
CloudFormation)
➢ Tagging can be used with automation to provide more insights into what has been
provisioned
Well-Architected Framework Design
Principles
1. Operational Excellence
2. Security
3. Reliability
4. Performance Efficiency
5. Cost Optimization
6. Sustainability
Operational Excellence:
● Addresses the protection of data, systems, and assets while maintaining the
confidentiality, integrity, and availability of information.
● Advocates for implementing strong identity and access management,
encryption, network security, and best practices for secure application
development.
Reliability:
● Design for elasticity and scalability: Use cloud services like auto-scaling to
dynamically adjust resources based on demand, rather than provisioning
fixed capacity in advance.
● Take advantage of on-demand provisioning and pay-as-you-go pricing to
avoid over provisioning and reduce costs.
Test Systems at Production Scale:
A cloud computing model that provides virtualized computing resources over the
internet. In IaaS, cloud service providers offer a complete infrastructure, including
virtual machines, storage, networking, and other resources, as a service to users.
This allows organizations to access and use computing resources without the need
to purchase and maintain physical hardware.
Amazon EC2
A cloud computing model that provides a platform and environment for developers
to build, deploy, and manage applications without the complexity of managing
underlying infrastructure. PaaS abstracts away the complexities of hardware,
operating systems, and networking, allowing developers to focus solely on
developing and deploying applications.
Google App Engine
➢ Virtualizing servers involves converting one physical server into multiple virtual
machines (VMs).
➢ A virtual server is configured so that multiple users can share its processing
power.
➢ Azure=>Virtual machines
➢ Run code without thinking about servers or clusters. Only pay for what you use.
○ Continuous scaling
➢ Automatic scaling of resources during spikes and termination during the drop
➢ Vertical Scaling (Scale UP/DOWN): Here the existing server is upgraded to the
higher specification of memory, CPU, Storage etc.
➢ Horizontal Scaling (Scale IN/OUT): Here multiple servers or instances are created
with the same specification as the existing one. It is the more popular type of
scaling for applications or services in the deployment phase. Load is also
distributed among the multiple servers using a load balancer.
Key benefits of auto scaling include:
Performance Optimization: Auto scaling ensures that the application can handle
varying workloads, maintaining optimal performance and responsiveness.
Simplified Management: Auto scaling reduces the need for manual intervention and
monitoring, making application management more efficient.
Flexible and Agile: Auto scaling allows applications to adapt to changing conditions
and respond quickly to fluctuations in user traffic.
Here's how auto scaling works:
Scaling Policies: Based on predefined scaling policies, the auto scaling system analyzes
the monitoring data to determine whether the application needs to scale up or down.
Scaling Actions: When certain thresholds are met or exceeded, the auto scaling system
takes action. If the workload increases, it adds more instances (e.g., virtual machines,
containers) to handle the additional traffic. Conversely, if the workload decreases, it
removes instances to save on resources and costs.
Dynamic Resource Allocation: Auto scaling can dynamically adjust the number of
instances in response to changes in demand, ensuring that the application can scale up
during peak times and scale down during periods of low activity.
Load Balancing: Auto scaling often works in conjunction with load balancing. As new
instances are added or removed, load balancers distribute incoming traffic evenly among
the available instances, ensuring efficient resource utilization.
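A small sketch of the scaling-policy step described above, expressed with boto3: a target-tracking policy that keeps average CPU near 50% for a placeholder Auto Scaling group (the group name and target value are illustrative):

```python
# Attach a target-tracking scaling policy to an Auto Scaling group (placeholder name).
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # placeholder group
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                 # add/remove instances to hold ~50% CPU
    },
)
```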
Autoscaling Architecture
Storage Services - Object Storage
➢ In object storage, the data is broken into discrete units called objects and is kept in
a single repository, instead of being kept as files in folders or as blocks on servers.
➢ The objects stored have an ID, metadata, attributes, and the actual data.
➢ Access and editing permissions can easily be set across files and trees, so
security and version control are far easier to manage.
➢ Each block of data is given a unique identifier, which allows a storage system to
place the smaller (equal-sized) pieces of data wherever is most convenient.
➢ The more data you need to store, the better off you’ll be with block storage.
➢ Two requirements: Cost must be low and data recovery must be guaranteed.
➢ Traditionally stored in cheaper magnetic storage but retrieval may not be guaranteed due to
storage corruption.
➢ Benefits:
Uses SQL statements such as SELECT, INSERT, UPDATE, and DELETE (with WHERE clauses) to manage entries in the database.
SQL is widely used in various applications and industries for managing data in
relational databases. It is the standard language used for interacting with
relational database management systems (RDBMS) like MySQL, PostgreSQL,
Oracle, Microsoft SQL Server, and others. SQL's declarative nature allows users to
specify what data they want to retrieve or modify, and the database management
system handles the actual execution of the queries, making it a powerful and
versatile tool for data manipulation and management.
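A minimal, self-contained SQL example using Python's built-in SQLite engine, showing the SELECT/INSERT/UPDATE/WHERE statements mentioned above (the table and data are made up for illustration):

```python
# Relational (SQL) workflow with the standard-library SQLite engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("INSERT INTO orders (customer, total) VALUES (?, ?)", ("Alice", 120.50))
conn.execute("UPDATE orders SET total = 99.99 WHERE customer = ?", ("Alice",))

for row in conn.execute("SELECT id, customer, total FROM orders WHERE total > 50"):
    print(row)

conn.close()
```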
NoSQL
NoSQL (Not Only SQL) is a term used to describe a class of databases that differ from
traditional relational databases (SQL databases) in their data model and storage
mechanisms. Unlike SQL databases, NoSQL databases are designed to handle large
volumes of unstructured or semi-structured data, providing more flexible and scalable
solutions for specific use cases.
NoSQL databases are popular choices for certain use cases, such as web applications, real-
time analytics, IoT data management, and social networks. They offer greater flexibility and
scalability compared to traditional SQL databases, making them suitable for modern
applications with large volumes of diverse and rapidly changing data. However, it's
essential to choose the right type of NoSQL database based on the specific requirements of
the application and the data model it needs to support.
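As an illustrative sketch of the document-oriented NoSQL style, assuming a MongoDB server running locally and the pymongo driver installed (database, collection, and documents are placeholders):

```python
# Document-store (NoSQL) workflow; assumes MongoDB on localhost.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
reviews = client["shop"]["reviews"]           # placeholder database/collection

# Documents are schemaless: fields can differ between records.
reviews.insert_one({"product": "laptop", "rating": 5, "tags": ["fast", "light"]})
reviews.insert_one({"product": "mouse", "rating": 3})

print(reviews.find_one({"product": "laptop"}))
```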
NoSQL examples
Graph databases are valuable for applications that require real-time analysis and exploration
of complex relationships. Some common use cases for graph databases include:
● Social Networks: Managing user profiles, friend connections, and social interactions.
● Recommendations: Providing personalized recommendations based on user behavior and
preferences.
● Fraud Detection: Identifying suspicious patterns and networks of fraudulent activity.
● Knowledge Graphs: Building knowledge bases to represent and reason about vast
amounts of information.
● Network and IT Operations: Visualizing and analyzing network topologies and
dependencies.
Database migration refers to the process of transferring data and its underlying
structure from one database system to another. This could involve moving data
between different database management systems (DBMS), upgrading to a newer
version of the same DBMS, or consolidating data from multiple databases into a
single database.
Database migration requires careful planning, testing, and execution to minimize the
risk of data loss or downtime. It is essential to have a backup strategy in place and
involve database administrators and IT teams with experience in handling such
migrations. Automated migration tools and scripts can also help simplify the process
and ensure accuracy.
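A toy sketch of a data-level migration, copying one table between two SQLite databases; real migrations also handle schema mapping, validation, and backups, and the file and table names here are placeholders:

```python
# Copy a table from a source database to a target database (placeholder names).
import sqlite3

src = sqlite3.connect("legacy.db")       # assumed to contain a "customers" table
dst = sqlite3.connect("new_system.db")   # placeholder target database

dst.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")
rows = src.execute("SELECT id, name FROM customers").fetchall()
dst.executemany("INSERT OR REPLACE INTO customers (id, name) VALUES (?, ?)", rows)
dst.commit()

src.close()
dst.close()
```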
Chapter 5 completed
Networking & Security
Cloud Network
A CDN doesn’t host content; rather, it caches content for faster access times.
A properly configured CDN may also help protect websites against some common
malicious attacks, such as Distributed Denial of Service (DDOS) attacks.
While a CDN does not host content and can’t replace the need for proper web
hosting, it does help cache content at the network edge, which improves website
performance. Many websites struggle to have their performance needs met by
traditional hosting services, which is why they opt for CDNs.
○ Save bandwidth
○ Speed responsiveness
➢ The naming system for the computers, services or other resources on the internet or
a private network.
➢ Hierarchical distributed database that stores IP addresses and other data and
allows them to be looked up by name.
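A small illustration of a DNS lookup from Python's standard library, resolving a host name to its IP addresses (example.com is just a placeholder host):

```python
# Resolve a host name via DNS using the standard library.
import socket

infos = socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP)
for family, _, _, _, sockaddr in infos:
    print(family.name, sockaddr[0])   # e.g. AF_INET 93.184.216.34
```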
The primary goal of a cloud load balancer is to evenly distribute incoming traffic
among the backend servers, preventing any individual server from being
overwhelmed, and ensuring that the workload is distributed efficiently. This helps
avoid server overloads and bottlenecks, leading to a smoother user experience and
reduced response times.
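To make the even-distribution idea concrete, here is a toy round-robin dispatcher in plain Python; real cloud load balancers add health checks, session affinity, and more (the backend addresses are placeholders):

```python
# Toy round-robin load balancing over a fixed set of backend servers.
import itertools

backends = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # placeholder addresses
next_backend = itertools.cycle(backends)

for request_id in range(6):
    server = next(next_backend)
    print(f"request {request_id} -> {server}")        # traffic spread evenly
```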
Cloud security and compliance concepts
Cloud security refers to the set of policies, technologies, and practices designed to protect data,
applications, and infrastructure hosted in cloud computing environments.
➢ Distributed Denial of Service (DDoS) Protection Google Cloud Armor, AWS Shield, Azure
DDoS Protection
➢ Virtual Private Cloud (VPC) is the logical division of a CSP’s public cloud to support private
cloud computing. It provides network isolation with a range of IP addresses divided into subnets. It
controls the network traffic to cloud infrastructure in the VPC, thus protecting against unauthorized
access.
➢ Access Control Lists (ACLs) control access settings for resources on the cloud. Permissions
for access control include read/write access and the user/group of users who can access the
resource.
➢ Network Security Groups are available for different services and infrastructure that specify
the protocol of access, IP address of source/destination and open ports for access.
➢ Firewall:- Centrally configure and manage firewall rules
➢ Identity and Access Management (IAM) allows access management using policies that
ensure that the right users or user groups have access to the appropriate resources.
○ Includes:
● IAM Policy: Defines which resources can be accessed and the level of access.
● IAM Role: Used to communicate with and control resources. An explicit “DENY” has the highest priority.
● Regulatory Compliance
These preset controls protect your sensitive data from dangerous public exposure.
Essential areas of cloud governance include:
-Asset management involves organizations taking stock of all cloud services and
data contained, then defining all configurations to prevent vulnerability.
-Financial controls address a process for authorizing cloud service purchases and
balancing cloud usage with cost-efficiency
-Utilize role-based access and group level privileges, granting access based on
business needs and the least privilege principle.
The complexity and dispersed nature of the cloud make monitoring and
logging all activity extremely important. Capturing the who, what, when,
where, and how of events keeps organizations audit-ready and is the
backbone of compliance verification. When monitoring and logging data in
your cloud environment, it’s essential to:
Cloud adoption has accelerated in the past year as organizations scrambled to support
a remote workforce. Despite this rapid adoption and growth, companies often
misunderstand a key cloud concept: the shared responsibility model (SRM).
Many business leaders still ask, “Is the cloud secure?” This is the wrong question. A
more appropriate question would be, “Are we, as a security team and organization,
securing our share of the cloud?” The overwhelming majority of cloud data
breaches/leaks are due to the customer, with Gartner predicting that through 2025,
99% of cloud security failures will be the customer's fault. For this reason, it is
imperative that all security practitioners understand their responsibilities.
The shared responsibility model delineates what the cloud customer is responsible
for, and what the cloud service provider (CSP) is responsible for.
The CSP is responsible for security “of” the cloud—think physical facilities, utilities,
cables, hardware, etc.
The customer is responsible for security “in” the cloud—meaning network controls,
identity and access management, application configurations, and data.
At a basic level, the NIST Definition of Cloud Computing defines three primary cloud
service models:
-Infrastructure as a service (IaaS): Under the IaaS model, the CSP is responsible for the
physical data center, physical networking, and physical servers/hosting.
-Platform as a service (PaaS): In a PaaS model, the CSP takes on more responsibility for
things such as patching (which customers are historically terrible at and serves as a
primary pathway to security incidents) and maintaining operating systems.
-Software as a service (SaaS): In SaaS, the customer can only make changes within an
application’s configuration settings, with control of everything else being left to the
CSP (think of Gmail as a basic example).
CloudWatch
➢ It is a metrics repository
➢ Alarms can be set for certain metric criteria to trigger a notification or even
make changes to the resources if a threshold is crossed.
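A hedged sketch of such an alarm using boto3: it fires when average EC2 CPU stays above 80% for two 5-minute periods; the instance ID and SNS topic ARN are placeholders.

```python
# Create a CloudWatch alarm on EC2 CPU utilization; identifiers are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # 5-minute evaluation windows
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```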
Cloud Formation
➢ Infrastructure as Code
➢ Developers can deploy, update the resources in simple abstract ways to reduce
complexity.
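A minimal Infrastructure-as-Code sketch: a tiny CloudFormation template (one S3 bucket) deployed via boto3; the stack name is a placeholder.

```python
# Deploy a one-resource CloudFormation template from code.
import boto3

template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(StackName="demo-stack", TemplateBody=template)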
➢ CSP provides a platform to securely store, search, analyze, and alert on all of the
customer’s log data and events
➢ Service Logs: Monitoring of services provided by various CSPs. May include logs
of object storage, load balancers or CDNs.
Personal Health Dashboard
➢ Provides alerts and guidance for CSP events that might affect your environment
➢ Configure customizable cloud alerts for active and upcoming service issues
➢ Businesses may use cloud messaging services to send event-triggered messages, like
maintenance timings, promotional messages or event messages to their customers.
● Understanding the economics of cloud computing can help businesses make informed
decisions and develop cost-effective cloud strategies aligned with their overall business
objectives.
● Cloud Computing Economics is based on the pay-as-you-go method.
● Economies of Scale: Cloud providers benefit from economies of scale due to their vast
infrastructure and customer base. These providers can spread their infrastructure costs across
many users, potentially offering cost advantages to their customers.
● When exploring cloud economics, a company can follow the procedure that includes:
○ Benchmarking: Calculate cost of operating current data centre including capital cost.
○ Cloud costs: Estimate the cost of cloud infrastructure (private, public or hybrid).
Receive quotations from different CSPs and compare the integration cost, security and
compliance points.
● Based on these costs, ROI and TCO are calculated that are used to make decisions.
Cost benefits of Cloud Computing
○ Converts fixed costs (Capital Expense) into variable costs (Variable Expense)
● The financial considerations and cost implications associated with implementing and operating a
private cloud infrastructure within an organization's own data center or on-premises environment.
● Private clouds offer several economic factors that organizations need to consider when deciding to
adopt this model. Here are some key aspects of the economics of private cloud:
5. Opportunity Cost: While private clouds offer greater control and security,
organizations must weigh the opportunity cost of not leveraging the agility and
scalability benefits of public cloud services. Public clouds allow organizations to
focus more on their core business without the burden of managing infrastructure.
➢ Development and testing servers require a different environment than a production environment.
Also, these development and testing servers become obsolete after release and waste resources.
➢ Virtualization can help in this case to meet the growing demand of servers, but the time for
provisioning and configuring such servers may bottleneck projects with faster development cycles. For
this reason, the public cloud is a better option to provision and release such infrastructure on demand.
➢ Stress testing during the initial stages is also not possible due to the lack of a proper environment,
which is solved by the public cloud.
➢ The public cloud also enables globally distributed teams to work on a project, which can boost
team morale and bring in skills from different parts of the world. It is therefore advantageous to
host centrally located build servers in the public cloud to provide low-latency connections to the
globally distributed team.
➢ Likewise, PaaS provides faster and easier deployment for software and provides better scalability as
well.
Economies of Scale: Public Vs. Private
Clouds
➢ Public cloud providers benefit from purchasing hardware, storage and network capacity at
large scale, which is cheaper for a public cloud than for a private cloud.
➢ Public cloud providers can amortize the cost of server administration over a large
number of servers by employing automation.
➢ Public cloud providers locate their data centres where power costs are low or
where power is generated.
➢ Most popular public cloud vendors have pre-established data centres and employ
cloud services using those resources at a high level. (Eg. Google, Amazon, Microsoft)
Q) An enterprise plans to host its MIS in the cloud.
b) If the pricing model of the virtual server is changed to a full year service plan
with the commitment of NPR 67000 and with full payment upfront. What will be the
percentage change in the cost?
Solution: As an approximation, the daily-to-monthly conversion is done by multiplying the
daily cost by 30.
a) On Demand
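Since the on-demand daily rate from part (a) is not reproduced here, the sketch below only shows the shape of the calculation with a hypothetical daily rate; the 30-day approximation follows the solution above.

```python
# Shape of the percentage-change calculation; the daily rate is hypothetical.
daily_rate = 200.0                        # hypothetical NPR per day on demand
on_demand_yearly = daily_rate * 30 * 12   # approximate yearly on-demand cost
committed_yearly = 67000.0                # full-year plan, paid upfront

pct_change = (committed_yearly - on_demand_yearly) / on_demand_yearly * 100
print(f"Cost change: {pct_change:.1f}%")
```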
➢ Detecting anomalies and other rare events, such as credit card and insurance
claim fraud, illegal duplication of mobile phone SIMs, and even terrorist activities.
➢ Helps organizations analyze historical and current data, so they can quickly uncover
actionable insights for making strategic decisions.
➢ Processing large data sets across multiple sources and presenting findings in visual
formats that are easy to understand and share
Key components of Business Intelligence include:
1. Data Collection: BI starts with collecting data from various sources such as databases,
spreadsheets, web applications, cloud services, and more. Data can be structured (e.g.,
tables, databases) or unstructured (e.g., text, images).
2. Data Integration: The collected data often comes from disparate sources and needs to be
integrated into a central repository, called a data warehouse, to provide a unified view of the
organization's data.
3. Data Analysis: Once the data is integrated, it is analyzed using various statistical,
mathematical, and analytical techniques to identify patterns, trends, correlations, and
insights.
4. Reporting and Visualization: BI tools offer reporting and visualization capabilities that
transform complex data into easy-to-understand charts, graphs, dashboards, and reports,
making it simpler for business users to interpret the information.
5. Business Performance Management: BI helps monitor key performance indicators (KPIs)
and track progress towards business goals, enabling organizations to make data-driven
decisions to improve performance.
6. Data Mining: This involves identifying hidden patterns or relationships within large datasets
to reveal valuable information for business purposes.
7. Predictive Analytics: BI can also leverage predictive modeling techniques to forecast future
trends, anticipate customer behavior, or predict outcomes based on historical data.
8. Real-time BI: Some BI systems allow real-time or near real-time data processing, enabling
businesses to react swiftly to changes and make decisions based on the most current
information.
Benefits of Business Intelligence:
Text and Data Mining (TDM) refers to the process of automatically extracting useful
information, patterns, or knowledge from large collections of textual and
structured data. It involves using computational techniques, algorithms, and
machine learning to analyze vast amounts of data to discover hidden insights,
trends, and relationships.
Text Mining
Text mining, also known as text analytics or natural language processing (NLP), involves
extracting meaningful information from unstructured text data. Unstructured text data
can include documents, articles, social media posts, emails, customer reviews, and more.
Free-Form Text: This includes documents, articles, books, emails, social media posts, and any
other type of text that people write naturally without adhering to a structured format.
Narratives: Storytelling, conversations, narratives, and personal accounts that don't follow a
structured template fall under unstructured textual data.
Notes and Comments: Unstructured text often includes handwritten notes, annotations,
comments on documents, and memos that capture thoughts and ideas.
Web Pages and Blogs: Web pages, blog posts, and online content that are authored in a way
that doesn't conform to a structured schema.
Social Media Data: Social media posts, comments, tweets, and messages that reflect the
informal and diverse nature of human communication.
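A tiny text-mining sketch on made-up review text: tokenize the unstructured text and count term frequencies, a typical first step before sentiment analysis or topic discovery.

```python
# Tokenize free-form text and count term frequencies (illustrative data).
import re
from collections import Counter

reviews = [
    "Great battery life, great screen",
    "Battery died quickly, poor build quality",
]

tokens = []
for text in reviews:
    tokens.extend(re.findall(r"[a-z']+", text.lower()))

print(Counter(tokens).most_common(5))
```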
Data Mining
Data mining involves the extraction of useful patterns and insights from structured data sets,
typically stored in databases or spreadsheets. The process includes applying statistical and
machine learning algorithms to explore data, discover patterns, and make predictions.
Common data mining techniques include:
Text and data mining have a wide range of applications across various industries:
● Finance: Analyzing financial news and reports to predict market trends and
stock performance.
● Marketing: Analyzing customer reviews and feedback to understand
customer preferences and sentiment.
● Academia: Mining scholarly articles to identify research trends and
connections between topics.
● Social Media Analysis: Extracting insights from social media data to
understand customer behavior and brand perception.
Data Mining use cases:
Market Basket Analysis: Retailers analyze customer purchase patterns to identify products
frequently bought together, enabling them to optimize product placement and promotions.
Credit Scoring: Financial institutions use data mining to assess credit risk by analyzing customer
financial data and credit history.
Fraud Detection: Banks and credit card companies identify unusual transactions or patterns to
detect fraudulent activities and prevent financial losses.
Healthcare Analytics: Analyzing patient records and medical data helps identify disease
patterns, treatment effectiveness, and population health trends.
Recommendation Systems: Online platforms use data mining to suggest products, services, or
content based on user preferences and behaviors.
Sentiment Analysis: Social media and customer reviews are analyzed to gauge public sentiment
and opinion about products, brands, or events.
Text and database search
Searching structured data using text search instead of SQL might be appropriate in
certain situations, depending on the context and requirements of the task. Here are
some scenarios where text search could be preferred over SQL for searching
structured data:
However, it's important to note that text search is not a complete replacement for SQL,
especially for tasks that involve querying structured data based on precise conditions,
performing aggregations, or dealing with complex joins across multiple tables. SQL is purpose-
built for interacting with relational databases, performing set operations, filtering, sorting, and
other structured data manipulations.
Chapter 8 completed
Enterprise cloud
computing ecosystem
and roadmap
Public cloud providers
The major public cloud service providers are: Amazon Web Services, Google Cloud
Platform and Microsoft Azure.
The common feature is the ability of users to pay only for the resources they
actually consume, at a very fine granularity. From a business perspective this is
the essence of classifying any offering as a public cloud service.
Amazon Web Services (AWS)
➢ Currently has 102 availability zones, which also makes it the cloud service
provider with the most global locations among the discussed group.
➢ High profile companies such as Netflix, Unilever, and Airbnb use AWS.
➢ It offers Platform as a Service (PaaS) with the use of AWS Elastic Beanstalk.
➢ AWS has its primary focus on the public cloud rather than private or hybrid
cloud models.
Microsoft Azure
➢ Has a reputed customer base that includes companies like Apple, Honeywell, HP and many
more.
➢ It offers PaaS under the alias App Service and Cloud Services.
➢ Microsoft Azure’s focus is divided among public and private cloud with
enterprise customers being most attracted to the services.
Google Cloud Platform (GCP)
➢ GCP, offered by Google, is a suite of cloud services that internally use the same
resources used by YouTube, the Google Search Engine and other Google products.
➢ It has the smallest spread of global locations and, with just over 60 services, offers the
fewest services among the discussed options.
➢ GCP’s client base includes companies like PayPal, Dominos, 20th Century Fox.
➢ Both AWS and Microsoft Azure provide pay-per-minute billing; GCP, however, allows
the customer to opt for pay-per-second billing, which means customers can save more with
GCP than with AWS or Microsoft Azure.
Selection Considerations
➢ Considering the establishment of the three service providers, AWS is the oldest and the most
experienced.
➢ However, GCP has the best growth rate amongst the three.
➢ With over 200 services, AWS offers the most, and with just over 60 services, GCP offers the
fewest.
➢ When it comes to open-source integration and on-premise systems, Microsoft Azure has the most
advantage.
➢ Considering the brands that already use the services, all platforms are considered equal.
Cloud management platforms and tools
➢ These platforms are themselves deployed on the cloud, either by agreement with
partner service providers or some smaller hosting providers.
➢ Cloud management tools also offer dynamic monitoring and load balancing.
➢ Nowadays these tools are deeply integrated within CSP architecture for example:-
Amazon has Elastic load balancing, CloudWatch and Autoscaling
While CSPs offer a robust set of native tools, there are scenarios where third-
party Cloud Management Platforms (CMP) can still be beneficial:
The AppScale open source project (also developed at the University of California,
Santa Barbara) mimics the GAE platform through distributed deployment of the
GAE development web-server on a cluster of virtual machines. Using AppScale, a
GAE-like PaaS environment can be implemented in a scalable manner on an IaaS
platform, such as EC2 or Eucalyptus.
Future of enterprise cloud computing
As elucidated in the popular book The Big Switch, the evolution of industrial electricity use from
private generating plants to a public electricity grid can serve as an illuminating analogy for the
possible evolution of enterprise IT and cloud computing. In this analogy, privately run enterprise
data centers are analogous to private electric plants, whereas the public electricity grid illustrates
a possible model towards which the public clouds of today may evolve.
As another analogy, let us consider data communications: In the initial days of digital networks,
corporations owned their own data communication lines. Today all data communication lines are
owned by operators who lease them out, not only to end-users, but also to each other. The physical
resource (bandwidth) has become a commodity, and it is only in the mix of value added services
where higher profits are to be made.
A Forrester report mentions that 80% of future computing experiences will be accomplished
by light computing modes (in other words: smartphones, displays, browser-based laptops, etc.), while
20% will still require heavy compute resources for graphics, AI, and other workloads.
According to the report, some future trends are:
●Increased application access (Mobile apps): Cloud-based applications are optimized for mobile platforms,
enhancing user experiences on smartphones and tablets, and promoting collaborative workflows through real-
time document sharing and teamwork. This trend also aligns with hybrid work models, offering flexibility
between home and office settings. While providing benefits like reduced dependency on local installations and
scalability, challenges such as security, network reliability, user training, and data privacy must be addressed to
ensure successful implementation and optimal user experiences.
●Moving away from traditional desktops: The future trend of enterprise cloud computing involves moving
away from traditional desktop setups towards more dynamic and flexible approaches. This shift includes the
adoption of Virtual Desktop Infrastructure (VDI) and Desktop as a Service (DaaS), which enable remote access
to desktop environments from anywhere. Cloud-based application streaming and productivity suites further
reduce the dependency on locally installed software. Additionally, containerization is being explored for desktop
applications to ensure consistent experiences across devices. This transition requires a zero-trust security
model and considerations for network reliability, user experience, data privacy, compliance, and employee
training. Overall, this trend reflects the evolving nature of work, emphasizing remote accessibility, centralized
management, and streamlined user experiences.
●Commoditization of the data center: The future trend of enterprise cloud computing involves the
commoditization of the data center, wherein data center resources and services transition from complex,
specialized environments to standardized and easily accessible commodities. This shift is driven by
advancements in cloud computing and virtualization technologies, enabling businesses to obtain computing
resources on-demand without substantial upfront investments. Embracing data center commoditization
empowers businesses to optimize IT infrastructure, innovate more efficiently, and respond to changing
demands while considering the balance between benefits and challenges.
●Inter-operating Virtualized Data Centers: This concept envisions a seamless integration of multiple
data center environments through virtualization, creating a cohesive system that shares resources across
locations. It enables efficient workload distribution, scalable resource allocation, and application redundancy
while facilitating hybrid cloud strategies. By interconnecting data centers, organizations can achieve
flexibility, agility, and optimal resource utilization. However, managing complexity and ensuring security
remain critical considerations in realizing the full potential of this trend.
●Convergence of private and public clouds: This trend involves seamlessly integrating privately owned
cloud infrastructure with resources from external cloud vendors, resulting in a hybrid cloud environment. By
strategically distributing workloads between private and public clouds, enterprises can optimize resource
utilization, ensure data compliance, and enhance disaster recovery capabilities. This trend addresses the
diverse needs of different workloads while mitigating vendor lock-in risks, ushering in a new era of cloud
strategy that capitalizes on the strengths of both deployment models.
Chapter 9 completed
Enterprise Computing
Monolith
Single Codebase: All components of the application are developed, maintained, and deployed within a
single codebase.
Tight Integration: Components within the monolith are closely coupled and interact through direct
function calls or method invocations.
Single Deployment Unit: The entire application is deployed as a single package, simplifying deployment
but potentially leading to longer deployment times.
Shared Database: Monolith applications often share a single database, which can lead to challenges in
scaling and managing data access.
Scalability Challenges: Scaling a monolith can be challenging, as the entire application needs to be
scaled even if only specific components require more resources.
Development and Testing: Developers work on the same codebase, making it easier to collaborate.
However, testing can become complex as changes in one part of the application can impact others.
Technology Stack: Monolith applications typically use a single technology stack or framework
throughout.
Maintenance: Maintenance and updates can be complex, as changes might require a full rebuild and
redeployment of the entire application.
Advantages of Monolith Architecture:
Testing Complexity: Changes in one part of the application can impact others,
making testing complex and potentially error-prone.
Microservice Architecture
Decomposition: The application is broken down into smaller, self-contained microservices, each
responsible for a specific feature or functionality.
Loose Coupling: Microservices communicate through APIs, allowing them to evolve independently
without affecting other services.
Scalability: Each microservice can be scaled individually based on demand, allowing efficient resource
utilization.
Resilience: If one microservice fails, the rest of the application can continue to function, reducing the
impact of failures.