Mastering AWS Serverless: Architecting, developing, and deploying serverless solutions on AWS (English Edition)
Ebook · 992 pages · 3 hours


About this ebook

Serverless computing is relatively new compared to server-based designs. Amazon Web Services launched its serverless computing offering by introducing AWS Lambda. Lambda has introduced a revolution in cloud computing, where servers could be excluded from architectures, and events could be used to trigger other resources. The AWS serverless services have allowed developers, startups, and large enterprises to focus more on developing and creating features and spend less time managing and securing servers.

This book covers key concepts like serverless architecture and AWS services. You will learn to create event-driven apps, launch websites, and build APIs with hands-on exercises. The book explores storage options and data processing, including serverless machine learning, and presents best practices for architecture, security, and cost optimization. It also covers advanced topics like AWS SAM and Lambda layers for complex workflows. Finally, it offers guidance on creating new serverless apps and migrating existing ones.

The knowledge gained from this book will help you create a serverless website, application programming interface, and backend. In addition, the information covered in the book will help you process and analyze data using a serverless design.
Language: English
Release date: Apr 29, 2024
ISBN: 9789355517791


    Book preview

    Mastering AWS Serverless - Miguel A. Calles

    CHAPTER 1

    Introduction to AWS Serverless

    Introduction

    This book aims to help you master serverless on Amazon Web Services (AWS).¹ As a master, you will have the skill, knowledge, and proficiency to build serverless applications in the AWS cloud. That mastery will put you on the road to becoming a serverless expert.

    This chapter will lay the foundation of AWS serverless. We will start with an overview of cloud computing and how that enabled serverless computing. After that, we will learn about the different services used in serverless architectures and designs.

    Structure

    We will cover the following topics in this chapter:

    •Introduction to cloud computing

    •Introduction to serverless computing

    •Introduction to serverless storage

    •Introduction to serverless services

    •Reviewing AWS serverless services

    Objectives

    By the end of the chapter, you will understand the AWS serverless services and be able to apply that knowledge when we discuss a serverless application built on AWS.

    Introduction to cloud computing

    In the early days of computing, organizations hosted equipment in buildings they managed. An organization would buy servers, data storage, routers, switches, and racks. They dedicated a room (or sometimes an entire building) to housing multiple equipment racks. Cables connected the equipment to switches and routers that provided network connectivity. Some networks were local networks that interconnected a select number of devices. Other networks provided connectivity to the organization’s intranet so its staff could connect to its equipment and network services. Some networks provided Internet connectivity where Internet-connected devices could connect. These network configurations enabled on-premises hosting and computing in the organization’s data center. See Figure 1.1 for an example of an organization’s on-premises infrastructure:

    Figure 1.1: An example infrastructure diagram showing on-premises hosting and computing

    As Internet speeds and connectivity improved, service providers created data centers to provide hosting and computing as a service. For example, an organization would pay a service fee for a web server to be made available. An organization could perform a make-or-buy analysis. It would host the web server itself when that made sense from business, financial, and legal standpoints. Otherwise, it would rent a web server to save on costs (for example, operational, labor, and maintenance costs) and possibly offload some legal, compliance, and security implications to the service provider.

    The organization’s infrastructure diagrams became simpler as it rented hosting from service providers. Some diagrams started showing a cloud to abstract all the servers and services that were no longer on-premises but provided by a service provider. See Figure 1.2 for an example of an organization’s infrastructure that uses a service provider. In this example figure, a computer that accessed servers using internal networking now accesses them from the service provider, which is depicted as a cloud:

    Figure 1.2: An example infrastructure diagram that shows a service provider as a cloud

    The term cloud computing was eventually accepted. We will refer to cloud computing as a set of services and resources that are provided by a third-party service provider, accessed via the Internet, and not physically hosted inside an organization’s building. This working definition is essential because some serverless solutions allow an organization to perform serverless computing using on-premises hardware. We are focusing on AWS serverless, the set of serverless services that AWS, a cloud computing provider (or cloud provider for short), offers.

    Cloud providers became popular because their prices became more cost-effective, and service offerings were more powerful. These improvements were fueled by more affordable equipment, higher capacity data storage, faster Internet speeds, and higher bandwidth. Furthermore, companies created new technologies that allowed multiple servers to run within one physical server. These innovations enabled new cloud providers to emerge.

    AWS launched its services to the public in 2006 by offering Internet-based services that customers could configure using a web-based application.² A customer could sign up for an AWS account, use the AWS web-based console (AWS console for short), create a server, configure the networking, set up the Domain Name System (DNS), and get an Internet Protocol (IP) address. A customer could have a live web server within minutes, for example. Since its launch, AWS has provided various cloud-based products: cloud computing (that is, servers), cloud storage (that is, file server equivalents), databases, networking, and many more.

    As previously mentioned, virtual server technologies enabled AWS and other service providers to provide cloud-based services to their customers. AWS created data centers in various geographical regions within countries and around the world. They configured their networking and equipment to allow multiple customers to create servers and save their data while remaining logically separated. No customer can access another customer’s resources: the software separates the data and prevents unauthorized access even though customers might be physically hosted on the same equipment. See Figure 1.3 for an example of logical separation:

    Figure 1.3: An example of logical separation that a cloud provider might implement

    With virtualization technologies, AWS can support several customers with the equipment in their data centers. Furthermore, it allows customers to choose:

    •The number, speed, and type of central processing unit (CPU) processors

    •The random access memory (RAM) size

    •The size of the disk space

    •The number, speed, and type of graphics processing units (GPUs)

    •Other specifications to meet their needs

    How the CPU, RAM, and disk space are provisioned on the physical hardware is handled for us.

    AWS follows the shared responsibility model to ensure our data is stored and accessed safely.³ AWS is responsible for Security of the Cloud, and the customer is responsible for Security in the Cloud. This means that AWS ensures the cloud services and infrastructure are secure: any hardware, software, data storage, networking, buildings, staff, and operations involved in providing cloud services are secured and maintained. It also means that securing how the customer uses the AWS cloud is the customer’s responsibility. As with any software or service, there are best practices for using them, and the customer must know how to implement them and be willing to do so.

    Virtualization technologies then advanced to another level. Initially, the technology provided virtual networks, servers, and disk space. This allowed for the introduction of containers. A container is like a virtual server but extremely lightweight. The container’s operating system has a minimal set of libraries and dependencies and no graphical display. The container has no dedicated resources; it shares the resources of the server running the container orchestration software. A virtual server needs a virtual CPU, RAM, disk drive, network interfaces, and other resources, whereas a container borrows resources from the physical server when it needs them.

    For example, a physical server may have 4 CPUs, 16 gigabytes (GB) of RAM, and one terabyte (TB) of disk space. That server can host four virtual servers that each have 1 CPU, 4 GB of RAM, and 250 GB of disk space. We cannot add a fifth virtual server because we are out of resources. Even if a virtual server only uses 2 GB of RAM, it was provisioned with 4 GB, and the unused portion is unusable by another virtual server. See Figure 1.4 for a drawing illustrating the four virtual servers:

    Figure 1.4: An example of four virtual servers running on a physical server

    Containers use resources differently. Suppose we create four containers on the physical server, and each container only uses 0.5 CPUs and 1 GB of RAM. The containers use an image that contains the operating system, and they all share the same image. Any runtime files are stored in a temporary container disk drive that grows as needed and is deleted when the container is stopped. A container can use a volume to store files on the drive permanently. These four containers share resources and use a total of 2 CPUs, 4 GB of RAM, the size of the shared container image, and the size of any new files. As long as there are available resources on the physical server, we can continue creating containers. See Figure 1.5 for a drawing illustrating the containers:

    Figure 1.5: An example of containers running on a physical server
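    The provisioning difference between the two examples can be sketched with simple resource accounting (a toy model using the capacities from the text; the `fits_vms` helper is ours, not an AWS API):

```python
# Toy resource accounting contrasting virtual servers and containers.
# Capacities mirror the example in the text: 4 CPUs, 16 GB RAM, 1 TB disk.

PHYSICAL = {"cpus": 4, "ram_gb": 16, "disk_gb": 1000}

def fits_vms(specs, physical=PHYSICAL):
    """Return True if the listed workloads fit on the physical server."""
    used_cpus = sum(s["cpus"] for s in specs)
    used_ram = sum(s["ram_gb"] for s in specs)
    used_disk = sum(s["disk_gb"] for s in specs)
    return (used_cpus <= physical["cpus"]
            and used_ram <= physical["ram_gb"]
            and used_disk <= physical["disk_gb"])

# Four VMs at 1 CPU / 4 GB RAM / 250 GB disk reserve the host completely...
vm = {"cpus": 1, "ram_gb": 4, "disk_gb": 250}
print(fits_vms([vm] * 4))  # True: exactly at capacity
print(fits_vms([vm] * 5))  # False: a fifth VM does not fit

# ...whereas containers only count what they actually use.
container = {"cpus": 0.5, "ram_gb": 1, "disk_gb": 2}  # only actual usage counts
print(fits_vms([container] * 4))  # True, with plenty of headroom left
```

    Virtual servers fail the check because their full allocation is reserved up front, used or not; containers pass because only their actual usage counts against the host.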

    Cloud providers were able to improve their service offerings because they maximized the utilization of the physical server resources. Furthermore, it allowed organizations to design their applications to use smaller servers that used fewer resources and could be less expensive. The ability to potentially run an application in a single container resulted in the next level of innovation: serverless computing.

    Introduction to serverless computing

    In 2014, AWS introduced AWS Lambda, which is their serverless computing service.⁴ Lambda allows us to upload and run code as a function without configuring a server or a container. The service uses containers to run the code. When we upload the code to the Lambda service, it is stored as a compressed file inside AWS. When the Lambda function needs to run the code, the service will create a new container, create a volume using the compressed file, execute the code, and delete the container after a period of inactivity. How the Lambda service manages that process is beyond our control. We specify the parameters we would like our Lambda function to have:

    •Amount of RAM

    •Maximum execution time

    •Trigger(s) that start the code execution

    •Function code compressed file

    •Security permissions
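    The parameters above configure how the service runs our code; the code itself is just a handler function that receives an event and returns a result. A minimal sketch (the event shape here is hypothetical — real event formats depend on the trigger):

```python
import json

def handler(event, context):
    """A minimal Lambda-style handler: receive an event, return a response."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# The Lambda service invokes the handler for us when a trigger fires;
# locally, we can simply call it with a sample event.
print(handler({"name": "serverless"}, None))
```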

    There are other parameters that we will learn in later chapters. The Lambda service uses the Lambda function parameters to create a container. The Lambda service is considered serverless instead of a container service for the following reasons:

    •The purpose is to run code

    •We cannot directly manage the containers

    •The containers are short-lived

    •AWS may choose to move away from containers as the underlying technology

    Some serverless providers do not use containers for serverless computing, and how AWS operates the Lambda service is out of our control. A container service assumes containers run for long periods. Containers are often used as small servers, and the container configuration might impact how the application runs. As a result, defining the Lambda service as a different service offering makes sense.

    Moving to a serverless offering provided many benefits to AWS and its customers. In the previous examples, a physical server could host four virtual servers or more than four containers. The physical server can host many more containers because they are created when they are needed and deleted when they are not. Using the previous example, we can potentially host hundreds of Lambda function configurations because the underlying containers only exist when they are needed. The likelihood that all the Lambda functions need to exist simultaneously is low. This benefits AWS customers because they can define as many Lambda functions as needed and only pay when they are used, which can be significantly less expensive than provisioning a server or container.

    The serverless computing model provides the following benefits:

    •On-demand usage

    •Elasticity

    •Scalability

    •Rapid development

    •Reduced infrastructure

    •Reduced maintenance

    •Lower costs

    •Smaller attack surfaces

    The success of serverless computing resulted in cloud providers creating new serverless services and updating existing services to work with serverless applications.

    Introduction to serverless storage

    In 2006, AWS introduced Amazon Simple Storage Service (S3), its object storage service.⁵ S3 was one of AWS’s early service offerings.⁶ S3 is an object store where we can store any type of data as an object associated with a key name. We store data in buckets, which allow us to create logical groups for our objects. We cannot access objects like files in an operating system or on a file share server. We access S3 data using the following AWS capabilities:

    •The web-based console

    •The application programming interface (API)

    •The command line interface (CLI)

    •The software development kit (SDK)

    AWS and many developers consider S3 as part of the AWS serverless offering. Much like how we do not need to worry about how a Lambda function is created and managed, AWS will manage how we save data to S3. With S3, we can:

    •Save an object as big as 5 TB

    •Have no limit on how many objects we can save to a bucket

    •Create up to 1,000 buckets

    S3 provides flexibility and scalability in storage, similar to how Lambda provides them for computing.
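    The bucket-and-key model can be pictured as a two-level mapping. The sketch below is a toy in-memory model to illustrate the concept — it is not the S3 API, and the names are ours:

```python
class ToyObjectStore:
    """A toy in-memory model of object storage: buckets hold keyed objects."""

    def __init__(self):
        self.buckets = {}                 # bucket name -> {key name -> bytes}

    def create_bucket(self, name):
        self.buckets.setdefault(name, {})

    def put_object(self, bucket, key, body: bytes):
        self.buckets[bucket][key] = body  # any data, stored under a key name

    def get_object(self, bucket, key) -> bytes:
        return self.buckets[bucket][key]

store = ToyObjectStore()
store.create_bucket("my-website")
# Key prefixes like "site/" act as logical groupings, not real directories.
store.put_object("my-website", "site/index.html", b"<h1>Hello</h1>")
print(store.get_object("my-website", "site/index.html"))
```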

    Since then, AWS has introduced serverless databases and serverless versions of server-based databases. Object storage is beneficial for storing large amounts of data but can be inefficient for querying and searching. Services exist that enable querying and searching object storage, but using them can result in a more complex design and increased costs. Instead, we can use serverless databases. A popular serverless database is Amazon DynamoDB.

    DynamoDB is a key-value NoSQL (not only SQL) database.⁹ Unlike other SQL or NoSQL database services, DynamoDB does not require us to have a dedicated server. AWS fully manages DynamoDB, using multiple servers and disk drives to handle the traffic and store our database tables. When we use DynamoDB, we can focus on tables and their data without worrying about configuring and managing database servers. We work with the tables and data using the AWS console, API, CLI, or SDK.

    DynamoDB allows us to configure how much data we can read and write per second (that is, read and write capacity). There are two capacity modes:

    •Provisioned capacity: We specify how much read and write capacity we want available to our application. This provides predictability in performance and costs. This mode is best when we have a consistent number of reads and writes to our table.

    •On-demand capacity: We use a pay-per-use mode where we do not need to specify how much capacity we need. This gives us the most flexibility, but we pay a higher price per read and write. This mode is best when we have unpredictable or spiky traffic or when our data sits idle most of the time.
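    The trade-off between the two modes comes down to arithmetic. The sketch below uses made-up prices purely for illustration — they are not actual AWS pricing, so check the current DynamoDB pricing page for real numbers:

```python
# Hypothetical prices, for illustration only -- NOT actual AWS pricing.
PROVISIONED_PRICE_PER_WCU_HOUR = 0.00065    # pay for capacity, used or not
ON_DEMAND_PRICE_PER_MILLION_WRITES = 1.25   # pay only per request

def provisioned_monthly_cost(wcus):
    """Provisioned mode: flat cost for reserved write capacity (30-day month)."""
    return wcus * PROVISIONED_PRICE_PER_WCU_HOUR * 24 * 30

def on_demand_monthly_cost(writes_per_month):
    """On-demand mode: cost scales with the number of requests."""
    return writes_per_month / 1_000_000 * ON_DEMAND_PRICE_PER_MILLION_WRITES

# Steady traffic: 50 writes/sec all month is roughly 130M writes.
steady_writes = 50 * 60 * 60 * 24 * 30
print(provisioned_monthly_cost(50))           # flat and predictable
print(on_demand_monthly_cost(steady_writes))  # costlier for constant load

# Mostly idle table: 100,000 writes a month.
print(on_demand_monthly_cost(100_000))        # far cheaper than reserving capacity
```

    Under these illustrative prices, steady traffic favors provisioned capacity, while an idle table favors on-demand — matching the guidance above.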

    Furthermore, DynamoDB has no limits to the amount of data a table can store. This unlimited storage provides the benefits of object storage with the added features of database operations.¹⁰

    Introduction to serverless services

    Data storage and application interfaces were the next logical areas to introduce a serverless solution.

    Serverless data storage services were designed to have the following key benefits:

    •Elastic capacities

    •Scalable resources

    Many serverless data storage services provide virtually unlimited or very high data storage. These services do not require provisioning for a specific amount of data storage. Instead, data is stored as needed.

    There are different serverless data storage service options which include:

    •Object storage

    •File systems

    •Relational databases

    •Non-relational / key-value / document databases

    We will explore these services in more detail when we explore the AWS serverless services.

    The application interface services can be serverless or not. These services are often categorized as serverless because they work well with serverless computing and provide on-demand, elastic, and scalable characteristics. These application interfaces include:

    •API gateways, management, and services

    •Messaging and notifications

    •Workflow and business logic orchestration

    •Event buses and management

    We will explore these services in more detail when we explore the AWS serverless services.

    Reviewing AWS serverless services

    AWS has a web page dedicated to its serverless capabilities.¹¹ See Figure 1.6, which has a screen capture of a section of the AWS serverless web page showing some serverless services:

    Figure 1.6: A screen capture of the AWS serverless computing web page

    AWS provides the following serverless services and services that work well with serverless computing:

    •Amazon Aurora Serverless

    •Amazon API Gateway

    •Amazon CloudFront

    •Amazon CloudWatch

    •Amazon Cognito

    •Amazon DynamoDB

    •Amazon EventBridge

    •Amazon Elastic File System (EFS)

    •AWS Lambda

    •Amazon Neptune Serverless

    •Amazon OpenSearch Serverless

    •Amazon Redshift Serverless

    •Amazon Relational Database Service (RDS) Proxy

    •Amazon Simple Email Service (SES)

    •Amazon Simple Notification Service (SNS)

    •Amazon Simple Storage Service (S3)

    •Amazon Simple Queue Service (SQS)

    •AWS AppSync

    •AWS Fargate

    •AWS Step Functions

    These services are well documented on the AWS website, but we will review the services that we will use in this book.

    AWS Lambda

    We reviewed Lambda earlier in the chapter. (See the AWS Lambda icon in Figure 1.7.)

    Figure 1.7: AWS Lambda icon

    To summarize this service, Lambda allows us to execute functions (code) without configuring physical servers, virtual servers, or containers. We specify the Lambda function’s configuration, upload the code, and define which events will start (trigger) the code execution.

    We will use Lambda functions as our compute and website backend. See Figure 1.8 for an example of a Lambda function resource:

    Figure 1.8: An example of a Lambda function resource

    Amazon API Gateway

    API Gateway allows us to create and manage an API for our applications.¹² (See the Amazon API Gateway icon in Figure 1.9.)

    Figure 1.9: Amazon API Gateway icon

    We can create APIs that follow Representational State Transfer (REST) designs and the WebSocket communications protocol. There are two flavors of RESTful APIs:

    •REST APIs: They provide rich API endpoints that support the HTTP (Hypertext Transfer Protocol) methods (for example, GET, POST, PUT, DELETE, and OPTIONS). They use the request-response model, where an HTTP request is made to an API endpoint and a response is sent back.

    •HTTP APIs: They provide simpler RESTful APIs and natively support the OpenID Connect and OAuth (Open Authorization) 2.0 protocols.

    We can manually create the API or use the OpenAPI¹³ Specification.
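    When API Gateway forwards a request to a Lambda function in the common proxy style, the HTTP method and path arrive inside the event, and the function routes on them. A sketch (the `httpMethod`, `path`, and `body` fields mirror API Gateway's REST proxy event format; the routes themselves are hypothetical):

```python
import json

def handler(event, context):
    """Route an API Gateway-style proxy event by HTTP method and path."""
    method = event.get("httpMethod")
    path = event.get("path")

    if method == "GET" and path == "/items":
        # Respond to GET /items with a list (hard-coded for illustration).
        return {"statusCode": 200, "body": json.dumps(["item-1", "item-2"])}
    if method == "POST" and path == "/items":
        # Echo the created item back with a 201 status.
        item = json.loads(event.get("body") or "{}")
        return {"statusCode": 201, "body": json.dumps(item)}
    # Anything else is not a known route.
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

print(handler({"httpMethod": "GET", "path": "/items"}, None))
```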

    We will use API Gateway to create the API endpoints for our application. See Figure 1.10 for an example of an API:

    Figure 1.10: An example of an API Gateway API resource

    Amazon Simple Storage Service

    We reviewed the Amazon Simple Storage Service (S3) earlier in the chapter. (See the Amazon S3 icon in Figure 1.11)

    Figure 1.11: Amazon S3 icon

    To summarize this service, S3 allows us to upload various data types as objects. We associate an object with a key name and upload it to the bucket. An object can be up to 5 TB in size, and a bucket has no limit on the number of objects or total disk space.

    We will use S3 to host our website files and save our application data. See Figure 1.12 for an example of a bucket:

    Figure 1.12: An example of an S3 bucket resource

    Amazon CloudFront

    CloudFront provides a Content Delivery Network (CDN) that caches copies of website files at locations that are geographically closer to the client.¹⁴ See the Amazon CloudFront icon in Figure 1.13:

    Figure 1.13: Amazon CloudFront icon

    The client can get the website files faster since the data travels a shorter distance. Also, it is faster to deliver cached copies than to ask a server or AWS service to look up and return the data.

    CloudFront works well with serverless applications because it can deliver cached copies of the data rather than using serverless resources to respond to HTTP requests.

    We will use CloudFront to provide a CDN distribution for the HTML files stored in an S3 bucket. See Figure 1.14 for an example of a CloudFront distribution:

    Figure 1.14: An example of a CloudFront distribution resource

    Amazon DynamoDB

    We reviewed Amazon DynamoDB earlier in the chapter. See the Amazon DynamoDB icon in Figure 1.15:

    Figure 1.15: Amazon DynamoDB icon

    To summarize this service, DynamoDB provides key-value storage, database operations, and provisioned or on-demand read-write capacity.

    We will use DynamoDB to save our website data. See Figure 1.16 for an example of a table:

    Figure 1.16: An example of a DynamoDB table resource

    Amazon CloudWatch

    CloudWatch allows us to monitor our serverless applications using its metrics, alarms, and logs features.¹⁵ See the Amazon CloudWatch icon in Figure 1.17:

    Figure 1.17: Amazon CloudWatch icon

    CloudWatch will automatically create logs and some metrics. We can do the following with CloudWatch:

    •Create a dashboard.

    •Enable detailed monitoring and anomaly detection.

    •Enable application monitoring.

    •View application logs.

    •Create alarms.

    We will use CloudWatch for application logging. See Figure 1.18 for an example of logs:

    Figure 1.18: An example of CloudWatch logs

    Amazon EventBridge

    EventBridge allows us to route events using a bus.¹⁶ See the Amazon EventBridge icon in Figure 1.19:

    Figure 1.19: Amazon EventBridge icon

    We can route events between AWS services and third-party services. It enables us to create rules to schedule events to send to our application. It is serverless and allows us to expand our event-driven designs for our serverless applications.

    We will use EventBridge to schedule events that trigger Lambda functions to process data. See Figure 1.20 for an example of an EventBridge scheduled event:

    Figure 1.20: An example of an EventBridge schedule rule resource
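    Schedule rules accept expressions such as rate(5 minutes) or a cron expression. The rate() syntax is EventBridge's; the parsing logic below is our own sketch of how such an expression maps to an interval:

```python
import re
from datetime import timedelta

def parse_rate(expression: str) -> timedelta:
    """Convert an EventBridge-style rate() expression into a timedelta."""
    match = re.fullmatch(
        r"rate\((\d+) (minute|minutes|hour|hours|day|days)\)",
        expression.strip(),
    )
    if not match:
        raise ValueError(f"not a rate expression: {expression!r}")
    value, unit = int(match.group(1)), match.group(2).rstrip("s")
    # timedelta takes plural keyword arguments: minutes=, hours=, days=.
    return timedelta(**{unit + "s": value})

print(parse_rate("rate(5 minutes)"))  # 0:05:00
print(parse_rate("rate(1 day)"))      # 1 day, 0:00:00
```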

    Amazon Simple Email Service

    Simple Email Service (SES) provides us with a way to send and receive emails. (See the Amazon SES icon in Figure 1.21.)

    Figure 1.21: Amazon SES icon

    SES provides us with a Simple Mail Transfer Protocol (SMTP) service, which allows us to send and receive email from a verified domain or identity. We can read and send emails using the AWS APIs and SDKs.

    We will use SES to send emails from our application. See Figure 1.22 for an example of a verified identity:

    Figure 1.22: An example of an SES verified identity

    Amazon Simple Notification Service

    Simple Notification Service (SNS) provides us with a way to send notifications. See the Amazon SNS icon in Figure 1.23:

    Figure 1.23: Amazon SNS icon

    We can send text messages using the Short Message Service (SMS), send mobile push notifications, and send messages between Lambda functions. We use topics to organize our notifications. We use subscriptions to add:

    •Phone numbers for SMS

    •Platform application endpoints for push notifications

    •Amazon Kinesis Data Firehose endpoints

    •Amazon SQS endpoints

    •AWS Lambda endpoints

    •Email addresses

    •HTTP endpoints

    When we publish a message to a topic, the subscription endpoints will receive the message. See Figure 1.24 for an example of an SNS topic and its
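    The topic-and-subscription fan-out described above can be sketched as a toy publish-subscribe model (plain Python standing in for SNS; the class and callables are ours, not the SNS API):

```python
class ToyTopic:
    """A toy model of an SNS topic: publishing fans out to every subscription."""

    def __init__(self, name):
        self.name = name
        self.subscriptions = []   # callables standing in for endpoints

    def subscribe(self, endpoint):
        self.subscriptions.append(endpoint)

    def publish(self, message):
        for endpoint in self.subscriptions:
            endpoint(message)     # every subscriber receives the message

received = []
topic = ToyTopic("order-events")
topic.subscribe(lambda m: received.append(("email", m)))  # e.g., an email address
topic.subscribe(lambda m: received.append(("sqs", m)))    # e.g., an SQS queue
topic.publish("order-123 created")
print(received)  # both endpoints got the same message
```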
