This document summarizes how companies scale their services and applications on AWS to handle large user loads and data volumes. It cites examples such as Animoto peaking at 5,000 EC2 instances after launching its Facebook app, Dropbox synchronizing over 1 billion files per day, and Airbnb growing past 9 million guests. It then walks through scaling an application from one user to millions, starting with a single EC2 instance and progressively adding services such as S3, RDS, DynamoDB, ElastiCache, Elastic Load Balancing, CloudFront, and Auto Scaling. Throughout, it emphasizes using AWS managed services rather than re-inventing solutions for queuing, storage, and databases.
5. Peak of 5,000 EC2 instances [ Animoto.com ]
[Chart: capacity vs. time, spiking at the launch of the Facebook app]
6. Generate a few TBs of logs/data every few hours
~ 25% of US internet bandwidth on weekends
8. Total # Airbnb Guests: 9 Million
Up by over 5 million since the beginning of the year
[Chart: guest count from Sept 2008 to Sept 2013]
24. Day One, User One:
• A single EC2 instance
  – With the full stack on this host
    • Web app
    • Database
    • Management
    • Etc.
• A single Elastic IP
• Route 53 for DNS
[Diagram: User → Amazon Route 53 → Elastic IP → EC2 instance]
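As a rough sketch of how this day-one setup might be wired together with the AWS CLI; the instance ID, allocation ID, hosted zone ID, domain, and IP address below are hypothetical placeholders, not values from the talk:

# Allocate an Elastic IP and attach it to the single instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0abc1234

# Point a Route 53 A record at the Elastic IP
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'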
27. “We’re gonna need a bigger box”
• Simplest approach
• Can now leverage PIOPS (Provisioned IOPS)
• High I/O instances
• High memory instances
• High CPU instances
• High storage instances
• Easy to change instance sizes
• Will hit a ceiling eventually
[Instance sizes: t2.micro → m3.2xlarge → c3.8xlarge]
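Resizing is, in practice, a stop/modify/start cycle for an EBS-backed instance. A minimal AWS CLI sketch, using a hypothetical instance ID (the instance is briefly offline while this runs):

# Stop the instance, change its type, then start it again
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --instance-type "{\"Value\": \"m3.2xlarge\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0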
28. Day Two, User >1:
First, let’s separate our single host into more than one:
• Web
• Database
  – Make use of a database service?
[Diagram: User → Amazon Route 53 → Elastic IP → Web instance → Database instance]
29. Database Options (Self-Managed → Fully-Managed)
• Database server on Amazon EC2: your choice of database running on Amazon EC2; Bring Your Own License (BYOL)
• Amazon RDS: Microsoft SQL Server, Oracle, MySQL, or PostgreSQL as a managed service; flexible licensing (BYOL or License Included)
• Amazon DynamoDB: managed NoSQL database service using SSD storage; seamless scalability; zero administration
• Amazon Redshift: massively parallel, petabyte-scale data warehouse service; fast, powerful, and easy to scale
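For instance, a Multi-AZ MySQL instance on RDS could be created roughly like this; the identifier, instance class, storage size, and credentials are illustrative placeholders:

# Launch a managed MySQL database with a synchronous standby in another AZ
aws rds create-db-instance \
  --db-instance-identifier appdb \
  --engine mysql \
  --db-instance-class db.m3.medium \
  --allocated-storage 100 \
  --master-username dbadmin \
  --master-user-password '<choose-a-strong-password>' \
  --multi-az \
  --backup-retention-period 7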
30. But how do I choose what DB technology I need? SQL? NoSQL?
33. Why start with SQL?
• Established and well-worn technology
• Lots of existing code, communities, books, background, tools, etc.
• You aren’t going to break SQL DBs in your first 10 million users. No really, you won’t*
*Unless you are doing something SUPER weird with the data or have MASSIVE amounts of it; even then, SQL will have a place in your stack
34. AH HA! You said “massive amounts”, and I will have massive amounts!
35. If your usage is such that you will be generating several TB ( >5 ) of data in the first year OR have an incredibly data-intensive workload, then you might need NoSQL
36. Elastic Load Balancing
• Create highly scalable applications
• Distribute load across EC2 instances in multiple Availability Zones
• Availability: load balances across instances in multiple Availability Zones
• Health checks: automatically checks the health of instances and takes them in or out of service
• Session stickiness: routes requests to the same instance
• Secure Sockets Layer: supports SSL offload from web and application servers with flexible cipher support
• Monitoring: publishes metrics to CloudWatch and can log requests processed
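A minimal sketch of standing up a (classic) Elastic Load Balancer with the AWS CLI; the load balancer name, Availability Zones, health-check path, and instance IDs are assumptions for illustration:

# Create the load balancer listening on HTTP port 80 across two AZs
aws elb create-load-balancer \
  --load-balancer-name web-tier \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --availability-zones us-west-2a us-west-2b

# Health-check the instances so unhealthy ones are taken out of service
aws elb configure-health-check \
  --load-balancer-name web-tier \
  --health-check Target=HTTP:80/health,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=3

# Register the web instances behind it
aws elb register-instances-with-load-balancer \
  --load-balancer-name web-tier \
  --instances i-0123456789abcdef0 i-0fedcba9876543210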
38. This will take us pretty far, honestly, but we care about performance and efficiency, so let’s improve this further
39. Amazon S3
• Object-based storage for the web
• 11 9s of durability
• Good for things like:
  – Static assets (CSS, JS, images, videos)
  – Backups
  – Logs
  – Ingest of files for processing
• “Infinitely scalable”
Amazon DynamoDB
• Managed, provisioned throughput
• Fast, predictable performance
• Fully distributed, fault-tolerant architecture
• Considerations for non-uniform data
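As an illustration of both ideas: static assets can be pushed to S3, and a simple DynamoDB table can be provisioned for session-style data. The bucket name, table name, key, and throughput numbers below are assumptions, not values from the talk:

# Create a bucket and upload the site's static assets
aws s3 mb s3://example-static-assets
aws s3 sync ./static s3://example-static-assets/static --acl public-read

# Provision a DynamoDB table keyed on session ID with fixed read/write capacity
aws dynamodb create-table \
  --table-name Sessions \
  --attribute-definitions AttributeName=SessionId,AttributeType=S \
  --key-schema AttributeName=SessionId,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=50,WriteCapacityUnits=50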
40. CloudFront
• Cache static content at the edge for faster delivery
• Helps lower load on origin infrastructure
• Dynamic and static content
• Streaming video
• Low TTLs (as short as 0 seconds)
• Optimized to work with Amazon EC2, Amazon S3, Elastic Load Balancing, and Amazon Route 53
[Chart: response time and server load compared for No CDN, CDN for static content, and CDN for static & dynamic content]
[Chart: volume of data delivered (Gbps) over one day, 8:00 AM to 9:00 PM]
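A sketch of fronting the static-asset bucket from the earlier example with a CloudFront distribution. The bucket name carries over from that hypothetical example; depending on CLI version, the longer --distribution-config form may be needed instead of this shorthand:

# Create a distribution whose origin is the static-asset bucket
aws cloudfront create-distribution \
  --origin-domain-name example-static-assets.s3.amazonaws.com \
  --default-root-object index.html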
41. Shift some load around:
Let’s lighten the load on our web and database instances:
• Move static content from the web instance to Amazon S3 and Amazon CloudFront
• Move session/state and DB caching to Amazon ElastiCache or Amazon DynamoDB
[Diagram: User → Amazon Route 53 → Amazon CloudFront and Elastic Load Balancer → Web instance, with Amazon S3, ElastiCache, DynamoDB, and an RDS DB instance (Active, Multi-AZ) in one Availability Zone]
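Moving session/state and query caching off the web tier could start with a small Memcached cluster in ElastiCache; the cluster ID, node type, and node count here are illustrative assumptions:

# Launch a two-node Memcached cluster for sessions and DB query caching
aws elasticache create-cache-cluster \
  --cache-cluster-id session-cache \
  --engine memcached \
  --cache-node-type cache.m3.medium \
  --num-cache-nodes 2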
42. Now that our Web tier is much more lightweight, we can revisit the beginning of our talk…
44. Auto Scaling
Automatic resizing of compute clusters based on demand
• Control: define minimum and maximum instance pool sizes and when scaling and cooldown occur
• Integrated with Amazon CloudWatch: use metrics gathered by CloudWatch to drive scaling
• Instance types: runs Auto Scaling for On-Demand and Spot Instances; compatible with VPC
[Diagram: Amazon CloudWatch metrics trigger the Auto Scaling policy]

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name MyGroup \
  --launch-configuration-name MyConfig \
  --min-size 4 \
  --max-size 200 \
  --availability-zones us-west-2c
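To make that group actually scale on demand, a launch configuration is created first (the group above references MyConfig) and a scaling policy is wired to a CloudWatch alarm. A rough sketch; the AMI ID, instance type, thresholds, and policy ARN placeholder are assumptions, not values from the deck:

# Define what each new instance looks like
aws autoscaling create-launch-configuration \
  --launch-configuration-name MyConfig \
  --image-id ami-12345678 \
  --instance-type m3.medium

# Add two instances whenever the scale-out policy fires
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name MyGroup \
  --policy-name scale-out \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 2 \
  --cooldown 300

# Fire that policy when average CPU stays above 70% for 10 minutes
aws cloudwatch put-metric-alarm \
  --alarm-name MyGroup-high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 70 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --dimensions Name=AutoScalingGroupName,Value=MyGroup \
  --alarm-actions <arn-of-the-scale-out-policy>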
45. User >500k+:
[Diagram: User → Amazon Route 53 → Amazon CloudFront and an Elastic Load Balancer; web instances spread across two Availability Zones behind the load balancer; static assets in Amazon S3; ElastiCache and DynamoDB for caching and session state in each zone; an RDS DB instance Active (Multi-AZ) in one zone with its Standby (Multi-AZ) in the other, plus an RDS read replica in each zone]
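Adding the RDS read replicas shown here is a single call per replica. A sketch that reuses the hypothetical appdb identifier from the earlier RDS example:

# Create a read replica of the primary DB in another AZ to offload reads
aws rds create-db-instance-read-replica \
  --db-instance-identifier appdb-replica-1 \
  --source-db-instance-identifier appdb \
  --availability-zone us-west-2b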
47. AWS Application Management Solutions
From higher-level services to do-it-yourself (convenience vs. control):
• AWS Elastic Beanstalk (most convenience)
• AWS OpsWorks
• AWS CloudFormation
• Amazon EC2 (most control)
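As one example from the template-driven end of this spectrum, an environment described in a CloudFormation template can be launched with a single call; the stack name, template file, and parameter below are hypothetical:

# Launch an entire environment from a version-controlled template
aws cloudformation create-stack \
  --stack-name web-stack \
  --template-body file://web-stack.template \
  --parameters ParameterKey=InstanceType,ParameterValue=m3.medium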
48. “Your users around the world don’t care that you wrote your own DB” – Mike Krieger, Instagram co-founder
49. On re-inventing the wheel:
If you find yourself writing your own queue, DNS server, database, storage system, or monitoring tool, stop and ask whether an AWS managed service already solves that problem.
51. User >1 to 10 mil+:
Reaching a million users and above is going to require a bit of all the previous things:
• Multi-AZ
• Elastic Load Balancers between tiers
• Auto Scaling
• Service-oriented architecture
• Serving content smartly (Amazon S3/CloudFront)
• Caching off the DB
• Moving state off tiers that auto-scale
52. User >1 to 10 mil+:
[Diagram: User → Amazon Route 53 → Amazon CloudFront, Amazon S3, and an Elastic Load Balancer fronting web instances in an Availability Zone; an RDS DB instance Active (Multi-AZ) with read replicas; ElastiCache and DynamoDB for caching and state; Amazon SQS feeding worker instances; internal app instances; Amazon CloudWatch for monitoring; Amazon SES for outbound email]
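The SQS/worker pattern in this diagram decouples the web tier from background work: the web tier enqueues a job, and worker instances poll, process, and delete messages. A minimal sketch with a hypothetical queue name, account ID, and payload:

# Web tier: create the queue once, then enqueue work as it arrives
aws sqs create-queue --queue-name image-jobs
aws sqs send-message \
  --queue-url https://sqs.us-west-2.amazonaws.com/123456789012/image-jobs \
  --message-body '{"photoId": "1234", "action": "thumbnail"}'

# Worker instances: long-poll for work, then delete each message once processed
aws sqs receive-message \
  --queue-url https://sqs.us-west-2.amazonaws.com/123456789012/image-jobs \
  --wait-time-seconds 20 \
  --max-number-of-messages 10
aws sqs delete-message \
  --queue-url https://sqs.us-west-2.amazonaws.com/123456789012/image-jobs \
  --receipt-handle <receipt-handle-from-receive-message>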
53. Putting all this together means we should now easily be able to handle millions of users!
65. Helpshift - Innovate while growing fast
• How to scale fast
– Handle spikes of multiple millions of events, issues, users
– Unpredictable
• How to experiment quickly
– Form and validate hypotheses
– Innovate quickly
• How to manage cost
– Capital efficiency and low capex
Who in the room uses Dropbox?
These statistics are from a recent Forbes Magazine article:
Dropbox has over 100 million users
Synchronising over 1 billion files per day
They doubled their customer base in 12 months
All of which is stored in the Cloud
Notes:
http://www.forbes.com/sites/victoriabarret/2012/11/13/dropbox-hits-100-million-users-says-drew-houston/
To do: Add what AWS services they use
Let’s take a look at a modern customer example - Airbnb
By September 2013 they had more than doubled their guest count from the start of the year, reaching 9 million guests
Background intro: Scaling is a big topic, with lots of opinions, guides, how-tos, and 3rd parties. If you are new to scaling on AWS, you might ask yourself this question: “So how do I scale?”
“And if you are like most people, it’s really hard to know where to start. Again, there are all these resources, Twitter-based experts, and blog posts preaching to you about how to scale.” … “So again, where do we start?”
If you are like me, you’ll start where I usually start when I want to learn how to do something. A search engine. In this case I’ve gone and searched for “scaling on AWS” using my favorite search engine.
It’s important to note something about the results here. First off, there are a lot of things to read. This search was from a few months ago, and there were over half a million posts on how to scale on AWS.
Unfortunately for us and our search engine here, however, the first response back is actually not what we are looking for. Auto Scaling IS an AWS service, and it’s great, but it’s not really the place to start.
Back to Auto Scaling. “Auto Scaling is a tool and a destination for your infrastructure. It isn’t a single thing. It’s not a check-box you can click when launching something. Your infrastructure really has to be built with the right properties in mind for Auto Scaling to work.” … “So again, where do we start?”
What do we need first?
We need some basics to lay the foundations we’ll need to build our knowledge of AWS on top of.
Over 30 services, represented in groupings in the console
So let’s start from day one, user one, of our new infrastructure and application.
This is the most basic setup you would need to serve up a web application. We have Route 53 for DNS, an EC2 instance running our web app and database, and an Elastic IP attached to the EC2 instance so Route 53 can direct traffic to us. Now, in scaling this infrastructure, the only real option we have is to get a bigger EC2 instance…
Scaling the one EC2 instance we have to a larger one is the simplest approach to start with. There are a lot of different AWS instance types to choose from depending on your workload. Some have high I/O, CPU, memory, or local storage. You can also make use of EBS-Optimized instances and Provisioned IOPS to help scale the storage for this instance quite a bit.
The key concern here is that you WILL hit a point where we just don’t have a bigger instance class yet, so scaling this way, while it can get you over an initial hump, really isn’t going to get us very far.
The first thing we can do to address the issue of too many eggs in one basket, and to overcome the “no bigger boat” problem, is to split our web app and database out into two instances. This gives us more flexibility in scaling the two independently. And since we are breaking out the database, this is a great time to think about making use of a database service instead of managing it ourselves…
At AWS there are a lot of different options for running databases. One is to just install pretty much any database you can think of on an EC2 instance and manage all of it yourself. If you are really comfortable doing DBA-like activities, such as backups, patching, security, and tuning, this could be an option for you.
If not, then we have a few options that we think are a better idea:
First is Amazon RDS, or Relational Database Service. With RDS you get a managed database instance running MySQL, PostgreSQL, Oracle, or SQL Server, with features such as automated daily backups, simple scaling, patch management, snapshots and restores, high availability, and read replicas, depending on the engine you go with.
Next up we have DynamoDB, a NoSQL database built on top of SSDs. DynamoDB is based on the Dynamo whitepaper published by Amazon.com back in 2007, considered the grandfather of most modern NoSQL databases like Cassandra and Riak. The DynamoDB we have here at AWS is kind of like a cousin of the original paper. One of the key concepts of DynamoDB is what we call “zero administration”. With DynamoDB the only knobs to tweak are the reads and writes per second you want the DB to be able to perform. You set it, and it will give you that capacity, with query responses averaging in the single-digit milliseconds. We’ve had customers with loads such as half a million reads and writes per second without DynamoDB even blinking.
Lastly we have Amazon Redshift, a multi-petabyte-scale data warehouse service. With Redshift, much like most AWS services, the idea is that you can start small and scale as you need to, while only paying for the scale you are at. This means you can store 1 TB of data for less than a thousand dollars per year with Redshift. This is several times cheaper than most other data warehouse providers’ costs, and again, you can scale and grow as your business dictates without needing to sign an expensive contract upfront.
Given that we have all these different options, from running pretty much anything you want yourself, to making use of one of the database services AWS provides, how do you choose? How do you decide between SQL and NoSQL?
Read as is
Read as is
So why start with SQL databases? Generally speaking, SQL-based databases are established and well-worn technology. There’s a good chance SQL is older than most people in this room. It has, however, continued to power most of the largest web applications we deal with on a daily basis. There is a lot of existing code, and there are books, tools, communities, and people who know and understand SQL. Some of these newer NoSQL databases might have a handful of companies, tops, using them at scale. You also aren’t going to break SQL databases in your first 10 million users. And yes, there is an asterisk here, and we’ll get to that in a second. Lastly, there are a lot of clear patterns for scalability that we’ll discuss a bit throughout this talk. So as for my point here at the bottom, I again strongly recommend SQL-based technology, unless your application is doing something SUPER weird with the data, or you’ll have MASSIVE amounts of it; even then, SQL will be in your stack.
AH HA! you say. I said “massive amounts”, and we all assume we’ll have massive amounts, so that means you must be the lone exception in this room… well, let’s clarify this a bit.
If your usage is such that you will be generating several terabytes (greater than 5) of data in the first year, OR you will have an incredibly data-intensive workload, then you might need NoSQL.
For those who aren’t familiar yet with ELB (Elastic Load Balancer), it is a highly scalable load balancing service that you can put in front of tiers of your application where you have multiple instances that you want to share load across. ELB is a really great service in that it does a lot for you without you having to do much. It will create a self-healing, self-scaling load balancer that can do things such as SSL termination, handle sticky sessions, and have multiple listeners. It will also run health checks against the instances behind it, and it puts a bunch of metrics into CloudWatch for you as well. This is a key service in building highly available infrastructures on AWS.
Read this slide.
but it’s not that efficient in terms of either performance or cost, and since those are important too, let’s clean up this infrastructure a bit.
Talk about S3
Talk about CloudFront. Make sure to mention the two charts to the right. Caching static content will certainly speed up your site, but caching static & dynamic content is even better. The chart down below shows data from a real customer who went from very little traffic to a huge spike of over 60 gigabits per second, without having to do anything on their side or notify AWS at all.
Serving our static assets through CloudFront is going to be a massive performance boost to our end-users, but CloudFront can do much more.
Read slide
Read slide
Talk about auto-scaling.
If we add in auto scaling, our caching layer (both inside and outside our infrastructure), and the read replicas with MySQL, we can now handle a pretty serious load. This could potentially even get us into the millions of users by itself if we continue to scale it horizontally and vertically.
Discuss lightly pros/cons of each.
Elastic Beanstalk is easiest to start with, but offers less control. OpsWorks gives you more tools, with a bit more work on your part. CloudFormation is a template-driven tool with its own language, so there’s a bit of a learning curve, but it is very, very powerful. Lastly, you could do all of this manually, but at scale it’s nearly impossible without a huge team.
Read slide.
Read slide
This diagram is missing the other AZ, but we’ve only got so much room on the slide. We can see we’ve added in some internal pools for different tasks, perhaps. Maybe we’re now using SQS for something, and have SES for sending our outbound email. Again, our users will still talk to Route 53, and then to CloudFront, to get to our site and our content served by our ELB and S3.