
Azure Cosmos DB Workshop

Azure Cosmos DB
A globally distributed, massively scalable, multi-model database service

Data models: Key-value, Column-family, Document, Graph
APIs: MongoDB, Table API, and more

• Guaranteed low latency at the 99th percentile
• Elastic scale out of storage & throughput
• Five well-defined consistency models
• Turnkey global distribution
• Comprehensive SLAs

What sets Azure Cosmos DB apart

Turnkey Global Distribution
• Worldwide presence as a foundational Azure service
• Automatic multi-region replication
• Multi-homing APIs
• Manual and automatic failovers
• Designed for high availability

Guaranteed low latency at P99 (99th percentile)
Requests are served from the local region.

           Reads (1 KB)    Indexed writes (1 KB)
  P50      <2 ms           <6 ms
  P99      <10 ms          <15 ms

• Single-digit millisecond latency worldwide
• Write-optimized, latch-free database engine designed for SSD
• Synchronous automatic indexing at sustained ingestion rates
Multiple, well-defined consistency choices
Global distribution forces us to navigate the CAP theorem.

• Writing correct distributed applications is hard
• Five well-defined consistency levels
• Intuitive and practical, with clear PACELC tradeoffs
• Programmatically changeable at any time
• Can be overridden on a per-request basis

Elastically scalable storage and throughput
A single machine is never a bottleneck.

• Transparent server-side partition management
• Elastically scale storage (GB to PB) and throughput (100 to 100M req/sec) across many machines and multiple regions
• Automatic expiration via policy-based TTL
• Pay by the hour; change throughput at any time, paying only for what you need

[Chart: provisioned request/sec (hourly throughput) over time, Nov 2016 – Dec 2016, showing a Black Friday spike]
Schema-agnostic, automatic indexing
At global scale, schema/index management is painful.

• Automatic and synchronous indexing
• Hash, range, and geospatial index types
• Works across every data model
• Highly write-optimized database engine

[Diagram: schema mapped to a physical index]
Multi-model, multi-API
• Database engine operates on an Atom-Record-Sequence (ARS) type system
• All data models can be efficiently translated to ARS
• Multi-model: Key-value, Document, Column-family, and Graph
• Multi-API: SQL (DocumentDB), MongoDB, Table, Cassandra, and Gremlin
• More data models and APIs to be added


Industry-leading, enterprise-grade SLAs
• 99.99% availability – even with a single region
• Made possible by a highly redundant storage architecture
• Guaranteed durability – writes are majority-quorum committed
• First and only service to offer SLAs on:
  • Low latency
  • Consistency
  • Throughput
Security & Compliance
Always encrypted at rest and in transit
• Encryption at rest – AES-256
• Encryption in transit – SSL/TLS

Fine-grained "row level" authorization
• Users/Permissions with Resource Tokens

Network security with IP firewall rules and VNET

Comprehensive Azure compliance certifications:
• ISO 27001, ISO 27018, EUMC, HIPAA
• PCI, SOC1 and SOC2
• FedRAMP, HITRUST
Common Use Cases and Scenarios

Content Management Systems
Azure Traffic Manager routes traffic across regions (Azure regions A, B, and C), each backed by a globally distributed Azure Cosmos DB (app + session state).
Internet of Things – Telemetry & Sensor Data
Azure IoT Hub → Azure Databricks Spark (Structured Streaming) → Azure Cosmos DB (Hot, TTL = 90 days) → Azure API App (user-facing app)
Azure Function (triggered via the Cosmos DB change feed) → Azure Storage (Cold)
Retail Product Catalogs
Azure Web App (e-commerce app) → Azure Cosmos DB (product catalog) → Azure Search (full-text index)
Azure Storage (logs, static catalog content); Azure Cosmos DB (session state)
Retail Order Processing Pipelines
Azure Functions (E-Commerce Checkout API) → Azure Cosmos DB (Order Event Store) → Azure Functions (Microservice 1: Tax; Microservice 2: Payment; … Microservice N: Fulfillment)
Real-time Recommendations
Order transactions: Shoppers → E-commerce Store → Azure Container Service (Order Transaction API) → Azure Cosmos DB (Customer Orders)
Apache Spark on Azure Databricks computes Product + User Vectors into Azure Cosmos DB
Online Recommendations Service: Azure Container Service (Recommendations API) serves results from Azure Cosmos DB (Product + User Vectors)
Multiplayer Gaming
Azure Traffic Manager → Azure API Apps (game backend) → Azure Cosmos DB (game database) → Azure Databricks (game analytics)
Azure CDN → Azure Storage (game files)
Azure Functions → Azure Notification Hubs (push notifications)
Scale-out Computation
Spark SQL, Spark Streaming, MLlib (machine learning), and GraphX (graph) on Apache Spark on Databricks

Scale-out Database
Spark Connector using the SQL API → Azure Cosmos DB
Let's zoom in: Azure Cosmos DB

Resource Model
Account (addressed via ********.azure.com; secured with authorization keys, e.g. IGeAvVUp …)
  Database – also contains Users, which hold Permissions
    Container – projected as a Collection, Graph, or Table depending on the API
      Item – containers also hold Sprocs, Triggers, UDFs, and Conflicts

Note: Throughput is provisioned per container, and can also be shared across a set of collections.
System design (logical)
Tenants own containers (Tables, Collections, Graphs); each container maps to one or more resource partitions.

Resource Partition
• A consistent, highly available, and resource-governed coordination primitive
• Consists of a replica set (leader, followers, and a forwarder that replicates to remote resource partitions), with each replica hosting an instance of the database engine
• Uniquely belongs to a tenant
• Owns a set of keys
Request Units
Request Units (RU) is a rate-based currency.

• Abstracts physical resources (% CPU, % memory, % IOPS) for performing requests
• Key to providing isolation in a multi-tenant environment
• Enables SLAs for predictable performance
• Covers foreground and background activities

Normalized across the various access methods (GET, POST, PUT, Query):
• 1 RU = 1 read of a 1 KB record
• Each request consumes a fixed number of RUs
• Applies to reads, writes, queries, and stored procedure execution

Provisioned in terms of RU/sec:
• Rate limiting is based on the amount of throughput provisioned – incoming requests under the max RU/sec are served without throttling
• Throughput can be increased or decreased instantaneously
• Billing is metered hourly
• Background processes like TTL expiration and index transformations are scheduled when replicas are quiescent
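Every SDK response exposes the request charge, so you can measure the RU cost of each operation yourself. A minimal sketch with the .NET DocumentClient SDK (the client and documentLink values are placeholders):

    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    class RuProbe
    {
        public static async Task MeasureChargeAsync(DocumentClient client, string documentLink)
        {
            // Read a document and inspect how many RUs the operation consumed.
            ResourceResponse<Document> response = await client.ReadDocumentAsync(documentLink);
            Console.WriteLine("Request charge: " + response.RequestCharge + " RUs");
        }
    }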
Pricing Example

Storage Cost
  Avg Record Size (KB):                   1
  Number of Records:                      100,000,000
  Total Storage (GB):                     100
  Monthly Cost per GB:                    $0.25
  Expected Monthly Cost for Storage:      $25.00

Throughput Cost
  Operation Type   Requests per Second   Avg RUs per Request   RUs Needed
  Create           100                   5                     500
  Read             400                   1                     400
  Total RU/sec:                                                900

  Monthly Cost per 100 RU/sec:            $6.00
  Expected Monthly Cost for Throughput:   $54.00

Total Monthly Cost
  [Total Monthly Cost] = [Monthly Cost for Storage] + [Monthly Cost for Throughput]
                       = $25 + $54
                       = $79 per month

* Pricing may vary by region; for up-to-date pricing, see: https://azure.microsoft.com/pricing/details/cosmos-db/


Partitioning
The partitioning scheme is the top-most design decision in Cosmos DB.

A Cosmos DB container (e.g. a collection) is created with a partition key – for example, User Id – which provides the logical partitioning abstraction.

Behind the Scenes: Physical Partition Sets
hash(User Id) – pseudo-random distribution of data over the range of possible hashed values.

[Diagram: user ids (Dharma, Andrew, Shireesh, Karthik, Rimma, …, Mike, Bob, Alice, Carol) hashed across Partition 1, Partition 2, … Partition n]

A frugal number of partitions is used, based on actual storage and throughput needs (yielding scalability with low total cost of ownership).
What happens when partitions need to grow?
Behind the Scenes: Physical Partition Sets
Partition ranges can be dynamically sub-divided to seamlessly grow the database as the application grows, while sedulously maintaining high availability.

[Diagram: Partition X (Dharma, Shireesh, Rimma, Karthik, …) splits into Partition X1 (Dharma, Rimma, Alice, …) and Partition X2 (Shireesh, Karthik, Carol, …)]

Best of all: partition management is completely taken care of by the system – you don't have to lift a finger; the database takes care of you.

Best Practices: Design Goals for Choosing a Good Partition Key

1) Distribute the overall request + storage volume
   • Avoid "hot" partition keys
2) Partition key is the scope for [efficient] queries and transactions
   • Queries can be intelligently routed via the partition key
   • Omitting the partition key on a query requires a fan-out

Steps for Success
1. Ballpark scale needs (size/throughput)
2. Understand the workload: # of reads/sec vs. writes/sec
   • Use the 80/20 rule to help optimize the bulk of the workload
   • For reads – understand the top X queries (look for common filters)
   • For writes – understand transactional needs, and the ratio of inserts vs. updates

General Tips
• Don't be afraid of having too many partition key values
• Partition keys are logical
• More partition keys => more scalability
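The partition key is declared when the container is created. A minimal sketch with the .NET DocumentClient SDK (the database name, collection name, key path, and throughput are assumptions for illustration):

    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    DocumentCollection collection = new DocumentCollection { Id = "users" };
    collection.PartitionKey.Paths.Add("/userId");   // partition key chosen per the goals above

    await client.CreateDocumentCollectionAsync(
        UriFactory.CreateDatabaseUri("mydb"),
        collection,
        new RequestOptions { OfferThroughput = 1000 });   // provisioned RU/sec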
Object Model Design
A few notes about containers:

• Containers do NOT enforce schema
• There are benefits to co-locating multiple types in a container
• Annotate records with a "type" property

Co-locating types in the same container gives you the ability to query across multiple entity types with a single network request.

For example, we have two types of documents: cat and person.

{
   "id": "Andrew",
   "type": "Person",
   "familyId": "Liu",
   "worksOn": "Azure Cosmos DB"
}

{
   "id": "Ralph",
   "type": "Cat",
   "familyId": "Liu",
   "fur": {
      "length": "short",
      "color": "brown"
   }
}

We can query both types of documents, without needing a JOIN, simply by running a query that does not filter on type:

SELECT * FROM c WHERE c.familyId = "Liu"


If we want to filter on type = "Person", we simply add a filter on type to our query:

SELECT * FROM c WHERE c.familyId = "Liu" AND c.type = "Person"


Co-locating types in the same container

• Ability to query across multiple entity types with a single network request
• Ability to perform transactions across multiple types
Global Distribution
Why Global Distribution?

High Availability
• Automatic and manual failover
• Multi-homing API removes the need for app redeployment

Low Latency (anywhere in the world)
• Packets cannot move faster than the speed of light
• Sending a packet across the world under ideal network conditions takes hundreds of milliseconds
• You can cheat the speed of light – using data locality
• CDNs solved this for static content
• Azure Cosmos DB solves this for dynamic content

Note: For multi-master enabled accounts, the priority list indicates which region is the designated "hub" region for resolving write conflicts.
Consistency
ACID != CAP

Consistency w.r.t. transactions is NOT the same thing as consistency w.r.t. replication: the former is about moving from one valid state to another for a single given transaction; the latter is about getting a consistent view across replicated copies of data.

Consider a value replicated across three regions (West US, East US, North Europe), each initially holding Value = 5. A write in one region updates 5 => 6, but the other replicas still hold Value = 5 until replication catches up.

What happens when a network partition is introduced?


Reader: What is the value?
• Should it see 5? (prioritize availability)
• Or does the system go offline until the network is restored? (prioritize consistency)

Brewer's CAP Theorem: it is impossible for a distributed data store to simultaneously provide more than 2 of the following 3 guarantees: Consistency, Availability, Partition Tolerance.

Latency: a packet of information can travel only as fast as the speed of light; replication between distant geographic regions can take hundreds of milliseconds.

Even without a partition, while an update 5 => 6 is still propagating between regions, two readers may ask: what is the value?

• Should a reader see 5 immediately? (prioritize latency)
• Does Reader B see the same result as Reader A? (quorum impacts throughput)
• Or does it sit and wait for 5 => 6 to propagate? (prioritize consistency)

PACELC Theorem: in the case of network partitioning (P) in a distributed computer system, one has to choose between availability (A) and consistency (C) (as per the CAP theorem); but else (E), even when the system is running normally in the absence of partitions, one has to choose between latency (L) and consistency (C).
Programmable Data Consistency
A spectrum of choices, from strong consistency (high latency) to eventual consistency (low latency); the middle of the spectrum is the choice for most distributed apps.

Well-defined consistency models
• Intuitive programming model
• 5 well-defined consistency models
• Overridable on a per-request basis
• Clear tradeoffs: latency, availability, throughput
Consistency Level    Guarantees

Strong               Linearizability (once an operation is complete, it is visible to all).

Bounded Staleness    Consistent prefix. Reads lag behind writes by at most k prefixes or t interval.
                     Similar properties to strong consistency (except within the staleness window),
                     while preserving 99.99% availability and low latency.

Session              Consistent prefix. Within a session: monotonic reads, monotonic writes,
                     read-your-writes, write-follows-reads. Predictable consistency for a session,
                     high read throughput + low latency.

Consistent Prefix    Reads will never see out-of-order writes (no gaps).

Eventual             Potential for out-of-order reads. Lowest cost for reads of all consistency levels.
Bounded Staleness: bounds are set server-side via the Azure Portal.
Session Consistency: the session is controlled using a "session token".
• Session tokens are automatically cached by the client SDK
• Can be pulled out and used to override other requests (to preserve a session between multiple clients)

    string sessionToken;

    // First client writes a document and captures the session token.
    using (DocumentClient client = new DocumentClient(new Uri(""), ""))
    {
        ResourceResponse<Document> response = client.CreateDocumentAsync(
            collectionLink,
            new { id = "an id", value = "some value" }
        ).Result;
        sessionToken = response.SessionToken;
    }

    // Second client reads with the captured token to preserve the session.
    using (DocumentClient client = new DocumentClient(new Uri(""), ""))
    {
        ResourceResponse<Document> read = client.ReadDocumentAsync(
            documentLink,
            new RequestOptions { SessionToken = sessionToken }
        ).Result;
    }
Consistency can be relaxed on a per-request basis:

    client.ReadDocumentAsync(
        documentLink,
        new RequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual }
    );
Indexing
Schema-agnostic, automatic indexing
Automatically index every property of every record without having to define schemas and indices upfront.

• No need for schema and index management
• Works across every data model
• Latch-free data structure for a highly write-optimized database engine
• Multiple index types: hash, range, and geospatial

[Diagram: schema mapped to a physical index]
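The indexing policy can also be tuned per container when indexing every path is unnecessary. A hedged sketch with the .NET SDK (the database/collection names and paths are assumptions, not from the deck):

    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    DocumentCollection collection = new DocumentCollection { Id = "people" };
    collection.PartitionKey.Paths.Add("/familyId");

    // Index only the paths used as query filters; exclude everything else.
    collection.IndexingPolicy.Automatic = true;
    collection.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/familyId/?" });
    collection.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/type/?" });
    collection.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/*" });

    await client.CreateDocumentCollectionAsync(
        UriFactory.CreateDatabaseUri("mydb"), collection,
        new RequestOptions { OfferThroughput = 1000 });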
SQL API
Query Demo:
https://www.documentdb.com/sql/demo
SQL API
Example: SQL Parameterization

Example: LINQ
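The original slides showed these examples as screenshots; below is a hedged reconstruction with the .NET DocumentClient SDK. The Person class and its property mapping are assumptions for illustration:

    using System.Linq;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;
    using Newtonsoft.Json;

    public class Person
    {
        [JsonProperty("id")] public string Id { get; set; }
        [JsonProperty("familyId")] public string FamilyId { get; set; }
    }

    // SQL parameterization: never concatenate user input into the query text.
    IQueryable<Person> sqlQuery = client.CreateDocumentQuery<Person>(
        collectionLink,
        new SqlQuerySpec(
            "SELECT * FROM c WHERE c.familyId = @familyId",
            new SqlParameterCollection { new SqlParameter("@familyId", "Liu") }));

    // The same query expressed via LINQ:
    IQueryable<Person> linqQuery = client.CreateDocumentQuery<Person>(collectionLink)
        .Where(p => p.FamilyId == "Liu");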
SQL API
Query results are paginated. ToList() automatically iterates through all pages.
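A hedged sketch of manual pagination (the alternative to ToList()), reusing the Person class sketched above and the SDK's IDocumentQuery interface:

    using Microsoft.Azure.Documents.Client;
    using Microsoft.Azure.Documents.Linq;

    // Fetch results one page at a time instead of materializing everything.
    IDocumentQuery<Person> pagedQuery = client.CreateDocumentQuery<Person>(
            collectionLink,
            "SELECT * FROM c WHERE c.familyId = 'Liu'",
            new FeedOptions { MaxItemCount = 100 })   // page size hint
        .AsDocumentQuery();

    while (pagedQuery.HasMoreResults)
    {
        FeedResponse<Person> page = await pagedQuery.ExecuteNextAsync<Person>();
        foreach (Person person in page) { /* process one page of results */ }
    }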


SQL API

Cross-Partition Queries
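The slide's screenshot is not preserved; a hedged sketch of a cross-partition query follows (the query text is illustrative):

    // Queries without a partition key filter must opt in to the fan-out.
    var crossPartitionQuery = client.CreateDocumentQuery<Person>(
        collectionLink,
        "SELECT * FROM c WHERE c.type = 'Person'",
        new FeedOptions
        {
            EnableCrossPartitionQuery = true,
            MaxDegreeOfParallelism = 4   // how many partitions to query in parallel
        });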
Concurrency
Write Optimized Database Engine
Designed for sustained large write volume without any term locality.

• Lock free; threads never block
• Log structured with large elastic writes
• Blind incremental updates
• Low write, read, and space amplification
• Index updates must operate within frugal resource budgets
• Optimistic concurrency control via the etag property

[Diagram: B-tree vs. log-structured store with in-memory cache]


Optimistic Concurrency Control

Every document carries system properties, including an _etag that changes on every update:

{
   "id": "2c9cddbb-a011-4947-94c2-6f8ccf421d2e",
   "_rid": "o8ExAJlS4xRIAAAAAAAAAA==",
   "_self": "dbs/o8ExAA==/colls/o8ExAJlS4xQ=/docs/o8ExAJlS4xRIAAAAAAAAAA==/",
   "_etag": "\"2e004542-0000-0000-0000-5af31e8c0000\"",
   "_attachments": "attachments/",
   "_ts": 1525882508
}
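The original slides showed the replace-with-etag pattern as screenshots; a hedged .NET sketch of the same idea (documentLink and the property name are placeholders):

    using System.Net;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    // Read the document, then replace it only if nobody changed it in between.
    Document doc = await client.ReadDocumentAsync(documentLink);
    doc.SetPropertyValue("value", "updated value");

    try
    {
        await client.ReplaceDocumentAsync(doc.SelfLink, doc, new RequestOptions
        {
            AccessCondition = new AccessCondition
            {
                Type = AccessConditionType.IfMatch,
                Condition = doc.ETag   // fails with 412 if the _etag has changed
            }
        });
    }
    catch (DocumentClientException e) when (e.StatusCode == HttpStatusCode.PreconditionFailed)
    {
        // Precondition failed: reload the document and retry the update.
    }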
Transactions
JavaScript Language Integrated Transactions

• ACID transactions over multiple records, scoped to a partition key
• Rich programming model via stored procedures
• Exposed via JavaScript as a modern-day T-SQL
• Snapshot isolation at the beginning of script invocation
• Writes get atomically committed upon successful script invocation
• Exceptions (via the "throw" keyword) roll back the transaction

[Diagram: compiled JavaScript runs against the store within a context pool; committed writes replicate to other replicas]
JavaScript Language Integrated Transactions

Example: a stored procedure that atomically swaps the "item" property of two player documents sharing the same partition key. The system checks for etag violations at commit time to avoid conflicts. (The slide's code is lightly repaired here to use the server-side SDK's callback convention.)

    function swapItems(playerId1, playerId2) {
        // Query both player documents within this partition.
        var accepted = __.filter(function (doc) {
            return doc.id == playerId1 || doc.id == playerId2;
        }, function (err, feed) {
            if (err) throw err;
            if (!feed || feed.length !== 2) throw 'Players not found, abort';

            var player1 = feed[0], player2 = feed[1];
            var temp = player1.item;
            player1.item = player2.item;
            player2.item = temp;

            // Replace both documents; any thrown error rolls back the whole transaction.
            __.replaceDocument(player1._self, player1, function (err) {
                if (err) throw 'Unable to update players, abort';
                __.replaceDocument(player2._self, player2, function (err) {
                    if (err) throw 'Unable to update players, abort';
                });
            });
        });
        if (!accepted) throw 'Query not accepted, abort';
    }

Stored Procedures Best Practices + Caveats
ACID transactions across multiple records in a distributed system involve tradeoffs:
• These are laws of physics – they cannot be avoided
• Transactions across multiple machines require expensive coordination – the common approach is 2-phase commit
• Providing isolation against read/write skew from other concurrent transactions also involves concurrency tradeoffs

Guidelines & Best Practices:
• Sprocs must be scoped to a partition key value
• Sprocs have bounded execution (5 second rule)
• CRUD methods expose an isAccepted API to help a script detect when it is nearing the execution boundary
• Long running transactions should be broken up into "chunks" with a continuation model
  • Ex: return a Boolean indicating whether the transaction is done, and include metadata (a _ts watermark or pointer) to help resume the business logic
• Sprocs are implemented in JS – use the callback convention to serialize control flow and avoid queueing up too many async requests
• Tip: avoid deserializing string => object input unless required – this uses unnecessary CPU / RUs
Change Feed
Azure Cosmos DB Change Feed
A persistent log of records within an Azure Cosmos DB container, in the order in which they were modified.
Common Scenarios

Event Sourcing for Microservices
New events are written to a persistent event store (Azure Cosmos DB); actions are triggered from the change feed, fanning out to Microservice #1, Microservice #2, … Microservice #N.
Materializing Views
An application writes subscription records to Cosmos DB; the change feed drives a materialized view aggregating per-user totals.

  Source records:                          Materialized view:
  Subscription  User  Create Date          User  Total Subscriptions
  123abc        Ben6  6/17/17              Ben6  2
  456efg        Ben6  3/14/17              Jen4  1
  789hij        Jen4  8/1/16               Joe3  1
  012klm        Joe3  3/4/17

Replicating Data
CRUD data in Cosmos DB; replicate records via the change feed to a secondary datastore (e.g. an archive).
Working with Change Feed

Step 1: Retrieve a list of the partition key ranges.
Step 2: Consume the change feed on each PartitionKeyRange.
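The slides showed these steps as screenshots; a hedged reconstruction with the .NET DocumentClient SDK follows (the collection link is a placeholder, and continuation tokens should be persisted per range in real use):

    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;
    using Microsoft.Azure.Documents.Linq;

    // Step 1: retrieve the partition key ranges for the collection.
    FeedResponse<PartitionKeyRange> ranges =
        await client.ReadPartitionKeyRangeFeedAsync(collectionLink);

    // Step 2: consume the change feed on each PartitionKeyRange.
    foreach (PartitionKeyRange range in ranges)
    {
        IDocumentQuery<Document> query = client.CreateDocumentChangeFeedQuery(
            collectionLink,
            new ChangeFeedOptions
            {
                PartitionKeyRangeId = range.Id,
                StartFromBeginning = true
            });

        while (query.HasMoreResults)
        {
            FeedResponse<Document> changes = await query.ExecuteNextAsync<Document>();
            foreach (Document changed in changes) { /* process each changed record */ }
        }
    }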
Change Feed Processor Library
Behind the scenes, the library manages leases and distributes partition key ranges across hosts.

Step 1: Implement ProcessChangesAsync() on IChangeFeedObserver.
Step 2: Register the IChangeFeedObserver with a ChangeFeedEventHost.
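A hedged sketch against the v1 Change Feed Processor library (the namespace, host name, and DocumentCollectionInfo variables are assumptions for illustration):

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using DocumentDB.ChangeFeedProcessor;

    // Step 1: implement the observer.
    class DocumentObserver : IChangeFeedObserver
    {
        public Task OpenAsync(ChangeFeedObserverContext context) => Task.CompletedTask;

        public Task CloseAsync(ChangeFeedObserverContext context, ChangeFeedObserverCloseReason reason)
            => Task.CompletedTask;

        public Task ProcessChangesAsync(ChangeFeedObserverContext context, IReadOnlyList<Document> docs)
        {
            // React to each batch of changed documents here.
            return Task.CompletedTask;
        }
    }

    // Step 2: register the observer with a ChangeFeedEventHost.
    var host = new ChangeFeedEventHost(
        "hostName",
        feedCollectionInfo,    // DocumentCollectionInfo for the monitored collection
        leaseCollectionInfo);  // DocumentCollectionInfo for the lease collection
    await host.RegisterObserverAsync<DocumentObserver>();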


Security & Compliance
Always encrypted at rest and in transit
• Encryption at rest – AES-256
• Encryption in transit – SSL/TLS

Fine-grained "row level" authorization
• Users/Permissions with Resource Tokens

Network security with IP firewall rules and VNET

Comprehensive Azure compliance certifications:
• ISO 27001, ISO 27018, EUMC, HIPAA
• PCI, SOC1 and SOC2
• FedRAMP, HITRUST
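For the resource-token model, a hedged .NET sketch (the database, user, and permission names are illustrative):

    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    // Create a user inside the database, then grant it read access to one collection.
    User user = await client.CreateUserAsync(
        UriFactory.CreateDatabaseUri("mydb"), new User { Id = "appUser" });

    Permission permission = await client.CreatePermissionAsync(
        user.SelfLink,
        new Permission
        {
            Id = "readOrders",
            PermissionMode = PermissionMode.Read,
            ResourceLink = collectionLink
        });

    // permission.Token is the scoped resource token handed to the client app.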
A few more tips & tricks

Bulk Executor Library
• Supports bulk import and update
• Automatically handles congestion control + transient errors
• 10x client-side performance improvement
• Available for .NET and Java
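A hedged sketch of bulk import (assuming the .NET BulkExecutor package; the client, collection, and documents variables are placeholders):

    using System;
    using Microsoft.Azure.CosmosDB.BulkExecutor;
    using Microsoft.Azure.CosmosDB.BulkExecutor.BulkImport;

    // Initialize once per collection, then import a batch of documents.
    IBulkExecutor bulkExecutor = new BulkExecutor(client, collection);
    await bulkExecutor.InitializeAsync();

    BulkImportResponse result = await bulkExecutor.BulkImportAsync(
        documents,           // IEnumerable<object> of documents to write
        enableUpsert: true); // update existing records instead of failing

    Console.WriteLine($"Imported {result.NumberOfDocumentsImported} docs " +
                      $"using {result.TotalRequestUnitsConsumed} RUs");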


Azure Cosmos DB + Apache Spark
1. The Spark master node connects to the Cosmos DB gateway node.
2. Metadata is returned to the Spark master node.
3. The query is executed from the Spark worker nodes in parallel against the Cosmos DB data nodes (via the Spark-DocumentDB Connector, Java).
4. Query results are returned from the Cosmos DB data nodes to the Spark worker nodes.
Components of a Lambda Architecture
1. All data is pushed into both the batch layer and the speed layer for processing.
2. The batch layer has a master dataset (an immutable, append-only set of raw data) and pre-computes the batch views.
3. The serving layer has batch views so data is available for fast queries.
4. The speed layer compensates for processing time (to the serving layer) and deals with recent data only.
5. All queries can be answered by merging results from the batch views and real-time views.

Source: http://lambda-architecture.net/
Lambda Architecture Simplified
With Cosmos DB, new data lands in Cosmos DB collections, and the change feed feeds both the pre-computed batch views (master dataset) and the computed real-time views.

1. All data is pushed into Cosmos DB for processing.
2. The batch layer has a master dataset (an immutable, append-only set of raw data) and pre-computes the batch views.
3. The serving layer has batch views so data is available for fast queries.
4. The speed layer compensates for processing time (to the serving layer) and deals with recent data only.
5. All queries can be answered by merging results from the batch views and real-time views.
Troubleshooting Guide

Troubleshooting 429s – Rate Limited Calls

Master resource throttling?
• Repeated unnecessary calls to "master" resources (Read Collection, Read Offer, Read Database, multiple client instances)?
• Fix: re-use a singleton instance of the client and refactor redundant calls.

High RU charges for operations in the time window of the rate-limited calls?
• Queries? See Troubleshooting Query Performance below.
• Ingestion path (Create/Replace/Upsert)? See Troubleshooting Ingestion Performance below.

High volume of requests (even though RU charges are low)?
• Skew in the data? Is there an uneven distribution of data and/or requests across partition keys? Is the cardinality of the partition key too low?
  • Fix: refactor the data model design; follow this guide for partitioning best practices: https://docs.microsoft.com/en-us/azure/cosmos-db/partition-data
• Read-only stored procedures? Sprocs are optimized for atomically and transactionally writing multiple records – not ideal for read-only procedural logic.
  • Fix: re-write read-only logic as query/read operations.
Troubleshooting Query Performance

High RU charges for query operations?
• Queries on id? Use a document read instead of a query.
• Aggregate queries? Materialize a view to offload computation from the read path to the document write path; run aggregations client side; store the results of aggregations in a second collection; use the ChangeFeed library to reduce latency for generating real-time views (https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed).
• Sorting on a field with a large number of distinct values (e.g. timestamp)? Use a document read over a query where possible; contact support to preview upcoming indexing improvements.
• Otherwise, ask for QueryMetrics: https://docs.microsoft.com/en-us/azure/cosmos-db/sql-api-sql-query-metrics#query-execution-metrics

High query latencies?
• Are there a large number of cross-partition queries? Are there a large number of physical partitions for the collection?
  • Control the degree of parallelism instead of setting MaxDegreeOfParallelism to -1 – this ensures a fixed number of partitions are queried in parallel.
  • Contact support for help re-tuning queries.
• Mongo API? Cross-partition queries in Mongo are serial; these queries are expected to have high latencies.
Troubleshooting Ingestion Performance

High RU charges for ingestion operations?
• Large documents? Can the documents be split into multiple documents? This can reduce the scope and RU cost of updates. While the total RUs consumed for other operations will be approximately the same, records can partially succeed instead of rate limiting large chunks.
• Default indexing policy with a large number of fields? The default policy indexes all fields, which might be unnecessary. If only a known subset of fields will be used as query filters, the indexing policy can be modified to include only those fields – this has empirically proven to show large improvements in throughput utilization. See: https://docs.microsoft.com/en-us/azure/cosmos-db/indexing-policies#index-paths
• Is the BulkExecutor being used? Try the BulkExecutor if not already using it.
General Performance Troubleshooting

General/high-level troubleshooting:
• What is the memory usage on the VM?
• What is the CPU utilization on the VM? Does the VM have sufficient cores?
• How many threads are being executed in parallel?
• How many client instances are being created per process? Ideally, the number of instances should be limited to 1 per process.
• Linux VMs: are you bottlenecked by the max open-file limit (i.e. the number of open connections)? i.e. nofiles
Thank you and Q&A

Follow @AzureCosmosDB
cosmosdb.com #azure-cosmosdb
Use #CosmosDB
