Understanding Consistency Models in Azure Cosmos DB to manage availability and performance

Explore the five SLA-backed consistency levels (strong, bounded staleness, session, consistent prefix, and eventual) available in Azure Cosmos DB that enable developers to manage availability and performance tradeoffs. This video will help you to understand how they work and the consistency guarantees they provide when using a managed distributed NoSQL database.
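
If you want to experiment with these levels yourself, here is a minimal sketch using the Python SDK (azure-cosmos). The account URL, key, and database/container names are placeholders, and keep in mind that a client can only request a consistency level equal to or weaker than the account's default:

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint and key -- replace with your own account values.
ACCOUNT_URL = "https://<your-account>.documents.azure.com:443/"
ACCOUNT_KEY = "<your-primary-key>"

# Request "Session" consistency for this client. The account default is set on the
# account itself; a client may relax it (e.g. to "Eventual") but never strengthen it.
client = CosmosClient(ACCOUNT_URL, credential=ACCOUNT_KEY, consistency_level="Session")

container = client.get_database_client("mydb").get_container_client("mycontainer")
print(container.read_item(item="item-1", partition_key="pk-1"))
```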

Orchestration-based Saga on Serverless

Recently, I came across a very good Azure sample on GitHub about an orchestration-based Saga on serverless. It looks very promising as a way for serverless architectures to solve real-world business problems, so I'm sharing the details here along with a link to the code repository. I hope you find it useful.

Problem

Contoso Bank is building a new payment platform that leverages microservices to rapidly bring new features to market, in an environment where legacy and new applications coexist. Operations are now distributed across applications and databases, and Contoso needs a new architecture and implementation design to ensure data consistency on financial transactions.

The traditional ACID approach no longer suits Contoso Bank, as the data involved in an operation is now spread across isolated databases. Instead of ACID transactions, a Saga addresses the challenge by coordinating a workflow through a message-driven sequence of local transactions to ensure data consistency.

Solution

The solution simulates a money transfer scenario, where an amount is transferred between bank accounts through credit/debit operations and an operation receipt is generated for the requester. It is a reference implementation of the Saga pattern using an orchestration approach in a serverless architecture on Azure. The solution leverages Azure Functions for the implementation of the Saga participants, Azure Durable Functions for the implementation of the Saga orchestrator, Azure Event Hubs as the data streaming platform, and Azure Cosmos DB as the database service.
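
The repository contains the full implementation. Purely as a conceptual illustration (this is not the sample's code), the sketch below shows the kind of sequencing the Saga orchestrator performs: each step is a local transaction, and a failure triggers compensating actions for the steps already completed. The function names here are made up; in the actual sample this coordination is handled by the Azure Durable Functions orchestrator and the participants communicate through Event Hubs.

```python
class SagaError(Exception):
    """Raised when a Saga step fails and compensation has been executed."""

def debit(account: str, amount: float) -> None:
    print(f"debited {amount} from {account}")        # local transaction 1

def credit(account: str, amount: float) -> None:
    print(f"credited {amount} to {account}")          # local transaction 2

def issue_receipt(transaction_id: str) -> None:
    print(f"receipt issued for {transaction_id}")     # local transaction 3

def refund(account: str, amount: float) -> None:
    print(f"compensation: refunded {amount} to {account}")

def transfer_saga(transaction_id: str, source: str, target: str, amount: float) -> None:
    """Coordinates the money transfer as a sequence of local transactions."""
    compensations = []
    try:
        debit(source, amount)
        compensations.append(lambda: refund(source, amount))
        credit(target, amount)
        issue_receipt(transaction_id)
    except Exception as exc:
        # Undo the completed steps in reverse order, then surface the failure.
        for compensate in reversed(compensations):
            compensate()
        raise SagaError(f"transfer {transaction_id} was rolled back") from exc

transfer_saga("tx-001", "account-a", "account-b", 100.0)
```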

The implementation reference addresses the following challenges and concerns from Contoso Bank:

Developer experience: A solution that allows developers to focus only on the business logic of the Saga participants and simplifies the implementation of stateful workflows on the Saga orchestrator. The proposed solution leverages the Azure Functions programming model, reducing the overhead of state management, checkpointing (the mechanism that updates the offset of a messaging partition when a consumer service processes a message) and restarts in case of failures.

Resiliency: A solution capable of handling a set of potential transient failures (e.g. operation retries on databases and message streaming platforms, timeout handling). The proposed solution applies a set of design patterns (e.g. Retry and Circuit Breaker) on operations with Event Hubs and Cosmos DB, as well as timeout handling on the production of commands and events.
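
As a rough illustration of the retry idea only (this is not how the sample wires it up), a transient-failure-prone call can be wrapped with exponential backoff and jitter:

```python
import random
import time

def call_with_retry(operation, max_attempts=5, base_delay=0.5):
    """Retry an operation that may fail transiently, backing off exponentially."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TimeoutError:
            if attempt == max_attempts:
                raise
            # Exponential backoff with a little random jitter between attempts.
            time.sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1))

# Stand-in for a call to Event Hubs or Cosmos DB that occasionally times out.
def flaky_send():
    if random.random() < 0.5:
        raise TimeoutError("transient timeout")
    return "sent"

print(call_with_retry(flaky_send))
```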

Idempotency: A solution where each Saga participant can execute multiple times and provide the same result to reduce side effects, as well as to ensure data consistency. The proposed solution relies on validations on Cosmos DB for idempotency, making sure there is no duplication on the transaction state and no duplication on the creation of events.

Observability: A solution that is capable of monitoring and tracking the Saga workflow states per transaction. The proposed solution leverages Cosmos DB collections that allow the workflow to be tracked with a single query.
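
To make the last two points more concrete, here is a hedged sketch (again, not the sample's code) using the Python SDK. It assumes a container partitioned on a hypothetical transactionId property; because the item id is unique within a logical partition, re-processing the same message simply hits a 409 conflict, and the whole workflow of one transaction can be read back with a single query:

```python
from azure.cosmos import CosmosClient, exceptions

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("saga").get_container_client("transactions")

def record_step(transaction_id: str, step: str) -> None:
    """Record a Saga step exactly once; a replayed message becomes a no-op."""
    try:
        container.create_item({"id": f"{transaction_id}-{step}",
                               "transactionId": transaction_id,
                               "step": step})
    except exceptions.CosmosResourceExistsError:
        pass  # already recorded by a previous execution -> idempotent

def get_workflow(transaction_id: str):
    """Track the workflow states of one transaction with a single query."""
    return list(container.query_items(
        query="SELECT c.step FROM c WHERE c.transactionId = @tx",
        parameters=[{"name": "@tx", "value": transaction_id}],
        partition_key=transaction_id))
```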

Architecture

The repository walks through the core components of the solution, its workflows, and the design decisions:

Source Code: https://github.com/Azure-Samples/saga-orchestration-serverless

I hope this will be helpful !!!

How can you save with serverless and autoscale provisioned throughput in Cosmos DB?

Autoscale is a great fit for any situation where you need guaranteed throughput and performance, and your traffic isn’t predictable enough to scale your throughput manually.

With autoscale, you pay only for the highest throughput (RU/s) you scale to within each hour. The image below illustrates this.
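
To make the billing rule concrete, here is a tiny back-of-the-envelope sketch: with autoscale, each hour is billed at the highest RU/s the container scaled to during that hour. The hourly peaks and the price per 100 RU/s below are placeholders, not actual Azure rates:

```python
# Hypothetical hourly peak RU/s for one day (24 values) -- replace with your own metrics.
hourly_peak_rus = [400] * 8 + [4000] * 3 + [1500] * 5 + [400] * 8

PRICE_PER_100_RUS_PER_HOUR = 0.012  # placeholder rate, not an actual Azure price

# Autoscale bills each hour at the highest RU/s reached during that hour.
daily_cost = sum(peak / 100 * PRICE_PER_100_RUS_PER_HOUR for peak in hourly_peak_rus)
print(f"Estimated autoscale cost for the day: ${daily_cost:.2f}")
```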

But what if your workload doesn’t require sustained throughput?

In some scenarios, you may expect your Azure Cosmos DB database to sit idle most of the time, only processing requests occasionally. This is typically the case when you get started with Azure Cosmos DB, build a prototype, or even run small, non-critical applications.

To best serve these kinds of use cases, the Azure Cosmos DB team has announced Azure Cosmos DB serverless, a purely consumption-based offer that will launch in public preview in the next couple of months and will be available for all Azure Cosmos DB APIs. With serverless, you will only pay for:

  • RU consumed by your database operations
  • Storage consumed by your data

Because serverless is a true pay-per-request billing model, it will minimise cost for everyone who wants to start using Azure Cosmos DB or run small applications with light traffic.

Watch this video to better understand how autoscale provisioned throughput and serverless make Azure Cosmos DB a cost-effective solution for any kind of workload:

I hope this helps !!!

Azure Cosmos DB Partitioning

Azure Cosmos DB containers can provide virtually unlimited storage and throughput. They can scale from a few requests per second into the millions, and from a few gigabytes of data to several petabytes.

This article explains the relationship between logical and physical partitions. It also discusses best practices for partitioning and gives an overview of how horizontal scaling works in Azure Cosmos DB.

Let us start by understanding the two partitioning concepts: physical and logical partitions.

Physical partitions

An Azure Cosmos container is scaled by distributing data and throughput across physical partitions. Internally, one or more logical partitions are mapped to a single physical partition. Most small Cosmos containers have many logical partitions but only require a single physical partition. Unlike logical partitions, physical partitions are an internal implementation of the system and are entirely managed by Azure Cosmos DB.

Logical Partitions

A logical partition consists of a set of items that have the same partition key. For example, in a container that contains data about food nutrition, all items contain a foodGroup property. You can use foodGroup as the partition key for the container. Groups of items that have specific values for foodGroup, such as Beef Products, Baked Products, and Sausages and Luncheon Meats, form distinct logical partitions. You don’t have to worry about deleting a logical partition when the underlying data is deleted.
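
For instance, with the Python SDK the partition key path is declared when the container is created. A minimal sketch using the foodGroup example (account, database, and container names are placeholders):

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists(id="nutrition")

# Every item's foodGroup value determines the logical partition it lands in.
container = database.create_container_if_not_exists(
    id="foods",
    partition_key=PartitionKey(path="/foodGroup"),
    offer_throughput=400,
)

container.upsert_item({"id": "beef-1", "foodGroup": "Beef Products", "name": "Ground beef"})
```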

For example, in the following diagram, a single container has three physical partition sets, each of which stores the data for one or more partition keys (in this example, LAX, AMS, and MEL). Each of the LAX, AMS, and MEL partition keys can’t grow beyond the maximum logical partition limit of 20 GB.

Many partition keys can be co-located on a single physical partition set and are automatically redistributed by the service as needed to accommodate growth in traffic, storage, or both. Because of this, you don’t need to worry about having too many partition key values. In fact, having more partition key values is generally preferred to fewer partition key values. This is because a single partition key value will never span multiple physical partition sets.

Choosing a good partition key

Choosing a good partition key is a critical design decision, as it’s really the only factor that can limit horizontal scalability.

To scale effectively with Azure Cosmos DB, you should choose a partition key such that:

  • The storage distribution is even across all the keys.
  • The volume distribution of requests at a given point in time is even across all the keys.
  • Queries that are invoked with high concurrency can be efficiently routed by including the partition key in the filter predicate.

In general, a partition key with higher cardinality is preferred because it typically yields better distribution and scalability.
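
To illustrate the routing point, here is a hedged sketch with the Python SDK, reusing the nutrition container from the earlier snippet: a query that includes the partition key in its filter is served by a single logical partition, while one that doesn't has to fan out across partitions:

```python
from azure.cosmos import CosmosClient

container = (CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
             .get_database_client("nutrition").get_container_client("foods"))

# In-partition query: the partition key is in the filter, so it is routed to one partition.
beef_items = list(container.query_items(
    query="SELECT * FROM c WHERE c.foodGroup = @fg",
    parameters=[{"name": "@fg", "value": "Beef Products"}],
    partition_key="Beef Products"))

# Cross-partition query: no partition key in the filter, so it fans out to every partition.
low_calorie = list(container.query_items(
    query="SELECT * FROM c WHERE c.calories < 100",
    enable_cross_partition_query=True))
```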

Unique partition keys

In selecting a partition key, you may want to consider whether to use unique keys to add a layer of data integrity to your database. By creating a unique key policy when a container is created, you ensure the uniqueness of one or more values per partition key.

When a container is created with a unique key policy, it prevents the creation of any new or updated items with values that duplicate values specified by the unique key constraint.

For example, in building a social app, you could make the user’s email address a unique key, thereby ensuring that each record has a unique email address and no new records can be created with duplicate email addresses (within each logical partition).
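
A hedged sketch of what that could look like with the Python SDK: the unique key policy is supplied at container-creation time, and uniqueness is enforced within each logical partition (here a hypothetical companyId partition key):

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists(id="social")

# No two items within the same companyId partition may share the same email value.
container = database.create_container_if_not_exists(
    id="users",
    partition_key=PartitionKey(path="/companyId"),
    unique_key_policy={"uniqueKeys": [{"paths": ["/email"]}]},
)

container.create_item({"id": "1", "companyId": "contoso", "email": "alice@contoso.com"})
# A second item with the same email in the same partition fails with a 409 conflict.
```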

Synthetic partition keys

It’s considered a best practice to have a partition key with many distinct values, so that your data and workload are distributed evenly across the items associated with those partition key values.

The benefit of this is that you avoid creating a single hot partition key, i.e., a partition key that takes all the workload.

If such a property doesn’t exist in your data, you can construct a synthetic partition key in several ways, such as by concatenating multiple properties of an item, appending a random suffix at the end of a partition key value, or by using a pre-calculated suffix based on something you want to query.
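
Here is a small illustrative sketch of those three approaches (plain Python, no SDK required; the property names are made up). The trade-off to keep in mind: a random suffix spreads writes nicely but forces cross-partition reads, whereas a pre-calculated suffix can be recomputed at query time so reads can still target a single partition:

```python
import hashlib
import random

order = {"tenantId": "contoso", "orderDate": "2024-05-01", "customerId": "c42"}

# 1. Concatenate multiple properties into one synthetic key.
pk_concat = f"{order['tenantId']}-{order['orderDate']}"        # "contoso-2024-05-01"

# 2. Append a random suffix to spread writes across several logical partitions.
pk_random = f"{order['orderDate']}-{random.randint(0, 9)}"     # e.g. "2024-05-01-7"

# 3. Use a pre-calculated suffix derived from a value you can recompute at query time.
suffix = int(hashlib.md5(order["customerId"].encode()).hexdigest(), 16) % 10
pk_precalc = f"{order['orderDate']}-{suffix}"                  # deterministic per customer

print(pk_concat, pk_random, pk_precalc)
```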

Azure Cosmos DB – Resource Planning & Scaling Made Easy

Understanding and Optimizing Throughput in Azure Cosmos DB

We all know that scaling an application requires a lot of analysis of the resources it runs on. In this article, we are going to focus on the database part only.

To understand how easy it is to plan resources in Azure Cosmos DB, we first need to understand how we plan and determine resources for an on-premises database server.

On-premises database scaling

For example, suppose your on-premises database is currently able to serve 1,000 requests per second, and you want to scale it to serve 10,000 requests per second.

As an application developer or architect, you need to find out how much more RAM or CPU you need to add to your current database machine to achieve this performance.

You might do some stress testing and find that RAM is the bottleneck, but that doesn’t necessarily tell you how much additional RAM translates into how many more requests per second. As you add RAM, CPU might become the bottleneck.

As you add more processor cores, I/O might become the new bottleneck. This approach makes scaling very difficult, and that’s for a single, monolithic server. Imagine doing this for a distributed database. It’s painful, right?

Azure Cosmos DB scaling

Azure Cosmos DB provides a predictable, deterministic RU charge for each operation. So if you create a document today and it costs 5 RUs, you can rest assured that the same request will cost you 5 RUs tomorrow and the day after, inclusive of all background processes. This lets you forecast the required resources with some basic “mental maths,” using one simple dimension.

In the previous scenario, where you want to scale from 1,000 operations per second to 10,000, you’ll need 10 times the number of RUs. So if it takes 5,000 RUs to support 1,000 writes per second, you can rest assured that you can support 10,000 writes per second with 50,000 RUs.

Just provision the RUs that you’ll want, and Azure Cosmos DB will set aside the necessary system resources for you, abstracting all the complexity in terms of CPU, memory, and I/O. It’s really that simple.
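
You can check the per-operation RU charge yourself: the service returns it in the x-ms-request-charge response header, which the Python SDK surfaces through the client's last response headers. A minimal sketch (account, database, and container names are placeholders):

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("mydb").get_container_client("orders")

container.create_item({"id": "order-1", "customerId": "c42", "total": 18.50})

# The RU cost of the last operation is reported in the x-ms-request-charge header.
charge = container.client_connection.last_response_headers["x-ms-request-charge"]
print(f"this write consumed {charge} RUs")
```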

Let us also understand what an RU is and how we can determine how many we need when we want to scale our application up or down.

Request units and provisioned throughput

Request units per second (RU/s), often represented in the plural form RUs, are the “throughput currency” of Azure Cosmos DB.

The number of RUs for an operation is deterministic. Azure Cosmos DB supports various APIs that have different operations, ranging from simple reads and writes to complex graph queries. Because not all requests are equal, requests are assigned a normalised quantity of RUs, based on the amount of computation required to serve the request.

RUs let you scale your app’s throughput with one simple dimension, which is much easier than separately managing CPU, memory, and IOPS. You can dynamically dial RUs up and down by using the Azure portal or programmatically, enabling you to avoid paying for spare resources that you don’t need.

For example, if your database traffic is heavy from 11 AM to 2 PM, you can scale up your RUs for those hours and then scale back down for the remaining hours when database traffic is light.
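
For example, provisioned throughput can be read and changed programmatically. A hedged sketch with the Python SDK (in practice you would trigger this from a scheduled job, and autoscale can handle the same pattern automatically):

```python
from azure.cosmos import CosmosClient

container = (CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
             .get_database_client("mydb").get_container_client("orders"))

print("current RU/s:", container.get_throughput().offer_throughput)

# Scale up ahead of the 11 AM - 2 PM peak, then back down when traffic is light.
container.replace_throughput(4000)   # peak hours
# ... later ...
container.replace_throughput(400)    # off-peak hours
```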

Capacity calculator tool

The Azure team has also created a very nice capacity calculator tool that can help us determine the RUs and cost for scaling a database as per our needs. Please play around with it; it can be a very useful tool for estimating the cost of scaling Cosmos DB.

I hope this helps you make the right decision when you want to scale your Cosmos database to achieve a certain level of performance.

As developers and architects, we also need to change our mindset when choosing database tools for our applications, favouring ones that can scale up or down easily.