Apache Cassandra® is used in cloud-native applications that require massive resilience and scalability. With multiple nodes replicating data all the time, we need advanced knowledge and tooling to understand the health of these systems.

A lot of time can go into investigating and exploring solutions to ensure operational stability. But wait: deploying a scalable, elastic, and self-healing data plane in Kubernetes should be easy…shouldn’t it?

Thanks to a great community collaboration called K8ssandra, it has become exactly that—easy. K8ssandra is a production-ready platform for running Cassandra on Kubernetes, and provides everything you need in an easy-to-deploy package, which also includes metrics, data anti-entropy services, and backup tooling.

The rise of microservices

Over the last two decades, a major shift in how applications are built radically transformed how companies think about and manage their data: big, monolithic applications started being broken up into microservices.

Microservice architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new apps. Along the way, highly scalable NoSQL databases like Cassandra became popular.

Figure 1. Highly scalable cloud application architecture.

Cloud application architecture consists of three levels: applications, data, and infrastructure. Microservices are an architectural approach used at all three levels to create cloud applications composed of small independent services that communicate over APIs.  

Now, companies are building microservice infrastructure tiers running in large data centers that scale up easily using infrastructure like Kubernetes. Web and mobile apps are built on top of these microservice tiers using databases like Cassandra and data gateways like Stargate.

Microservices and containers are widely used to automate application deployment and operations. Kubernetes was created to manage all of those containers. Every major cloud provider now offers a hosted Kubernetes environment, and you can also run it self-hosted, for example on VMware. You can host Kubernetes anywhere: in your own data center, in the cloud, or both.

In this post, we’ll show you how to deploy a PetClinic application on Cassandra with Kubernetes using K8ssandra. K8ssandra makes it straightforward to operate Cassandra, and it also simplifies the process of building applications on top of it.

Why Apache Cassandra?

Cassandra has been around for over a decade and is one of the most scalable NoSQL databases, which makes it a natural companion for a scalable application architecture.

A Cassandra cluster contains multiple nodes. Each node typically handles two to four terabytes of data and many thousands of transactions per second, although the actual data volume and throughput you get depend on your particular use case.

The nodes in a Cassandra cluster communicate with each other in the background using a protocol called “gossip”. Gossip allows the cluster to self-organize, self-heal, and route around an underperforming node or a network partition.

Figure 2. Apache Cassandra scales linearly.

To scale up on Cassandra, you can simply add more nodes, which gives you more storage and throughput. The nodes are organized in a ring-like structure in a Cassandra cluster, allowing linear scalability. Figure 2 illustrates how Cassandra scales linearly compared to other databases. This allows you to scale your entire application seamlessly by scaling microservices and Cassandra in parallel.

Cassandra distributes your data across all the nodes in your cluster, storing multiple copies of your data for high availability. It uses the partition key that you define in your database schema to determine which nodes should own each row of data. To see an illustration of how this works, this video explains data replication.
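As a quick illustration, here is a minimal CQL sketch; the keyspace and table names are hypothetical, and the table assumes a petclinic keyspace already exists. The partition key (owner_id) is what Cassandra hashes to decide which nodes own each row:

    cqlsh <<'CQL'
    -- Hypothetical table for the PetClinic example; assumes a 'petclinic' keyspace exists.
    -- owner_id is the partition key: rows with the same owner_id live on the same replicas.
    -- pet_id is a clustering column that orders rows within a partition.
    CREATE TABLE IF NOT EXISTS petclinic.pets_by_owner (
        owner_id uuid,
        pet_id   uuid,
        name     text,
        PRIMARY KEY ((owner_id), pet_id)
    );
    CQL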

Cassandra has multiple mechanisms built in to keep your data reliable and resilient against node failures. When a node goes down for a short period of time, the other nodes can store a “hint” for each write it misses. Once the node comes back online, they replay those hinted writes to it. Hints are not stored forever, though; by default they are kept for about three hours.

Cassandra’s rack awareness ensures that replicas of the same data are not concentrated in a single rack; instead, replicas are spread across different racks so the data survives if one rack goes down. For example, if you set a replication factor of three, Cassandra uses its knowledge of how nodes are assigned to racks to place one copy of the data on a node in rack one, one in rack two, and one in rack three.
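In practice, the replication factor is set per keyspace when you create it. Here is a hedged sketch (the keyspace and datacenter names are made up); NetworkTopologyStrategy is the replication strategy that takes datacenter and rack placement into account:

    cqlsh <<'CQL'
    -- Keep three replicas of every row in datacenter 'dc1'; with three racks defined,
    -- Cassandra will place one replica in each rack.
    CREATE KEYSPACE IF NOT EXISTS petclinic
        WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
    CQL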


Figure 3. Hybrid and multi-cloud.

Cassandra is not limited to a single cloud region: you can deploy it in hybrid and multi-cloud environments. You can join nodes running in AWS, Azure, or Google Cloud with nodes in your on-premises data center, replicating data across all of those locations instead of being tied to a single provider. This makes Cassandra a great fit for flexible cloud deployments.

Two of the earliest adopters of Cassandra are Netflix and Apple, who have been deploying Cassandra in production and contributing to the community for over a decade. In 2019, Apple was running 160,000 Cassandra instances across thousands of clusters, with more than 100 petabytes of data stored.

How does Kubernetes fit in? 

Kubernetes is an open-source container orchestration tool for automating the deployment, scaling, and management of containerized applications. You can read more about running Cassandra on Kubernetes in O’Reilly’s Cassandra: The Definitive Guide.

When you deploy Cassandra in containers, each essentially a very thin layer on top of Linux, Kubernetes helps you manage those containers easily and efficiently. The key differentiating features of Kubernetes include:

  • Storage orchestration
  • Batch execution
  • Horizontal scaling
  • Self-healing
  • Automatic bin packing
  • Secret and configuration management
  • Automated rollouts and rollbacks
  • Service discovery and load balancing

Kubernetes infrastructure

Figure 4. The infrastructure of a Kubernetes control plane node [left] and worker node [right].

A Kubernetes cluster is a set of nodes that run containerized applications, which allows you to develop, move, and manage applications easily. With Kubernetes clusters, you can run containers across multiple machines and environments: virtual, physical, cloud-based, and on premises.

Within a Kubernetes cluster, there are Control Plane nodes and Worker nodes. The Control Plane components include the scheduler, which assigns Pods to worker nodes, and an API server that allows interaction with the Control Plane. These elements are shown in Figures 5 and 6.


Figure 5. Kubernetes control plane.

A Kubernetes Control Plane includes: 

  • K8s API Server: All interactions with the Control Plane go through this REST API. Containers running inside Pods and command-line tools alike communicate with the cluster through the API server (see the kubectl sketch below).
  • Controller Manager: Runs the controllers that drive the cluster toward its desired state, with a pluggable architecture for adding more controllers. Later on, we’ll see how K8ssandra’s operators fit into this model.
  • Scheduler: The scheduler distributes the work of the different containers onto the nodes in a Kubernetes cluster.
  • etcd: A consistent key-value store that serves as the backing store for everything deployed within your Kubernetes cluster.
Figure 6. Kubernetes worker node.
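For example, every kubectl command is simply a client of that API server. The commands below are illustrative and assume a working kubeconfig:

    # kubectl sends REST requests to the API server configured in your kubeconfig.
    kubectl get nodes                  # list control plane and worker nodes
    kubectl get pods --all-namespaces  # list Pods the scheduler has placed
    # The same kind of call, issued directly against the REST API:
    kubectl get --raw /api/v1/namespaces/default/pods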

Deploying a Cassandra cluster to Kubernetes used to be a time-consuming and lengthy process. You had to set up networking, storage, and firewalls. Then, you had to create multiple Cassandra Pods, configure backups and repairs, connect your applications, and work through many more steps.

But with K8ssandra, you can simply use some Helm charts and the Helm package manager to deploy Cassandra to Kubernetes.

Introducing K8ssandra

The K8ssandra project gives you an open-source, fully working, scalable database along with administration tools and easy data access. K8ssandra is a production-ready platform for running Cassandra on Kubernetes. This means that K8ssandra provides not only Cassandra, but also useful tools for the management and use of your Cassandra database, all within the Kubernetes framework.

K8ssandra includes components that address both the developer and operational aspects. On the developer side, there are:

  1. Cassandra: Scalable cloud-native database managed via cass-operator
  2. Stargate: Data gateway providing REST, GraphQL, and Document APIs (see the example request sketched below)
  3. Traefik: Kubernetes ingress for external access

On the operational side, there are: 

  1. Reaper and Medusa: Cassandra utilities for repair and backup/restore
  2. Helm: Packaged and delivered via Helm charts
  3. Prometheus and Grafana: Metrics aggregation and visualization

Watch a detailed explanation of how K8ssandra components work in context.
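To give a feel for the developer side, here is a hedged sketch of calling Stargate’s REST API once K8ssandra is up. The host, ports, and credentials are assumptions; in a default install you would port-forward or go through the Traefik ingress, and read the superuser credentials from a Kubernetes Secret:

    # 1. Request an auth token from Stargate's auth service (port 8081 by default).
    #    The username and password are placeholders for your superuser credentials.
    curl -s -X POST http://localhost:8081/v1/auth \
      -H 'Content-Type: application/json' \
      -d '{"username": "<superuser>", "password": "<password>"}'
    # => {"authToken": "..."}

    # 2. Use the token with the REST data API (port 8082 by default),
    #    for example to list the keyspaces in the cluster.
    curl -s http://localhost:8082/v2/schemas/keyspaces \
      -H "X-Cassandra-Token: <authToken-from-step-1>"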

Setting up K8ssandra is also pretty straightforward. Here are the things you’ll need:

  • Helm installed, plus a Kubernetes cluster to deploy into.
  • The K8ssandra and Traefik Helm chart repositories added (as sketched below).
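Under a default K8ssandra 1.x setup (the repository URLs, namespaces, and release names below reflect defaults at the time of writing and may differ for you), the setup boils down to a few Helm commands, roughly:

    # Add the chart repositories and refresh the local index.
    helm repo add k8ssandra https://helm.k8ssandra.io/stable
    helm repo add traefik https://helm.traefik.io/traefik
    helm repo update

    # Install Traefik (for ingress) and K8ssandra with default values.
    helm install traefik traefik/traefik -n traefik --create-namespace
    helm install k8ssandra k8ssandra/k8ssandra -n k8ssandra --create-namespace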

For a more comprehensive guide, check out this K8ssandra guide.

Kubernetes primitives

Before we dive into the hands-on exercises in the next section, let’s get familiar with Kubernetes terminology. 

  1. Namespaces: Namespaces are a way to organize clusters into virtual sub-clusters. They can be helpful when different teams or projects share a Kubernetes cluster. Any number of namespaces are supported within a cluster, each logically separated from others but with the ability to communicate with each other. Namespaces cannot be nested within each other.
  2. Storage: Different applications have different storage needs, and Kubernetes offers three storage-related abstractions:
  • PersistentVolume: A storage resource provisioned by an administrator or a dynamic provisioner.
  • PersistentVolumeClaim: A user’s request for, and claim to, a persistent volume.
  • StorageClass: Describes the parameters for a class of storage from which PersistentVolumes can be dynamically provisioned.
  3. StatefulSet: A StatefulSet manages a set of Pods with stable identities and persistent storage requested through PersistentVolumeClaims. If a Cassandra node goes down and you need to replace it, the StatefulSet can attach the replacement Pod to the same storage volume where the previous data files were stored.
  4. Custom resources: Kubernetes also allows you to define your own resources on top of the ones it provides. Custom Resources are extensions of the Kubernetes API, and they make Kubernetes more modular. Examples used in K8ssandra include CassandraDatacenter (from cass-operator), CassandraBackup, and CassandraRestore.
  5. Operators: In the Kubernetes world, an operator is considered part of the Control Plane. You can write operators that act on your custom resources. For example, cass-operator acts on Cassandra datacenters. These operators understand how to run your specific infrastructure, such as placing Cassandra nodes on specific worker nodes to achieve a good distribution of data.
  6. Kubernetes Pods: When you deploy Cassandra, each Cassandra node runs in a Pod. Alongside it, K8ssandra deploys the Management API for Apache Cassandra, another open-source project, as a sidecar container.

    It does a number of things, but most importantly for K8ssandra, it provides access to metrics from a Cassandra node via HTTP. This is better than the traditional way of getting metrics from Cassandra nodes through Java Management Extensions (JMX), which was notoriously difficult to understand and use.
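To see how these primitives map onto a running K8ssandra install, you can list them with kubectl; the namespace and datacenter name below ("k8ssandra" and "dc1") are assumptions based on a default install:

    # StatefulSets, Pods, and PersistentVolumeClaims created for the Cassandra nodes.
    kubectl get statefulsets,pods,pvc -n k8ssandra

    # The custom resource that cass-operator watches and reconciles.
    kubectl get cassandradatacenters -n k8ssandra
    kubectl describe cassandradatacenter dc1 -n k8ssandra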

The cass-operator simplifies the deployment of a Cassandra cluster on Kubernetes. It automates the details of deploying and running a Cassandra cluster, such as:

  • Proper token ring initialization, with only one node bootstrapping at a time
  • Seed node management—one per rack, or three per datacenter, whichever is more
  • Server configuration integrated into the CassandraDatacenter CRD
  • Rolling reboot of nodes by changing the CRD
  • Store data in a rack-safe way—one replica per cloud AZ
  • Scale up racks evenly with new nodes
  • Replace dead/unrecoverable nodes
  • Multi DC clusters (limited to one Kubernetes namespace)
  • Scaling up and down simply
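To make this concrete, here is a minimal sketch of the CassandraDatacenter resource that cass-operator consumes. The names, version string, and storage class are assumptions, and in a K8ssandra install the Helm chart renders this resource for you rather than you writing it by hand:

    kubectl apply -f - <<'EOF'
    apiVersion: cassandra.datastax.com/v1beta1
    kind: CassandraDatacenter
    metadata:
      name: dc1
    spec:
      clusterName: demo            # logical Cassandra cluster this datacenter belongs to
      serverType: cassandra
      serverVersion: "4.0.1"       # assumed; pick a version your cass-operator supports
      size: 3                      # total Cassandra nodes, spread across the racks below
      racks:
        - name: rack1
        - name: rack2
        - name: rack3
      storageConfig:
        cassandraDataVolumeClaimSpec:
          storageClassName: standard     # assumed StorageClass
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 5Gi
    EOF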

Hands-on workshop

To get the most out of the demos, we recommend that you follow along with the YouTube workshop and then repeat the steps on your own afterward. You can also chat with us on Discord.

In the YouTube workshop, we give you two options to get started: a local setup or our cloud instance. But since the cloud instances were terminated after the workshop, you’ll need to use your own computer or your own cloud node. Make sure you have a Docker-ready machine with at least 4 cores and 8 GB of RAM.

See how to set up K8ssandra in this video and click on the individual links below to find instructions and code on GitHub:

  1. Setting up Cassandra  
  2. Monitoring Cassandra 
  3. Working with data 
  4. Scaling your Cassandra cluster up and down (see the scaling sketch after this list)
  5. API Access with Stargate 
  6. Running Cassandra repairs 
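As a taste of the scaling exercise, in K8ssandra 1.x you typically change the datacenter size through chart values and let cass-operator bootstrap the new nodes. The release name, datacenter name, and values path below are assumptions tied to that chart version:

    # Grow the (assumed) 'dc1' datacenter to three Cassandra nodes.
    helm upgrade k8ssandra k8ssandra/k8ssandra \
      --reuse-values \
      --set cassandra.datacenters[0].name=dc1 \
      --set cassandra.datacenters[0].size=3

    # Watch cass-operator add the new nodes one at a time.
    kubectl get pods -n k8ssandra -w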

Conclusion

Since we held the YouTube live workshop, the K8ssandra community has released the K8ssandra Operator. The 2.0 release maintains feature parity with K8ssandra v1 but adds capabilities that better support geographically distributed Cassandra deployments.

Once you’re done with the hands-on exercises in this article, you can submit your assignment to our GitHub repo and unlock your “K8ssandra Workshop” achievement.

The K8ssandra community is thriving, with lots of active development happening. Join the K8ssandra community on Twitter or GitHub to hear about the latest releases and upgrades. Visit the K8ssandra blog for more tutorials on deploying applications with Cassandra in Kubernetes.

This post was originally published on DataStax Tech Blog.

Resources

  1. Apache Cassandra
  2. Microservices
  3. Kubernetes
  4. Stargate
  5. Cassandra Datacenter and Racks
  6. K8ssandra Documentation
  7. O’Reilly: Cassandra: The Definitive Guide
  8. YouTube Tutorial: Cloud-Native Workshop: Apache Cassandra meets Kubernetes
  9. GitHub: K8ssandra Workshop
  10. Deploying to Multiple Kubernetes Clusters with the K8ssandra Operator