The release of k8ssandra-operator v1.19.0 brings a major improvement: a new way of deploying Reaper.

In this blog post, we will take a closer look at what the new feature is and how it works.

Deploying Reaper the old way

Any operator striving for healthy and consistent Cassandra clusters needs to ensure the clusters undergo anti-entropy repairs (or just “repairs”) in a regular fashion. The k8ssandra-operator comes with Cassandra Reaper, which handles the scheduling and orchestration of repairs.

Adding Reaper to a k8ssandra cluster is as easy as adding the reaper: {} field to its spec. The k8ssandra-operator picks this up and deploys a Reaper instance alongside the k8ssandra cluster. It also takes care of making Reaper aware of the new cluster, and sets up a repair.
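
For reference, deploying Reaper the old way boils down to a manifest sketch like this one (the cluster name is illustrative and the Cassandra settings are elided, as in the examples below):

apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: test
spec:
  reaper: {}
  cassandra:
    ...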

Why change?

There are a few points one can make about this setup that can be summarized as wasteful:

  • Deploying one Reaper per k8ssandra cluster, and sometimes even one Reaper per data center, is overkill.
  • Similarly, using a database designed for big data volumes to handle Reaper’s few kilobytes is quite unnecessary. Reaper’s Cassandra data model is mostly efficient, but it tends to generate tombstones over time as previous repair runs get purged.
  • As an operator managing a fleet of k8ssandra clusters, you don’t really want to cycle through several Reaper instances.

Centralizing repair management

We reckon that k8ssandra-operator users will have a much better time managing repairs if they can do so from a single place.

So we came up with the idea of having just one Reaper instance into which all the k8ssandra clusters get registered. Making this possible required a number of changes in various places.

Persistent storage for Reaper

A centralized Reaper will not have a database to store its data. This implies that Reaper can only run with the in-memory storage configuration, which is transient. Upon a restart, Reaper would lose all the registered clusters, as well as the configured repairs and their progress. That’s hardly friendly.

So the first thing we did was to add persistence to Reaper’s in-memory storage. With this feature, Reaper will get its in-memory content transparently serialized to disk.

Provisioning the storage in a k8s cluster

When running within K8ssandra, Reaper has so far been a properly stateless service (because its state is stored in Cassandra). For this reason, Reaper has been deployed as a Deployment, which comes with ephemeral storage. This is obviously not suitable for the new Reaper: after a restart, a Deployment’s pods get brand new volumes, so our state would be lost.

That is why we added the possibility of deploying Reaper as a StatefulSet. If Reaper’s spec.storageType is set to local, the k8ssandra-operator will deploy Reaper as a StatefulSet with its own PersistentVolume. The volume needs its own configuration under spec.storageConfig (which is a PersistentVolumeClaimSpec).

Putting it all together, deploying Reaper as a StatefulSet can be done with a manifest like this:

apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: test
spec:
  reaper:
    storageType: local
    storageConfig:
      storageClassName: gp2
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 256Mi
  cassandra:
    ...

Deploying Reaper with the local storage type works both for Reapers deployed together with k8ssandra clusters and for standalone Reapers.

Control Plane Reaper

Since Reapers with local storage no longer need their parent Cassandra cluster, they can now be deployed independently with a manifest similar to:

apiVersion: reaper.k8ssandra.io/v1alpha1
kind: Reaper
metadata:
  name: reaper1
spec:
  storageType: local
  storageConfig:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 256Mi

Applying this manifest will make the k8ssandra-operator create a StatefulSet with one (and always one) pod running Reaper. It’s important to have exactly one pod up at all times because Reaper by itself cannot synchronize with other Reaper instances; it used to rely on Cassandra’s lightweight transactions (LWTs) for that.

A K8ssandraCluster can then use the newly added reaperRef field to reference an existing Reaper:

apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: test
spec:
  reaperRef:
    name: reaper1
  cassandra:
    ...

The k8ssandra-operator will use the reference to discover the existing Reaper. It will then add the cluster to Reaper and set up a repair, just as it did before.

More on the internals

A centralized Reaper might need to access nodes that are in different k8s clusters or contexts. While this can be achieved via JMX, it’s somewhat easier to do using HTTP.

  • Each Cassandra instance in a K8ssandra cluster runs the Management API. This is a Java agent loaded into Cassandra’s JVM that, among other things, exposes over HTTP everything we could normally do via JMX.
  • Reaper has had the ability to communicate with Cassandra through the Management API for a while now. To enable this behavior, add the httpManagement section to Reaper’s spec:

apiVersion: reaper.k8ssandra.io/v1alpha1
kind: Reaper
metadata:
  name: reaper1
spec:
  storageType: local
  storageConfig:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 256Mi
  httpManagement:
    enabled: true

Not using Cassandra as a storage backend allows the k8ssandra-operator to skip creating and configuring things like Reaper’s CQL user and the CQL schema. The only thing we still need to consider is authentication when accessing Reaper’s UI.

The k8ssandra-operator does not create the UI secret itself when deploying a standalone Reaper. Instead, we must provide a reference to an existing secret if we wish Reaper to require authentication:

apiVersion: reaper.k8ssandra.io/v1alpha1
kind: Reaper
metadata:
  name: reaper1
spec:
  storageType: local
  storageConfig:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 256Mi
  httpManagement:
    enabled: true
  uiUserSecretRef:
    name: reaper-ui-secret
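
The referenced secret has to exist beforehand. As a minimal sketch, it could look like the manifest below; the username and password values are placeholders, and we are assuming the username and password key names that Reaper UI secrets conventionally use in K8ssandra:

apiVersion: v1
kind: Secret
metadata:
  name: reaper-ui-secret
type: Opaque
stringData:
  username: reaper-admin  # placeholder value
  password: change-me     # placeholder value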

This is the same secret the operator uses when registering a k8ssandra cluster with Reaper. So together with reaperRef, we need to reference the UI secret in the cluster’s spec:

apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: test
spec:
  reaperRef:
    name: reaper1
  uiUserSecretRef:
    name: reaper-ui-secret
  cassandra:
    ...

Summary

  • k8ssandra-operator now supports deploying a standalone Reaper.
  • Reaper’s state is stored on a PersistentVolume, which is much cheaper than a dedicated Cassandra cluster.
  • A standalone Reaper instance offers a centralized view of repairs across a cluster fleet.

Update now

As usual, we encourage all K8ssandra users to upgrade to v1.19.0 in order to get the latest features and improvements.

Let us know what you think of K8ssandra-operator by joining us on the K8ssandra Discord or K8ssandra Forum today. For exclusive posts on all things data and GenAI, follow DataStax on Medium.