The K8ssandra team has just published the 1.3 release. The really big deal is support for the Apache Cassandra™ 4.0 release, but there’s plenty of other goodness here as well. Let’s unpack this gift-wrapped box!

The easiest way to try Cassandra 4.0?

K8ssandra has been shipping the C* 4.0 release candidate build (RC2) as its default Cassandra version since the 1.2 release. Now that 4.0 has officially reached general availability, it’s time to highlight some of the great new capabilities it brings. Creating a new K8ssandra 1.3 cluster might be the easiest way to try out these new features (a minimal install sketch follows the list below):

  • The biggest usability improvement in the 4.0 release is the introduction of virtual tables, which expose configuration settings and metrics through Cassandra’s CQL interface. This is especially promising for Kubernetes deployments since it helps avoid having to expose the JMX port on your Cassandra pods. See K8ssandra contributor Alex Dejanovski’s blog for a guided tour of virtual tables.
  • K8ssandra takes advantage of C* 4.0’s official Java 11 support, which contributes to many of the release’s performance improvements, alongside other enhancements such as improved messaging and streaming between nodes. Running on Java 11 also gives us the option to use the Z Garbage Collector (ZGC), which significantly improves tail latencies.
  • C* 4.0 also contains some additional advanced features that are not enabled by default, including audit logging, full query logging, and transient replication. These require overriding settings in the cassandra.yaml file that cass-operator and K8ssandra do not yet support. If you’re interested in trying out these advanced features in cass-operator or K8ssandra, please give us your feedback on #981 (audit/query logging), file a new issue, or reach out on the Forum or Discord.
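To try it out, a minimal Helm-based install might look like the sketch below. The values follow the layout of the K8ssandra 1.x chart, but treat the key names as assumptions and verify them against the chart’s values.yaml:

```yaml
# values-c4.yaml -- a minimal sketch for trying Cassandra 4.0 on K8ssandra 1.3.
# Key names follow the 1.x chart layout; verify against the chart's values.yaml.
#
# Install with (assuming the stable chart repo has been added):
#   helm repo add k8ssandra https://helm.k8ssandra.io/stable
#   helm install k8ssandra k8ssandra/k8ssandra -f values-c4.yaml
cassandra:
  version: "4.0.0"      # the GA Cassandra 4.0 release
  datacenters:
    - name: dc1
      size: 3           # a three-node datacenter
```

Once the pods are ready, you can exec into a Cassandra container, open cqlsh, and query the new system_views virtual tables mentioned above, no JMX required.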

Support for private container registries

Many teams work in cloud environments that use private container registries and restrict access to public registries such as Docker Hub. This release adds support for private registries so that every image required for a K8ssandra installation can be pulled from one. For more information, see the primary pull request (#901) and related issues (#420, #839, #840).
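As a rough illustration, an installation pulling from a private registry might override the component images along these lines. The key names and image paths below are assumptions made for illustration; consult the chart’s values.yaml and PR #901 for the real settings:

```yaml
# Illustrative only -- key names and image coordinates are assumptions.
# Assumes a pull secret named "registry-creds" already exists in the namespace.
cassandra:
  image: registry.example.com/k8ssandra/cass-management-api:4.0.0  # private mirror
stargate:
  image: registry.example.com/stargateio/stargate:v1.0             # private mirror
reaper:
  image: registry.example.com/thelastpickle/cassandra-reaper:2.2   # private mirror
imagePullSecrets:
  - name: registry-creds   # credentials for registry.example.com
```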

Backup/restore support for Azure blob storage

The Medusa project has been updated to support backing up to and restoring from Azure blob storage, and we’ve incorporated this into K8ssandra as well (#685). Check out the Azure Backup and Restore documentation page for instructions on using this feature.
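Enabling it in a values file might look roughly like the sketch below. The container and secret names are made up, and the key names should be checked against your chart version, so follow the documentation page above for the authoritative steps:

```yaml
# A sketch of Medusa backed by Azure blob storage -- verify key names against
# your chart version and the Azure Backup and Restore documentation.
medusa:
  enabled: true
  storage: azure_blobs               # select Medusa's Azure backend
  bucketName: k8ssandra-backups      # hypothetical blob container name
  storageSecret: medusa-azure-creds  # hypothetical Secret with Azure credentials
```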

Improved affinity support

In the 1.2 release, we added partial control over the placement of K8ssandra pods by making tolerations configurable for Cassandra, Stargate, and Reaper. Cass-operator already provides Kubernetes node affinity and pod anti-affinity for C* pods, abstracted through its concept of racks (see this blog for an example). The K8ssandra 1.3 release adds the ability to configure affinity for Stargate and Reaper as well.
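For example, a values file might pin Stargate to a dedicated node pool and spread Reaper pods across nodes using standard Kubernetes affinity terms. The nested affinity structure is the stock Kubernetes API; the top-level `stargate.affinity` and `reaper.affinity` keys and the labels are assumptions to be checked against the chart:

```yaml
# Sketch: standard Kubernetes affinity terms for Stargate and Reaper pods.
# The top-level key names and the node/pod labels are assumptions.
stargate:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: workload/tier          # hypothetical node label
                operator: In
                values: ["k8ssandra"]
reaper:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: reaper                  # hypothetical pod label
```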

What’s Next: K8ssandra operator

The team is continuing to work on features and fixes. As you may have noticed if you’re tracking the roadmap, we’ve started working toward a 2.0 release featuring a K8ssandra operator.

Today, K8ssandra is delivered as a series of Helm charts. This approach has served the project well and allowed us to deliver new capabilities quickly. However, as we extend K8ssandra with increasingly rich features, the team has run into the limitations of using Helm alone. As a result, we’re evolving the underlying implementation of K8ssandra itself.

To help deliver the next generation of capabilities, we’re building a new operator to manage the complete K8ssandra deployment lifecycle. We’ll still use Helm to provide a simple install experience, but we’ll layer it on top of the new operator, which gives us a greater level of control and enables features including:

  • Unified Status – K8ssandra will be able to report the consolidated status of the deployment back through a single CRD.
  • Multi-DC Support – K8ssandra currently supports deploying only a single Cassandra datacenter; the new operator will make it easier to remove that limitation. (See this blog post for a workaround in the meantime.)
  • Multi-Cluster Support – Perhaps the largest feature the new operator will unlock is K8ssandra deployments that span multiple Kubernetes clusters. As K8ssandra’s deployment footprint expands, features such as unified status become even more impactful. (A speculative sketch of what such a resource might look like follows this list.)
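To make those goals concrete, here is a purely speculative sketch of what a unified, multi-cluster custom resource could look like. Every field name here is invented to illustrate the design direction; see the issue and design document linked below for the actual proposal:

```yaml
# Speculative illustration only -- not a released API; all field names invented.
apiVersion: k8ssandra.io/v2alpha1    # hypothetical API version
kind: K8ssandraCluster
metadata:
  name: demo
spec:
  cassandra:
    serverVersion: "4.0.0"
    datacenters:
      - name: dc1
        k8sContext: east-cluster     # hypothetical handle to another K8s cluster
        size: 3
      - name: dc2
        k8sContext: west-cluster
        size: 3
# The operator would roll the health of both datacenters up into a single
# status block on this one resource (the unified status described above).
```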

To learn more about the challenges that inspired the next generation of K8ssandra and the designs behind it, check out the discussion on this issue and the initial design in this document. Jeff DiNoto also gave an overview at the Cassandra Kubernetes Special Interest Group (SIG) meeting on July 1, 2021:

We invite you to follow along, give feedback, and contribute to the development of the operator in its new repository!