K8ssandra Quick Start

Install an Apache Cassandra® database in Kubernetes using K8ssandra, kick the tires, and take it for a spin!

Completion time: 10 minutes.

Welcome to K8ssandra! This guide gets you up and running with a single-node Apache Cassandra® cluster on Kubernetes (K8s). If you’re interested in more detailed component walkthroughs, check out the tasks section.

In this quick start, we’ll cover the prerequisites, installing K8ssandra with Helm, verifying the installation, and retrieving the superuser credentials.

Once these basic configuration and verification steps are completed, you can choose more detailed paths for either a developer or a site reliability engineer.


In your local environment, the following tools are required for provisioning a K8ssandra cluster: kubectl, Helm v3+, and Docker (if you run Kubernetes locally).

As K8ssandra deploys on a K8s cluster, one must be available to target for installation. The K8s environment may be a local version running on your development machine, an on-premises self-hosted environment, or a managed cloud offering.

K8ssandra works with the following versions of Kubernetes either standalone or via a cloud provider:

  • 1.16
  • 1.17
  • 1.18
  • 1.19
  • 1.20

To verify your K8s server version:

kubectl version


Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.3", GitCommit:"01849e73f3c86211f05533c2e807736e776fcf29", GitTreeState:"clean", BuildDate:"2021-02-18T12:10:55Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.16", GitCommit:"7a98bb2b7c9112935387825f2fce1b7d40b76236", GitTreeState:"clean", BuildDate:"2021-02-17T11:52:32Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Your K8s server version is the combination of the Major: and Minor: values that follow Server Version:. In the example above, that’s 1.18.
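If you’d rather extract that value mechanically than eyeball the struct dump, a small sed expression does it. Here the example Server Version line from above stands in for live `kubectl version` output:

```shell
# Reduce a kubectl "Server Version" line to major.minor with sed
# (the sample line stands in for live `kubectl version` output)
line='Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.16"}'
echo "$line" | sed -E 's/.*Major:"([0-9]+)".*Minor:"([0-9]+)".*/\1.\2/'
# → 1.18
```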

If you don’t have a K8s cluster available, you can use OpenShift CodeReady Containers, which runs within a VM, or one of the following local versions that run within Docker:

The instructions in this section focus on the Docker container solutions above, but the general instructions should work for other environments as well.

Resource recommendations for local Kubernetes installations

We recommend a machine specification of no less than 16 gigs of RAM and 8 virtual processor cores (4 physical cores). You’ll want to adjust your Docker resource preferences accordingly. For this quick start we’re allocating 4 virtual processors and 8 gigs of RAM to the Docker environment.
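As a sanity check before adjusting Docker’s limits, you can compare your host’s capacity against those targets. This is a sketch that reads Linux’s /proc/meminfo; on macOS, check the Docker Desktop Resources pane instead:

```shell
# Report host CPU count and RAM, then compare against the quick start's
# 4-CPU / 8 GiB Docker allocation (Linux-only memory probe)
cpus=$(getconf _NPROCESSORS_ONLN)
mem_mib=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
echo "host: ${cpus} CPUs, ${mem_mib} MiB RAM"
if [ "$cpus" -ge 4 ] && [ "$mem_mib" -ge 8192 ]; then
  echo "host can back the recommended Docker allocation"
else
  echo "host is below the recommended allocation; reduce sizes or use a remote cluster"
fi
```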

The following Minikube example creates a K8s cluster running K8s version 1.18.16 with 4 virtual processor cores and 8 gigs of RAM:

minikube start --cpus=4 --memory='8128m' --kubernetes-version=v1.18.16


😄  minikube v1.17.1 on Darwin 11.2.1
✨  Automatically selected the docker driver. Other choices: hyperkit, ssh
👍  Starting control plane node k8ssandra in cluster k8ssandra
🔥  Creating docker container (CPUs=4, Memory=8128MB) ...
🐳  Preparing Kubernetes v1.18.16 on Docker 20.10.2 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Verify your Kubernetes environment

To verify your Kubernetes environment:

  1. Verify that your K8s instance is up and running in the READY status:

    kubectl get nodes


    NAME        STATUS   ROLES    AGE   VERSION
    k8ssandra   Ready    master   21m   v1.18.16

Validate the available Kubernetes StorageClasses

Your K8s instance must support a storage class with a VOLUMEBINDINGMODE of WaitForFirstConsumer.

To list the available K8s storage classes for your K8s instance:

kubectl get storageclasses


NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  2m25s

If you don’t have a storage class with a VOLUMEBINDINGMODE of WaitForFirstConsumer as in the Minikube example above, you can install the Rancher Local Path Provisioner:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml


namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created

Rechecking the available storage classes, you should see that a new local-path storage class is available with the required VOLUMEBINDINGMODE of WaitForFirstConsumer:

kubectl get storageclasses


NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path           rancher.io/local-path      Delete          WaitForFirstConsumer   false                  3s
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate              false                  39s
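The binding-mode check can be scripted as well. This sketch greps a saved `kubectl get storageclasses` listing for the required mode; in practice you’d pipe the live command output straight into grep (the sample file here simply mirrors the listing above):

```shell
# Fail loudly if no storage class offers WaitForFirstConsumer binding.
# /tmp/storageclasses.txt mirrors the example listing; pipe live
# `kubectl get storageclasses` output instead in practice.
cat <<'EOF' > /tmp/storageclasses.txt
local-path           rancher.io/local-path      Delete   WaitForFirstConsumer   false   3s
standard (default)   k8s.io/minikube-hostpath   Delete   Immediate              false   39s
EOF
if grep -q 'WaitForFirstConsumer' /tmp/storageclasses.txt; then
  echo "suitable storage class found"
else
  echo "no WaitForFirstConsumer storage class; install a provisioner" >&2
  exit 1
fi
```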

Configure the K8ssandra Helm repository

K8ssandra is delivered via a collection of Helm charts for easy installation, so once you’ve got a suitable K8s environment configured, you’ll need to add the K8ssandra Helm chart repositories.

To add the K8ssandra helm chart repos:

  1. Install Helm v3+ if you haven’t already.

  2. Add the main K8ssandra stable Helm chart repo:

    helm repo add k8ssandra https://helm.k8ssandra.io/stable
  3. If you want to access K8ssandra services from outside the Kubernetes cluster, also add the Traefik Ingress repo:

    helm repo add traefik https://helm.traefik.io/traefik
  4. Finally, update your helm repository listing:

    helm repo update

Install K8ssandra

The K8ssandra helm charts make installation a snap. You can override chart configurations during installation if you’re an advanced user, or make changes after a default installation using helm upgrade.

K8ssandra can install the following versions of Apache Cassandra:

  • 3.11.7
  • 3.11.8
  • 3.11.9
  • 3.11.10
  • 4.0-beta4

To install a single node K8ssandra cluster:

  1. Copy the following YAML to a file named k8ssandra.yaml:

      cassandra:
        version: "3.11.10"
        cassandraLibDirVolume:
          storageClass: local-path
          size: 5Gi
        allowMultipleNodesPerWorker: true
        heap:
          size: 1G
          newGenSize: 1G
        resources:
          requests:
            cpu: 1000m
            memory: 2Gi
          limits:
            cpu: 1000m
            memory: 2Gi
        datacenters:
        - name: dc1
          size: 1
          racks:
          - name: default
      kube-prometheus-stack:
        grafana:
          adminUser: admin
          adminPassword: admin123
      stargate:
        enabled: true
        replicas: 1
        heapMB: 256
        cpuReqMillicores: 200
        cpuLimMillicores: 1000

    That configuration file creates a K8ssandra cluster with a datacenter, dc1, containing a single Cassandra node (size: 1) running Cassandra version 3.11.10, with the following specifications:

    • 1 GB of heap
    • 2 GB of RAM for the container
    • 1 CPU core
    • 5 GB of storage
    • 1 Stargate node with
      • 1 CPU core
      • 256 MB of heap
  2. Use helm install to install K8ssandra, pointing to the example configuration file using the -f flag:

    helm install -f k8ssandra.yaml k8ssandra k8ssandra/k8ssandra


    NAME: k8ssandra
    LAST DEPLOYED: Thu Feb 18 10:05:44 2021
    NAMESPACE: default
    STATUS: deployed

Verify your K8ssandra installation

Depending upon your K8s configuration, initialization of your K8ssandra installation can take a few minutes. To check the status of your K8ssandra deployment, use the kubectl get pods command:

kubectl get pods


NAME                                                  READY   STATUS      RESTARTS   AGE
k8ssandra-cass-operator-6666588dc5-s4xgc              1/1     Running     0          6m59s
k8ssandra-dc1-default-sts-0                           2/2     Running     0          6m27s
k8ssandra-dc1-stargate-6f7f5d6fd6-2dz8f               1/1     Running     0          7m
k8ssandra-grafana-6c4f6577d8-469qx                    2/2     Running     0          6m59s
k8ssandra-kube-prometheus-operator-5556885bd6-l5pxz   1/1     Running     0          6m59s
k8ssandra-reaper-k8ssandra-5b6cc959b7-wt22j           1/1     Running     0          3m20s
k8ssandra-reaper-k8ssandra-schema-dnrpk               0/1     Completed   0          3m35s
k8ssandra-reaper-operator-cc46fd5f4-lzq96             1/1     Running     0          7m
prometheus-k8ssandra-kube-prometheus-prometheus-0     2/2     Running     1          6m32s

The pod names in the example above include the identifier k8ssandra, the release name that was specified when the cluster was installed with Helm. If you choose a different cluster name during installation, your pod names will differ.

The Cassandra node pod in the listing above is k8ssandra-dc1-default-sts-0, which we’ll use throughout the following sections.
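That name follows the pattern `<clusterName>-<datacenter>-<rack>-sts-<ordinal>` (a pattern inferred from the example listing, where the cluster name is the Helm release name k8ssandra), so you can predict which pod to target under a different release name:

```shell
# Compose a Cassandra pod name from its parts; the pattern is inferred
# from the example listing, so adjust the values for your own cluster
cluster=k8ssandra
dc=dc1
rack=default
ordinal=0
echo "${cluster}-${dc}-${rack}-sts-${ordinal}"
# → k8ssandra-dc1-default-sts-0
```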

Verify the following:

  • The K8ssandra pod running Cassandra, k8ssandra-dc1-default-sts-0 in the example above, should show 2/2 as Ready.
  • The Stargate pod, k8ssandra-dc1-stargate-6f7f5d6fd6-2dz8f in the example above, should show 1/1 as Ready.

Once all the pods are in the Running or Completed state, and none remain Pending, you can check the health of your K8ssandra cluster.
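That settling rule can be checked mechanically with an awk filter over a saved `kubectl get pods` listing; the heredoc sample below stands in for live output, and any pod outside Running/Completed is reported:

```shell
# Report pods whose STATUS column is neither Running nor Completed.
# The heredoc sample stands in for live `kubectl get pods` output.
cat <<'EOF' > /tmp/pods.txt
NAME                          READY   STATUS      RESTARTS   AGE
k8ssandra-dc1-default-sts-0   2/2     Running     0          6m27s
k8ssandra-reaper-schema-job   0/1     Completed   0          3m35s
EOF
awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print "not settled:", $1; bad = 1 }
     END { if (!bad) print "all pods settled" }' /tmp/pods.txt
# → all pods settled
```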

To check the health of your K8ssandra cluster:

  1. Verify the name of the Cassandra datacenter:

    kubectl get cassandradatacenters


    NAME   AGE
    dc1    51m
  2. Confirm that the Cassandra operator reports the datacenter as Ready:

    kubectl describe cassandradatacenter dc1 | grep "Cassandra Operator Progress:"


       Cassandra Operator Progress:  Ready
  3. Verify the list of available services:

    kubectl get services


    NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                 AGE
    cass-operator-metrics                       ClusterIP     <none>        8383/TCP,8686/TCP                                       47m
    k8ssandra-dc1-all-pods-service              ClusterIP   None             <none>        9042/TCP,8080/TCP,9103/TCP                              47m
    k8ssandra-dc1-service                       ClusterIP   None             <none>        9042/TCP,9142/TCP,8080/TCP,9103/TCP,9160/TCP            47m
    k8ssandra-dc1-stargate-service              ClusterIP    <none>        8080/TCP,8081/TCP,8082/TCP,8084/TCP,8085/TCP,9042/TCP   47m
    k8ssandra-grafana                           ClusterIP    <none>        80/TCP                                                  47m
    k8ssandra-kube-prometheus-operator          ClusterIP     <none>        443/TCP                                                 47m
    k8ssandra-kube-prometheus-prometheus        ClusterIP   <none>        9090/TCP                                                47m
    k8ssandra-reaper-k8ssandra-reaper-service   ClusterIP    <none>        8080/TCP                                                47m
    k8ssandra-seed-service                      ClusterIP   None             <none>        <none>                                                  47m
    kubernetes                                  ClusterIP        <none>        443/TCP                                                 53m
    prometheus-operated                         ClusterIP   None             <none>        9090/TCP                                                47m

    Verify that the following services are present:

    • k8ssandra-dc1-all-pods-service
    • k8ssandra-dc1-service
    • k8ssandra-dc1-stargate-service
    • k8ssandra-seed-service

Retrieve K8ssandra superuser credentials

You’ll need the K8ssandra superuser name and password in order to access Cassandra utilities and do things like generate a Stargate access token.

To retrieve K8ssandra superuser credentials:

  1. Retrieve the K8ssandra superuser name:

    kubectl get secret k8ssandra-superuser -o jsonpath="{.data.username}" | base64 --decode ; echo


  2. Retrieve the K8ssandra superuser password:

    kubectl get secret k8ssandra-superuser -o jsonpath="{.data.password}" | base64 --decode ; echo
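Kubernetes stores Secret values base64-encoded, which is why both commands pipe through base64 --decode. The round trip looks like this, with a stand-in value rather than a live secret:

```shell
# Encode and decode a stand-in credential the same way kubectl secret
# data round-trips through base64 (value is illustrative, not a real secret)
encoded=$(printf 'example-user' | base64)
echo "stored form: $encoded"
printf '%s' "$encoded" | base64 --decode ; echo
```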



Next steps
  • If you’re a developer, and you’d like to get started coding using CQL or Stargate, see the K8ssandra developer quick start.
  • If you’re a site reliability engineer, and you’d like to explore the K8ssandra administration environment including monitoring and maintenance utilities, see the K8ssandra site reliability engineer quick start.