Install an Apache Cassandra® database in Kubernetes using K8ssandra, kick the tires, and take it for a spin!

Completion time: 10 minutes.

Welcome to K8ssandra! This guide gets you up and running with a single-node Apache Cassandra® cluster on Kubernetes (K8s). If you’re interested in more detailed component walkthroughs, check out the tasks in our Docs section.

In this quick start, we’ll cover the following topics:

  • Prerequisites for provisioning a K8ssandra cluster
  • Configuring the K8ssandra Helm repository
  • Installing K8ssandra
  • Verifying your K8ssandra installation
  • Retrieving the K8ssandra superuser credentials

Once these basic configuration and verification steps are completed, you can choose more detailed paths for either a developer or a site reliability engineer.

Prerequisites

In your local environment, the following tools are required for provisioning a K8ssandra cluster:

  • kubectl
  • Helm v3+

As K8ssandra deploys on a K8s cluster, one must be available to target for installation. The K8s environment may be a local version running on your development machine, an on-premises self-hosted environment, or a managed cloud offering.

K8ssandra works with the following versions of Kubernetes either standalone or via a cloud provider:

  • 1.16
  • 1.17
  • 1.18
  • 1.19
  • 1.20

To verify your K8s server version:

kubectl version

Output

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.3", GitCommit:"01849e73f3c86211f05533c2e807736e776fcf29", GitTreeState:"clean", BuildDate:"2021-02-18T12:10:55Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.16", GitCommit:"7a98bb2b7c9112935387825f2fce1b7d40b76236", GitTreeState:"clean", BuildDate:"2021-02-17T11:52:32Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Your K8s server version is the combination of the Major: and Minor: key/value pairs following Server Version:; in the example above, that’s 1.18.
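
If your kubectl supports the --short flag (it is available in the versions listed above, though deprecated in newer kubectl releases), you can print just the version numbers:

kubectl version --short

Output:

Client Version: v1.20.3
Server Version: v1.18.16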

If you don’t have a K8s cluster available, you can use OpenShift CodeReady Containers, which runs within a VM, or one of the following local versions that run within Docker:

  • Kind
  • K3d
  • Minikube

The instructions in this section focus on the Docker container solutions above, but the general steps should work for other environments as well.

Resource recommendations for local Kubernetes installations

We recommend a machine with at least 16 GB of RAM and 8 virtual processor cores (4 physical cores). You’ll want to adjust your Docker resource preferences accordingly. For this quick start, we’re allocating 4 virtual processors and 8 GB of RAM to the Docker environment.

Tip

See the documentation for your particular flavor of Docker for instructions on configuring resource limits.
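
For example, you can confirm what Docker currently has allocated using docker info with a Go template (the NCPU and MemTotal field names below are assumptions based on current Docker releases and may differ in yours):

# Prints the CPU count and total memory (in bytes) available to the Docker engine
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes'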

The following Minikube example creates a K8s cluster running K8s version 1.18.16 with 4 virtual processor cores and 8 GB of RAM:

minikube start --cpus=4 --memory='8128m' --kubernetes-version=1.18.16

Output:

😄  minikube v1.17.1 on Darwin 11.2.1
✨  Automatically selected the docker driver. Other choices: hyperkit, ssh
👍  Starting control plane node k8ssandra in cluster k8ssandra
🔥  Creating docker container (CPUs=4, Memory=8128MB) ...
🐳  Preparing Kubernetes v1.18.16 on Docker 20.10.2 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Verify your Kubernetes environment

To verify your Kubernetes environment:

  1. Verify that your K8s instance is up and running and reports a Ready status:
    kubectl get nodes

Output:
NAME        STATUS   ROLES    AGE   VERSION
k8ssandra   Ready    master   21m   v1.18.16
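
Tip

If you’re scripting this check, kubectl can also block until the node reports Ready instead of polling by hand (an optional convenience, not a required step):

kubectl wait --for=condition=Ready node --all --timeout=120s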

Validate the available Kubernetes StorageClasses

Your K8s instance must support a storage class with a VOLUMEBINDINGMODE of WaitForFirstConsumer.

To list the available K8s storage classes for your K8s instance:

kubectl get storageclasses

Output:

NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  2m25s

If you don’t have a storage class with a VOLUMEBINDINGMODE of WaitForFirstConsumer as in the Minikube example above, you can install the Rancher Local Path Provisioner:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Output:

namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created

Rechecking the available storage classes, you should see that a new local-path storage class is available with the required VOLUMEBINDINGMODE of WaitForFirstConsumer:

kubectl get storageclasses

Output:

NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path           rancher.io/local-path      Delete          WaitForFirstConsumer   false                  3s
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate              false                  39s

Configure the K8ssandra Helm repository

K8ssandra is delivered via a collection of Helm charts for easy installation, so once you’ve got a suitable K8s environment configured, you’ll need to add the K8ssandra Helm chart repositories.

To add the K8ssandra Helm chart repos:

  1. Install Helm v3+ if you haven’t already.
  2. Add the main K8ssandra stable Helm chart repo:
    helm repo add k8ssandra https://helm.k8ssandra.io/stable
  3. If you want to access K8ssandra services from outside of the Kubernetes cluster, also add the Traefik Ingress repo:
    helm repo add traefik https://helm.traefik.io/traefik
  4. Finally, update your helm repository listing:
    helm repo update
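
To confirm that the repositories were added and that the k8ssandra chart is visible, you can optionally run the standard Helm commands below (the chart versions listed will depend on when you run them):

helm repo list
helm search repo k8ssandra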

Tip

Alternatively, you can download the individual charts directly from the project’s releases page.

Install K8ssandra

The K8ssandra Helm charts make installation a snap. Advanced users can override chart configurations during installation as needed, or make changes to a default installation later using helm upgrade.

K8ssandra can install the following versions of Apache Cassandra:

  • 3.11.7
  • 3.11.8
  • 3.11.9
  • 3.11.10
  • 4.0-beta4

Important

K8ssandra comes out of the box with a set of default values tailored to getting up and running quickly. Those defaults are intended to be a great starting point for smaller-scale local development but are not intended for production deployments.

To install a single-node K8ssandra cluster:

  1. Copy the following YAML to a file named k8ssandra.yaml:
cassandra:
  version: "3.11.10"
  cassandraLibDirVolume:
    storageClass: local-path
    size: 5Gi
  allowMultipleNodesPerWorker: true
  heap:
    size: 1G
    newGenSize: 1G
  resources:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 1000m
      memory: 2Gi
  datacenters:
  - name: dc1
    size: 1
    racks:
    - name: default 
kube-prometheus-stack:
  grafana:
    adminUser: admin
    adminPassword: admin123 
stargate:
  enabled: true
  replicas: 1
  heapMB: 256
  cpuReqMillicores: 200
  cpuLimMillicores: 1000

That configuration file creates a K8ssandra cluster with one datacenter, dc1, containing a single Cassandra node (size: 1) running version 3.11.10, with the following specifications:

  • 1 GB of heap
  • 2 GB of RAM for the container
  • 1 CPU core
  • 5 GB of storage
  • 1 Stargate node with
    • 1 CPU core
    • 256 MB of heap

Important

The storageClass: parameter must be a storage class with a VOLUMEBINDINGMODE of WaitForFirstConsumer as described in Validate the available Kubernetes StorageClasses.
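
Tip

Before installing, you can optionally render the chart locally to review the manifests Helm will apply. Both commands below are standard Helm 3 options and create nothing in the cluster:

helm template k8ssandra k8ssandra/k8ssandra -f k8ssandra.yaml
helm install k8ssandra k8ssandra/k8ssandra -f k8ssandra.yaml --dry-run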

2. Use helm install to install K8ssandra, pointing to the example configuration file using the -f flag:

helm install -f k8ssandra.yaml k8ssandra k8ssandra/k8ssandra

Output:

NAME: k8ssandra
LAST DEPLOYED: Thu Feb 18 10:05:44 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
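
If you close your terminal, you can re-display this release information later with helm status, and helm get values shows the overrides you supplied in k8ssandra.yaml:

helm status k8ssandra
helm get values k8ssandra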

Tip

In the example above, the K8ssandra pods will include the cluster name k8ssandra in their names, either as a prefix or inline.

Note

When installing K8ssandra on newer versions of Kubernetes (v1.19+), some warnings may be visible on the command line related to deprecated API usage. This is currently a known issue and will not impact the provisioning of the cluster.

W0128 11:24:54.792095  27657 warnings.go:70] 
apiextensions.k8s.io/v1beta1 CustomResourceDefinition is 
deprecated in v1.16+, unavailable in v1.22+; 
use apiextensions.k8s.io/v1 CustomResourceDefinition

For more information, check out issue #267.

Verify your K8ssandra installation

Depending upon your K8s configuration, initialization of your K8ssandra installation can take a few minutes. To check the status of your K8ssandra deployment, use the kubectl get pods command:

kubectl get pods

Output:

NAME                                                  READY   STATUS      RESTARTS   AGE
k8ssandra-cass-operator-6666588dc5-s4xgc              1/1     Running     0          6m59s
k8ssandra-dc1-default-sts-0                           2/2     Running     0          6m27s
k8ssandra-dc1-stargate-6f7f5d6fd6-2dz8f               1/1     Running     0          7m
k8ssandra-grafana-6c4f6577d8-469qx                    2/2     Running     0          6m59s
k8ssandra-kube-prometheus-operator-5556885bd6-l5pxz   1/1     Running     0          6m59s
k8ssandra-reaper-k8ssandra-5b6cc959b7-wt22j           1/1     Running     0          3m20s
k8ssandra-reaper-k8ssandra-schema-dnrpk               0/1     Completed   0          3m35s
k8ssandra-reaper-operator-cc46fd5f4-lzq96             1/1     Running     0          7m
prometheus-k8ssandra-kube-prometheus-prometheus-0     2/2     Running     1          6m32s

The K8ssandra pods in the example above have the identifier k8ssandra either prefixed or inline, since that’s the name that was specified when the cluster was created using Helm. If you choose a different cluster name during installation, your pod names will be different.

The actual Cassandra node name from the listing above is k8ssandra-dc1-default-sts-0, which we’ll use throughout the following sections.
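
If you chose a different cluster name, you can also find the Cassandra pod by label rather than by name. The label key below is the one cass-operator applies to Cassandra pods; treat it as an assumption if your operator version differs:

kubectl get pods -l cassandra.datastax.com/cluster=k8ssandra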

Verify the following:

  • The K8ssandra pod running Cassandra, k8ssandra-dc1-default-sts-0 in the example above, should show 2/2 as Ready.
  • The Stargate pod, k8ssandra-dc1-stargate-6f7f5d6fd6-2dz8f in the example above, should show 1/1 as Ready.

Important

  • The Stargate pod will not show Ready until at least 4 minutes have elapsed.
  • The pod k8ssandra-reaper-k8ssandra-schema-xxxxx runs once as part of a job and does not persist.
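
Tip

Rather than re-running kubectl get pods by hand while the cluster initializes, you can watch the pods until everything settles (press Ctrl+C to stop watching):

kubectl get pods -w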

Once all the pods are in the Running or Completed state and none remain Pending, you can check the health of your K8ssandra cluster.

To check the health of your K8ssandra cluster:

  1. Verify the name of the Cassandra datacenter:
    kubectl get cassandradatacenters
    Output:
    NAME   AGE
    dc1    51m
  2. Confirm that the Cassandra operator for the datacenter is Ready (a jsonpath alternative is shown after this list):
    kubectl describe CassandraDataCenter dc1 | grep "Cassandra Operator Progress:"
    Output: Cassandra Operator Progress: Ready
  3. Verify the list of available services:
    kubectl get services
    Output:
    NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                 AGE
    cass-operator-metrics                       ClusterIP   10.99.98.218     <none>        8383/TCP,8686/TCP                                       47m
    k8ssandra-dc1-all-pods-service              ClusterIP   None             <none>        9042/TCP,8080/TCP,9103/TCP                              47m
    k8ssandra-dc1-service                       ClusterIP   None             <none>        9042/TCP,9142/TCP,8080/TCP,9103/TCP,9160/TCP            47m
    k8ssandra-dc1-stargate-service              ClusterIP   10.106.70.148    <none>        8080/TCP,8081/TCP,8082/TCP,8084/TCP,8085/TCP,9042/TCP   47m
    k8ssandra-grafana                           ClusterIP   10.96.120.157    <none>        80/TCP                                                  47m
    k8ssandra-kube-prometheus-operator          ClusterIP   10.97.21.175     <none>        443/TCP                                                 47m
    k8ssandra-kube-prometheus-prometheus        ClusterIP   10.111.184.111   <none>        9090/TCP                                                47m
    k8ssandra-reaper-k8ssandra-reaper-service   ClusterIP   10.104.46.103    <none>        8080/TCP                                                47m
    k8ssandra-seed-service                      ClusterIP   None             <none>        <none>                                                  47m
    kubernetes                                  ClusterIP   10.96.0.1        <none>        443/TCP                                                 53m
    prometheus-operated                         ClusterIP   None             <none>        9090/TCP                                                47m
    Verify that the following services are present (the k8ssandra and dc1 portions of the names reflect the cluster and datacenter names used in this example):
    • k8ssandra-dc1-all-pods-service
    • k8ssandra-dc1-service
    • k8ssandra-dc1-stargate-service
    • k8ssandra-seed-service
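
As an alternative to the grep in step 2 above, you can read the same status field directly with jsonpath. The field name cassandraOperatorProgress is what cass-operator exposes on the CassandraDatacenter status at the time of writing; treat it as an assumption if your versions differ:

kubectl get cassandradatacenter dc1 -o jsonpath='{.status.cassandraOperatorProgress}' ; echo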

Retrieve K8ssandra superuser credentials

You’ll need the K8ssandra superuser name and password in order to access Cassandra utilities and do things like generate a Stargate access token.

To retrieve K8ssandra superuser credentials:

  1. Retrieve the K8ssandra superuser name:
    kubectl get secret k8ssandra-superuser -o jsonpath="{.data.username}" | base64 --decode ; echo
    Output: k8ssandra-superuser
  2. Retrieve the K8ssandra superuser password:
    kubectl get secret k8ssandra-superuser -o jsonpath="{.data.password}" | base64 --decode ; echo
    Output: PGo8kROUgAJOa8vhjQrE49Lgruw7s32HCPyVvcfVmmACW8oUhfoO9A

Tip

Save the superuser name and password for use in following sections.
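
To confirm the credentials work, you can optionally open a cqlsh session inside the Cassandra pod. This assumes the example pod name from earlier and that cqlsh is available in the cassandra container, as it is in the default K8ssandra images; replace <password> with the value you retrieved above:

kubectl exec -it k8ssandra-dc1-default-sts-0 -c cassandra -- cqlsh -u k8ssandra-superuser -p <password>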

Next

  • If you’re a developer, and you’d like to get started coding using CQL or Stargate, see the K8ssandra developer quick start.
  • If you’re a site reliability engineer, and you’d like to explore the K8ssandra administration environment including monitoring and maintenance utilities, see the K8ssandra site engineer quick start.