Cluster API on Docker

Ripon Banik
6 min read · Mar 21, 2021

A Getting Started Guide

Overview

Cluster API is a beautiful Kubernetes project that allows provisioning, upgrading, and operating multiple Kubernetes clusters using declarative APIs.

Kind is another beautiful project that builds Kubernetes clusters on Docker.

Both of the above projects are blessings for someone like me who wants to build clusters dynamically for development purposes. But the journey was not smooth when I started using cluster-api v0.3.8: a lot of hacks were needed to make it work.

Although the newer Cluster API release (v0.3.14 at the time of writing) improves things a lot, the quick-start guide does not provide clarity on the dependencies, so I decided to write an article on building it on Docker without spending hours chasing issues.

Concepts

Let’s understand the concepts before getting started. We could also just use kind to create multiple Kubernetes clusters, but that would not provide the functionality that cluster-api offers, nor reflect the real-world scenario where building a Kubernetes cluster is not easy.

Kind allows us to create a Kubernetes cluster, which we then provision as the management cluster by installing the Cluster API components with clusterctl.

We then generate configurations for multiple workload clusters using clusterctl, and build them with the same Kubernetes management tool we use every day: kubectl.
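In short, the end-to-end flow looks like this; it is just a condensed preview of the exact commands covered step by step below -

kind create cluster --config ./kind-cluster-api.yaml --name clusterapi   # build the management cluster
clusterctl init --infrastructure docker                                  # install the Cluster API providers
clusterctl config cluster capi-k18 --flavor development > capi-k18.yaml  # render a workload cluster manifest
kubectl apply -f capi-k18.yaml                                           # let Cluster API build it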

Requirements

This is very important: using different versions of the tools below may not produce a working cluster-api management cluster. I used the following to build my environment (see the version-check snippet after this list).

  1. Ubuntu VM (20.04) with Docker installed.
  2. Kind version 0.10.0
  3. Clusterctl version v0.3.14
  4. Kubectl version v1.20.0
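Once you have completed the installation steps in the next section, you can confirm the whole toolchain in one go -

docker --version
kind --version
kubectl version --client
clusterctl version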

Install and Configure

1. Install kind and verify

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.10.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
kind --version

2. Install kubectl and verify

curl -LO https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl 
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

3. Install clusterctl and verify

curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.14/clusterctl-linux-amd64 -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl
clusterctl version

4. Install and configure the Management Cluster

Use kind to create a Kubernetes cluster. Do not forget to set the following environment variable; otherwise the cluster will be created in Docker’s kind network and will later fail to communicate with the cluster-api managed clusters, which are created in the bridge network.

export KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge
cat > kind-cluster-api.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.18.15@sha256:5c1b980c4d0e0e8e7eb9f36f7df525d079a96169c8a8f20d8bd108c0d0889cc4
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
EOF

Here I have pinned the Kubernetes version to 1.18.15 with its image digest to make sure it works every time. The docker.sock mount gives the Docker infrastructure provider running inside the management cluster access to the host’s Docker daemon, which it needs in order to create the workload cluster containers. Please note that although the getting-started guide mentions v1.18.16 for building the workload cluster, it fails, since that node image is not available.

Now create the cluster by using the config above -

kind create cluster --config ./kind-cluster-api.yaml --name clusterapi

Verify the cluster is ready by running kubectl get nodes. The installation also updates your kubeconfig to use the new cluster as the default context.
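For example, since we named the cluster clusterapi, kind names the context kind-clusterapi -

kubectl config current-context   # should print kind-clusterapi
kubectl get nodes                # the control-plane node should report Ready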

To initialize the cluster as the management cluster, use the command below. It installs the cluster-api core, bootstrap, control plane, and infrastructure (Docker) providers along with their CRDs.

clusterctl init --infrastructure docker

If the above is successful, the provider pods will be running without any issue on the management cluster; check with kubectl get po -A.
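With v0.3.x the providers land in their own namespaces (the names below are from my environment and may vary slightly by version) -

kubectl get po -A
# expect Running pods in namespaces such as:
#   capi-system, capi-kubeadm-bootstrap-system,
#   capi-kubeadm-control-plane-system, capd-system, cert-manager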

5. Create a Workload Cluster

Now the management cluster is ready to build workload clusters. Use a command like the one below to build a Kubernetes v1.18.15 cluster.

clusterctl config cluster capi-k18 --flavor development \
  --kubernetes-version v1.18.15 \
  --control-plane-machine-count=1 \
  --worker-machine-count=3 \
  > capi-k18.yaml
kubectl apply -f capi-k18.yaml

This starts building the workload cluster; it is ready when every component shows True in the READY column of clusterctl describe cluster capi-k18.
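A convenient way to follow the build is to re-run the describe command until everything turns ready -

watch clusterctl describe cluster capi-k18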

At this stage the control plane will be initialized but not ready, since no CNI plugin is installed yet.

Let’s get the workload cluster’s kubeconfig using the command below -

clusterctl get kubeconfig capi-k18 > capi-k18.kubeconfig

Deploy the Calico CNI in the workload cluster using the kubeconfig file above. Any other CNI can also be used.

kubectl --kubeconfig=./capi-k18.kubeconfig \
  apply -f https://docs.projectcalico.org/v3.15/manifests/calico.yaml

Now verify that the control plane, worker nodes, and pods are all ready.

kubectl get kubeadmcontrolplane --all-namespaces
kubectl --kubeconfig=./capi-k18.kubeconfig get nodes
kubectl --insecure-skip-tls-verify --kubeconfig=./capi-k18.kubeconfig get pods -A

I used the insecure-skip-tls-verify option above because without it I get the following error -

Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of “crypto/rsa: verification error” while trying to verify candidate authority certificate “kubernetes”)

I put an alias in my .bashrc to avoid typing it every time -

alias kubectl='kubectl --insecure-skip-tls-verify'

It is also possible to just export the kubeconfig as an environment variable instead of passing it with every kubectl invocation.

export KUBECONFIG=./capi-k18.kubeconfig

6. Troubleshooting

During the installation and configuration process, I encountered issues that I troubleshot by looking at the management cluster pods, like the following -

a) kubectl -n capd-system logs capd-controller-manager-d459f8876-xg9zm manager

b) kubectl -n capd-system describe pod capd-controller-manager-d459f8876-l6rh2

……

If you get an issue like unknown flag: --metrics-addr, it could be due to the use of an old version.

The issue MountVolume.SetUp failed for volume “cert” : secret “capi-webhook-service-cert” not found is due to using an old clusterctl version.

If kubeadm init hangs waiting, it could be because the workload and management clusters are on different Docker networks.
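A quick way to check is to inspect which network each node container is attached to (the container name below is an example from my setup; list yours with docker ps) -

docker ps --format '{{.Names}}'
docker inspect -f '{{range $net, $v := .NetworkSettings.Networks}}{{$net}} {{end}}' clusterapi-control-plane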

You can also intermittently get the error error: You must be logged in to the server (Unauthorized) while running kubectl; just run the command again in that case.

Some other useful commands -

kubectl get cluster --all-namespaces
kubectl get machines

7. Cleanup

To delete the workload cluster, run the command below. Let it complete and do not terminate the process; otherwise cleanup becomes a tedious manual exercise.

kubectl delete cluster capi-k18
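One way to keep an eye on the deletion until the cluster object disappears -

kubectl get cluster capi-k18 -w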

To delete the management cluster, use -

kind delete cluster

Conclusion

This article shows how you can prototype Kubernetes cluster builds on your own laptop, without leaving the Kubernetes API and without provisioning new hardware or resources in the cloud.

Please encourage me with some 👏 if this helped you. If you need any assistance for your business, please reach out via this page.


Ripon Banik

A Cloud and DevSecOps Engineer passionate about simplifying technology and making it consumable.