Kubernetes


About Kubernetes clusters

The User platform supports an easy way to define, deploy, and manage Kubernetes clusters. The clusters are built from regular compute resources (instances); however, unlike with a manual Kubernetes deployment, you do not need to separately create instances, connect them to the network, define their roles, download and install the software, and so on. A fully operational Kubernetes cluster is deployed in a single operation and is ready to run your containerized workloads immediately afterwards.

When defining a cluster, you will need to specify a number of parameters:

  • Cluster name
  • Image to use for master and node creation. Note that the Kubernetes service is designed to work with special images, so the choice of images differs from that offered for regular instance creation
  • Master flavor. The recommended configuration depends on the number of nodes in your cluster. For clusters of up to 5 nodes, at least 1 CPU and 4 GB of RAM is recommended. For 6-10 nodes, choose a configuration with 2 CPUs and 8 GB of RAM. Larger clusters (up to 100 nodes) may require 4 CPUs and 16 GB of RAM on the master.
  • Node flavor. The appropriate node flavor depends on the size and number of the pods you intend to run in the cluster. Since you can also scale your cluster horizontally (by adding more nodes), a large flavor is not always required for a large application; each node just needs to be large enough to run one or several pods. Note, however, that all nodes will be the same size - you cannot later add nodes based on a different flavor to your cluster.
  • Network to connect your cluster members to. The cluster will be publicly available via floating IPs, so make sure you use a network reachable through an internet-connected router
  • Custom registry URL (if not set, your cluster will use the public Docker registry)
  • Number of masters and nodes. For test clusters, you may use a single master, while for production it is recommended to have 2 or more for redundancy. As for the nodes, set the number based on your applications' requirements

Creating Kubernetes cluster

To create a new cluster, expand "Stacks" in the left navigation pane, and then click "Kubernetes". In the window that opens, click the "Create a Kubernetes Stack" button. Enter the parameters as described above and click "Create". Note that, due to the configuration performed during deployment, creating a cluster may take considerably longer than creating a single instance.
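If your installation also exposes the OpenStack Magnum CLI, a comparable cluster can usually be created from the command line as well. This is only a sketch: the CLI availability, the template name k8s-template, and the cluster name my-cluster are assumptions, not part of the portal workflow described above.

$ openstack coe cluster create \
    --cluster-template k8s-template \
    --master-count 2 \
    --node-count 3 \
    my-cluster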

Managing Kubernetes cluster

In the cluster list, open the action menu and click "Cluster details" to view and modify cluster parameters. From there, you can:

  • See the parameters of the cluster (name, IDs, IPs, used flavors, etc.)
  • See the cluster API URL that can be used by management tools and applications
  • Change the number of nodes in the cluster (see the CLI sketch below)

You can also remove a cluster that you no longer need by clicking "Delete" in the action menu.
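If the OpenStack Magnum CLI is available in your installation, the number of nodes might also be changed from the command line. This is a sketch; the CLI availability and the cluster name my-cluster are assumptions:

$ openstack coe cluster resize my-cluster 5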

Accessing Kubernetes cluster with kubectl

kubectl is the standard Kubernetes management tool. You can install it using a variety of methods, such as yum or apt-get on popular Linux distributions, Snap on Ubuntu, or Homebrew on macOS. For more information, refer to https://kubernetes.io/docs.

Once kubectl is installed, you need to configure it to access your cluster. To do so, first go to the cluster overview ("Access" tab) and download your kubeconfig. Next, make kubectl use it. There are two ways to do that: you can place the file at the default path where kubectl looks for its configuration, ~/.kube/config. Alternatively, you can override the default by setting the KUBECONFIG environment variable, for example:

$ export KUBECONFIG=/home/user/k8s/cluster1/config
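To use the default path instead, copy the downloaded file there (the source path below is just an example):

$ mkdir -p ~/.kube
$ cp /home/user/k8s/cluster1/config ~/.kube/config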

To make sure everything works, run the following command:

$ kubectl get namespaces

If you see a list of namespaces in the output, your kubectl has successfully connected to and authenticated against your cluster.
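You can inspect the cluster further, for example by listing its nodes and showing the API endpoints:

$ kubectl get nodes
$ kubectl cluster-info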

Accessing Kubernetes cluster via web dashboard

Kubernetes also provides a web dashboard for managing your cluster via a web UI. To access the dashboard, you need to run kubectl in proxy mode, so that it maintains the API connection to your cluster while acting as a local web server on your computer.

To start the proxy, run the following command:

$ kubectl proxy

You may also add an "&" sign at the end to launch the process in the background, as shown below. Once the proxy is started, your web UI will be accessible at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
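A background run would look like this:

$ kubectl proxy &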

You will also need to use a secret token to authenticate (authentication with kubeconfig is not supported). To retrieve your token, first run a command to get the name of the secret where it is stored:

$ kubectl -n kube-system get secret

Note the secret named "admin-token-xxxxx" in the output, where xxxxx varies between clusters. Now, run:

$ kubectl -n kube-system describe secret admin-token-xxxxx

Note the token body (a long alphanumeric string). Copy it, paste it into the login form in your browser, and you should be able to successfully log in to your cluster's web UI.
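Alternatively, you can extract and decode the token in one step. This is a sketch assuming a Linux shell; replace admin-token-xxxxx with the actual secret name from the previous step:

$ kubectl -n kube-system get secret admin-token-xxxxx \
    -o jsonpath='{.data.token}' | base64 --decode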

Using persistent storage

Persistent volumes enable you to connect a storage volume to a pod, where the application running in the container can store its data. When the application container is recreated (on upgrade or another event), the container itself is deleted and created from the image again; however, the persistent volume remains. This way, the application can store persistent data locally (such as a database or other files) without the risk of losing it accidentally.

Kubernetes volumes should be accessible from any node, so that a pod can be scheduled on any of them. Local storage is therefore not a good candidate for a production persistent volume. To bypass this limitation, use regular block storage volumes as Kubernetes storage volumes.
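If your platform exposes the OpenStack CLI, a suitable volume can typically be created and its ID looked up like this. This is a sketch: the CLI availability, the size of 10 GB, and the volume name k8s-data are arbitrary assumptions.

$ openstack volume create --size 10 k8s-data
$ openstack volume show k8s-data -c id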

To use a volume with a pod, first create the volume and find its ID, either via the portal or the CLI (as sketched above). Let's say the ID we found is b6da200d-2760-494c-8d4d-a03de08f0c38. Then, use the following syntax in your YAML file to make the volume available to the pod:

  volumes:
  - name: vol1
    # This OpenStack volume must already exist.
    cinder:
      volumeID: b6da200d-2760-494c-8d4d-a03de08f0c38
      fsType: ext4

For example, a YAML manifest describing a pod based on an Ubuntu container, which mounts the volume to the /myvol directory, may look like this:

apiVersion: v1
kind: Pod
metadata:
  name: test-volume
spec:
  containers:
  - image: ubuntu:18.04
    name: my-application
    command: ["/bin/bash", "-c", "--"]
    args: ["while true; do sleep 30; done;"]
    volumeMounts:
    - mountPath: /myvol
      name: vol1
  volumes:
  - name: vol1
    # This OpenStack volume must already exist.
    cinder:
      volumeID: b6da200d-2760-494c-8d4d-a03de08f0c38
      fsType: ext4
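To try this out, save the manifest to a file (the name pod.yaml below is arbitrary), create the pod, and check that the volume is mounted:

$ kubectl apply -f pod.yaml
$ kubectl exec -it test-volume -- df -h /myvol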

Note that such volumes are regular block devices mounted locally on the nodes. They only support the ReadWriteOnce access mode (see https://kubernetes.io/docs/concepts/storage/persistent-volumes/ for more details); you cannot mount the same volume to several pods simultaneously.