Harvester is a distribution that is ready to use as a Kubernetes cluster. It is best installed on bare metal servers to take advantage of its HCI (Hyper Converged Infrastructure) capabilities. The resulting cluster forms a single redundant storage pool using the Longhorn software, and this storage serves persistent volume claims for Kubernetes workloads. Using the KubeVirt software, fully functional virtual machines (via qemu-kvm) can be deployed. The integrated Rancher makes this distro the easiest way to install your Rancher with a single click.
The project is quite young. Apart from the typical installation, the documentation is very sparse. But the project itself looks very promising, and I believe in its future. This prompted me to check some aspects of the project and write about my experience with it.
Of course, we'll be testing the installation on virtual machines, because there is a virtually unlimited supply of them. Therefore, I will not be testing KubeVirt and the ability to run virtual machines. However, a simple Kubernetes cluster deployment is exactly what we can test.
Installation from CD is pretty straightforward, well documented and even shown in video demos. Since I have a PXE infrastructure in my LAB, I tested how the PXE installation works. To do this, you need to create two YAML configuration files: one for the first server and one for the other servers. The procedure is also well documented, including sample configuration files. However, the slightest deviation from the demo setup can lead to a dead end. Here we look at solutions for common real-world problems.
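For orientation, here is a minimal sketch of those two files, loosely based on the samples in the Harvester PXE documentation. The host names, addresses, token and ISO URL are placeholders, and the exact key set can differ between releases, so check the configuration schema for your Harvester version.

# config-create.yaml - the first server
token: some-cluster-token
os:
  hostname: hrvstr1
  password: rancher
install:
  mode: create
  mgmt_interface: eth0
  device: /dev/sda
  iso_url: http://192.168.0.10/harvester-amd64.iso

# config-join.yaml - all other servers
server_url: https://192.168.0.101:8443   # address of the first node; the port differs between releases
token: some-cluster-token
os:
  hostname: hrvstr2
  password: rancher
install:
  mode: join
  mgmt_interface: eth0
  device: /dev/sda
  iso_url: http://192.168.0.10/harvester-amd64.iso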
The prerequisites are quite high for VMs, but absolutely basic when talking about physical servers:
Pre-requisites:
>= 4 x86_64 CPU
>= 8 GB RAM
>= 120 GB disk, at least 5k IOPS
Trunk connection to support VLANs
If you're only going to use the Rancher management tool, you can even cut back on these resources.
Once installed, you can go to the Harvester URL displayed on the console, set a new password for the administrator, and enable the Rancher interface. In fact, you don't have to enable it on every Harvester cluster, as the main benefit of a single Rancher is the ability to manage multiple Kubernetes clusters.
To enable the Rancher interface, go to Advanced -> Settings, find rancher-enabled and change the value to true in the three-dot menu. Once enabled, a Rancher button appears in the upper right corner.
For further work it is necessary to understand the internal structure of the Harvester. The base OS used by Harvester is k3os from Rancher Labs. The file system structure according to the project page looks like this:
/etc - ephemeral
/usr - read-only (except /usr/local, which is writable and persistent)
/k3os - system files
/home - persistent
/var - persistent
/opt - persistent
/usr/local - persistent
The ephemeral /etc causes more configuration problems than it brings benefits. Hopefully this will change in future releases, but for now we will learn how to live with it.
You can get a root shell on the console by pressing F12 and using the password provided during installation. If you installed via PXE and did not specify a password in the YAML file, the default password "rancher" is used. Another login option is via SSH. The "root" user is locked out of SSH, so you must log in as the "rancher" user. For most Kubernetes-related tasks you will be working as the "rancher" user. The "root" user is rarely needed, and only for k3os-related tasks. You can get it with the "sudo su -" command.
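A typical session looks like this (the prompts are approximate; hrvstr1 is the node name used throughout this article):

$ ssh rancher@hrvstr1
hrvstr1 [~]$ sudo su -
hrvstr1 [~]#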
Harvester HCI is intended to be installed on bare metal servers. They usually have more than one physical disk, for either redundancy or capacity. However, the default configuration of Longhorn (the storage subsystem) only uses the free space in the /var/lib/longhorn directory on the boot system disk.
You need to format all additional drives using the CLI and mount them somewhere. Let's say you need to add /dev/sdb to your Longhorn storage. Log in to the server and get root as described above. Then create a mount point, format the disk and mount it:
hrvstr1 [~]# mkdir /var/lib/longhorn-sdb
hrvstr1 [~]# mkfs.ext4 -j -m0 /dev/sdb
..
hrvstr1 [~]# lsblk -f /dev/sdb
NAME FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINT
sdb  ext4               770ffcd9-29d1-4c5b-87d9-41f1a572b15e   97.9G     0%
hrvstr1 [~]# mount UUID=770ffcd9-29d1-4c5b-87d9-41f1a572b15e /var/lib/longhorn-sdb
However, this mount does not survive a reboot due to the ephemeral nature of /etc/fstab, and of /etc in general. Let's start with the workarounds.
hrvstr1 [~]# mkdir -p /var/lib/rancher/k3os/config.d/
hrvstr1 [~]# TERM=linux vi /var/lib/rancher/k3os/config.d/sdb.yaml
hrvstr1 [~]# cat /var/lib/rancher/k3os/config.d/sdb.yaml
runCmd:
- "sudo mount UUID=770ffcd9-29d1-4c5b-87d9-41f1a572b15e /var/lib/longhorn-sdb"
hrvstr1 [~]#
Explanation: all YAML files in the /var/lib/rancher/k3os/config.d/ directory are merged with /var/lib/rancher/k3os/config.yaml, which describes the behavior of k3os. One of the most useful options is the ability to execute any arbitrary command, in our case, mounting an additional disk.
Reboot the server and verify that the disk mount persists (a quick check is sketched after this paragraph). The next step is to add the new space to Longhorn. The easiest way is to do it through the Longhorn GUI. To access the graphical interface, you must first enable the Rancher graphical interface, enter its Cluster Explorer, then select Longhorn from the top-left drop-down menu. Select a node, select Edit Node and Disks from the three-dot menu and click the Add Disk button.
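The reboot check itself can be as simple as this (output trimmed; the exact mount options may differ):

hrvstr1 [~]$ sudo reboot
...
hrvstr1 [~]$ mount | grep longhorn-sdb
/dev/sdb on /var/lib/longhorn-sdb type ext4 (rw,relatime)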
This is another in-demand feature that is not supported out of the box and can only be implemented with a workaround. Most SSL services in an organization use certificates signed by a custom CA (usually a Microsoft Active Directory certificate authority), so this CA should be trusted by k3os.
There is a /etc/ssl/certs/ca-certificates.crt file that consolidates all CA certificates currently trusted by k3os. Due to the ephemeral nature of /etc, you cannot simply edit this file once and be done with it. Time for another workaround.
hrvstr1 [~]# mkdir -p /var/lib/rancher/k3os/config.d/
hrvstr1 [~]# TERM=linux vi /var/lib/rancher/k3os/config.d/ca.yaml
hrvstr1 [~]# cat /var/lib/rancher/k3os/config.d/ca.yaml
runCmd:
- echo -e "\n-----BEGIN CERTIFICATE-----\nMIIEh ... PHHx7Hnegxi4hQ==\n-----END CERTIFICATE-----\n" >> /etc/ssl/certs/ca-certificates.crt
hrvstr1 [~]#
Explanation: as before, the YAML files in /var/lib/rancher/k3os/config.d/ are merged into the k3os configuration, and the arbitrary command we run here appends the certificate to the end of the file. Be careful when preparing this "echo" line: every original line break in the certificate must be replaced with "\n".
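Building that single line by hand is error-prone. A small sketch that generates it from a PEM file (the path /tmp/ca.pem is just an example; add the leading "\n" yourself when pasting into the echo command):

hrvstr1 [~]# awk 'NF {printf "%s\\n", $0}' /tmp/ca.pem
-----BEGIN CERTIFICATE-----\nMIIEh ... PHHx7Hnegxi4hQ==\n-----END CERTIFICATE-----\n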
Organizations generally do not allow direct Internet access from internal servers. At best, the organization will provide you with a proxy connection. The proxy configuration relies on the fact that most tools respect the "http_proxy" and related environment variables. Much the same workaround applies:
hrvstr1 [~]# mkdir -p /var/lib/rancher/k3os/config.d/
hrvstr1 [~]# TERM=linux vi /var/lib/rancher/k3os/config.d/proxy.yaml
hrvstr1 [~]# cat /var/lib/rancher/k3os/config.d/proxy.yaml
k3os:
  environment:
    http_proxy: http://192.168.0.254:3128
    https_proxy: http://192.168.0.254:3128
    no_proxy: "localhost,127.0.0.1,192.168.0.0/24,.mydomain.com,10.0.0.0/8"
hrvstr1 [~]#
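A quick sanity check after a reboot, assuming your k3os build exposes these variables in a login shell (if it does not, check where your build renders its environment settings):

hrvstr1 [~]$ env | grep -i _proxy
http_proxy=http://192.168.0.254:3128
https_proxy=http://192.168.0.254:3128
no_proxy=localhost,127.0.0.1,192.168.0.0/24,.mydomain.com,10.0.0.0/8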
The ingress controller is an important part of a Kubernetes cluster's network. However, it is not installed by default because there are many installation variants. For me, the choice is very clear: any surviving node should respond to incoming connections. This narrows down the options: the installation must be a DaemonSet that runs on each node, the service must be of type NodePort to listen on the node interface, and useHostNetwork must be set to true to force listening on the real external IP address, so that ingress traffic is actually ingress.
Harvester comes pre-loaded with some Helm charts for easy installation. Open Cluster Explorer and select Apps & Marketplace from the top-left drop-down menu. Select Charts and type "ingress" in the filter field. The HAProxy ingress controller is available; click on it. Go to Settings - these are the installation options. Set the Deployment Type to DaemonSet and the Service Type to NodePort. Don't install yet.
Click the Edit as YAML button. Find useHostNetwork and change it to true, do the same for useHostPort. Now you can click Install.
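For reference, the relevant part of the values YAML looked roughly like this in the chart version I tested; the exact key paths may differ between chart versions, so treat this as a sketch rather than the authoritative schema:

controller:
  kind: DaemonSet
  service:
    type: NodePort
  daemonset:
    useHostNetwork: true
    useHostPort: true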
Once properly installed, the ingress controller should be listening on the HTTP and HTTPS ports. You can verify this:
hrvstr1 [~]# netstat -tlnp | grep haproxy
tcp   0   0 0.0.0.0:80     0.0.0.0:*   LISTEN   26519/haproxy
tcp   0   0 0.0.0.0:1042   0.0.0.0:*   LISTEN   26519/haproxy
tcp   0   0 0.0.0.0:443    0.0.0.0:*   LISTEN   26519/haproxy
tcp   0   0 0.0.0.0:1024   0.0.0.0:*   LISTEN   26519/haproxy
tcp   0   0 :::80          :::*        LISTEN   26519/haproxy
tcp   0   0 :::443         :::*        LISTEN   26519/haproxy
Any disconnected (or air-gapped) installation requires a local Docker registry accessible over HTTPS with a trusted certificate. Setting up such a registry is outside the scope of this document and can be found, for example, here. Add the custom CA to the trusted list as described above.
The HAProxy ingress controller uses two container images; you can check their names and versions in the chart's YAML file. You have to upload both into your local registry.
The deployment itself is almost the same as described above, but while editing the YAML configuration, correct the location of the images as well.
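Relocating the images is plain Docker work. A sketch, where registry.mydomain.com and the version tag are placeholders to be replaced with the names and versions found in the chart's YAML:

$ docker pull haproxytech/kubernetes-ingress:<version>
$ docker tag haproxytech/kubernetes-ingress:<version> registry.mydomain.com/haproxytech/kubernetes-ingress:<version>
$ docker push registry.mydomain.com/haproxytech/kubernetes-ingress:<version>

Repeat the same for the second image listed in the chart.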
Download the workload image and upload it to your private registry. I took the tutum/hello-world image, which includes an HTTP server listening on port 80. You can check the container's listening port with the docker inspect IMAGENAME command.
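For example, the exposed port can be read directly from the image metadata:

$ docker inspect -f '{{.Config.ExposedPorts}}' tutum/hello-world
map[80/tcp:{}]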
Once you have the appropriate YAML file to deploy, simply upload it to run. Otherwise, the guided way is very easy to use. In Rancher's Cluster Explorer, navigate to Workload -> Deployments -> Create. Name your workload and specify the correct image name. Add a service port. Do not select ClusterIP - it is only used for internal cluster networking. The LoadBalancer type is used with an external compatible load balancer, usually provided by cloud providers. NodePort is a good choice.
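A minimal sketch of such a YAML file, assuming the image was pushed to the hypothetical registry.mydomain.com and the hello-world names are arbitrary:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: registry.mydomain.com/tutum/hello-world
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80

Apply it with kubectl apply -f hello-world.yaml, or paste it into the GUI.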
Once deployed, the next step is to create the correct ingress. The ingress controller we deployed above is the actual load balancer (haproxy) instance. It works, but it doesn't serve anything yet. The actual routing rule is called an ingress and must be configured separately. Go to Service Discovery -> Ingresses -> Create. Choose an informative name for your ingress rule and fill out the rest of the form. If you want to bind the service to a specific DNS name, fill in the Request Host field. If you want to bind the service to a specific path, fill in the Prefix field. It is important to choose the right Target Service. After selection, the Port field will be populated with the ports associated with the launched image. If you need HTTPS, you probably want to add certificates. To do this, you first need to create a Secret of TLS type containing your certificates.
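Under the hood, the form produces an Ingress object. A sketch of the equivalent TLS secret and manifest, where hello.mydomain.com, the certificate files and the hello-world service name are placeholders; older Kubernetes versions may need the networking.k8s.io/v1beta1 Ingress API instead:

$ kubectl create secret tls hello-world-tls --cert=hello.crt --key=hello.key

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  tls:
  - hosts:
    - hello.mydomain.com
    secretName: hello-world-tls
  rules:
  - host: hello.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80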