RancherOS and Helm on VMWare Fusion
Work has recently had me interacting more and more with Kubernetes and I finally decided it was time to have a test stack at home to play with. Unfortunately, getting OpenShift running locally is a gigantic pain in the rear and until very recently I didn’t have a VM host with enough memory to even support it. To get around that limitation I decided to explore Rancher and its custom distribution RancherOS. What follows is a bit of a riff on this post with a few odds and ends thrown in from my experience getting everything set up. My deploy platform is my Mac running VMWare Fusion and the plan is to get a working node up with a minimum of resources and fuss.
VM Requirements and Setup
As with most things we will start with a VM. We will need a minimum of 2 CPU cores, 4GB of RAM, and 20GB of hard disk space. We will also need to create a custom VMWare network to attach our VM to so that we can forward ports 22, 80, 443, and 6443 through to the host. If you already have things running on these ports, remapping is fine but you will need to consult the documentation for how to properly annotate this in your configuration.
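For reference, port forwards on a custom Fusion NAT network can be configured by editing that network's nat.conf on the host (newer Fusion releases may also expose this in the network preferences UI). The snippet below is only a sketch of how mine ended up: the vmnet number in the path, the host-side ports, and the VM's internal address (more on the x.x.x.128 address in a moment) are all specific to my setup, and you will likely need to restart Fusion's networking for edits to take effect.
# /Library/Preferences/VMware Fusion/vmnet2/nat.conf (vmnet number will differ)
[incomingtcp]
# <host port> = <VM address>:<VM port>
22 = 172.16.0.128:22
80 = 172.16.0.128:80
443 = 172.16.0.128:443
6443 = 172.16.0.128:6443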
Also worth noting is that for a VMWare NAT network, the gateway address lives at x.x.x.2 and the first host connects via DHCP at x.x.x.128. This is particularly important for your port forwards, as the forwarding rules need the internal IP of the VM and won't work without it.
The last bit of prep work we need to do is edit the /etc/hosts file on our host machine to add a mapping from 127.0.0.1 to the hostname for our new cluster. For example:
127.0.0.1 localhost localhost.localdomain
127.0.0.1 rancher rancher.example.com
Installing RancherOS
Since we’re not using one of the prebuilt platforms our first step is to grab the RancherOS ISO from GitHub and attach it to our VM. When we boot the VM the LiveCD will come up and pull down the various Docker container images that RancherOS needs to operate before automatically logging in as the rancher user. To be able to copy files from the host onto the VM we need to reset the rancher user's password via the VM console with sudo passwd rancher. With that done we can now manage the VM via SSH and copy files to it with scp.
RancherOS uses cloud-init style configuration for its install so we will prepare the configuration file for our new VM in our project directory on the host machine as seen below.
#cloud-config
rancher:
  network:
    interfaces:
      eth0:
        address: 172.16.0.128/24
        gateway: 172.16.0.2
        mtu: 1500
        dhcp: false
hostname: rancher.example.com
ssh_authorized_keys:
  - ssh-rsa AAA...
After installation, RancherOS only allows SSH login via SSH keys, so we need to include a public key that we have access to in our configuration. We will also disable DHCP on the primary NIC so that we don't run into IP reassignment problems later on. You can read more about RancherOS network configuration here.
Now we will copy our cloud configuration over to the VM using scp. If port 22 is not forwarded straight through you'll need to use the -P <host_port> flag to correctly target the mapped port on the host machine.
$ scp cloud-config.yml rancher@rancher.example.com:~/
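If you did remap port 22 to a different host port (say, a hypothetical 2222), the same copy would look more like this:
$ scp -P 2222 cloud-config.yml rancher@rancher.example.com:~/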
Once the file is on the VM you can run the install with:
$ sudo ros install -c cloud-config.yml -d /dev/sda
The installation process is fairly quick and straightforward. Once it is finished, reboot the VM and remove the LiveCD from the virtual drive. When the VM is back up you are ready to start installing K8s.
Installing K8s
Next we will be using the Rancher Kubernetes Engine (RKE) to set up K8s on our new VM. You can find installation instructions here. Now we need to create our cluster configuration, rancher-cluster.yml, in the project directory.
nodes:
  - address: 127.0.0.1
    internal_address: 172.16.0.128
    user: rancher
    role: [controlplane, worker, etcd]
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
This is a baseline configuration which will put all the various elements of a cluster onto our single host. It can also be created by running rke config and answering the prompts, but the generated file is mostly empty and can obscure which elements matter most to our setup. More information on how to configure a cluster can be found here.
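For completeness, the interactive generator mentioned above is run like so; if I recall the RKE docs correctly, the --name flag just sets the output file name, which here matches the file we wrote by hand:
$ rke config --name rancher-cluster.yml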
Now we can install K8s with rke up --config ./rancher-cluster.yml. Simple! One thing that I did run into during my setup is that the rke-network-plugin-deploy-job may fail during the initial deploy. There is some bug information here and some people seem to think using the FQDN of the node instead of the IP fixes the problem. In my case, a second execution of the install command brought the cluster up successfully.
Installing Rancher
We now have a working K8s cluster but we could really use something to manage it and provide a nice web interface. Re-enter: Rancher! We will be installing the Rancher cluster management software using Helm, a tool for managing K8s applications. Install instructions for Helm can be found here. More detailed instructions on how to install Rancher with Helm can also be found here.
First, we need to add the Rancher Helm chart repository to Helm. There are several different channels available but we will be going with latest because we'd like to try out the newest stuff available.
$ helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
Next we need to export the K8s configuration that RKE created for us into our shell so that we can access our new cluster with all of the various Kubernetes tools.
$ export KUBECONFIG=./kube_config_rancher-cluster.yml
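With the kubeconfig exported, a quick sanity check should show our single node reporting Ready with the controlplane, etcd, and worker roles:
$ kubectl get nodes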
Rancher requires that all of its components be run in a specially named K8s namespace, cattle-system. We will use kubectl to create the namespace before we continue our deploy.
$ kubectl create namespace cattle-system
Next up is preparing our SSL certificates for Rancher. We will cover two options here: using the self-signed Rancher certificates and using certificates from an external CA.
Using Rancher Generated Certificates
If we're going to use the internal Rancher certificate we need to install cert-manager to handle reissuing the certificates as they expire. Applying things directly from GitHub looks slightly suspect but this is apparently the recommended way of setting things up.
# Install the CustomResourceDefinition resources separately
$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.9/deploy/manifests/00-crds.yaml
The cert-manager application also needs its own custom namespace, so we will create and label that as well.
# Create the namespace for cert-manager
$ kubectl create namespace cert-manager
# Label the cert-manager namespace to disable resource validation
$ kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
Now we can add the Helm chart repository we will need to install the application itself.
# Add the Jetstack Helm repository
$ helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
$ helm repo update
Finally we are ready to install cert-manager with Helm.
# Install the cert-manager Helm chart
$ helm install \
    cert-manager jetstack/cert-manager \
    --namespace cert-manager \
    --version v0.9.1
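Before moving on it is worth confirming the cert-manager pods came up cleanly:
$ kubectl get pods --namespace cert-manager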
Once the deploy has completed we can install Rancher using Helm as well, taking most of the available default values.
$ helm install \
    rancher rancher-latest/rancher \
    --namespace cattle-system \
    --set hostname=rancher.example.com
Using Your Own Certificates from FreeIPA
In my case, most of my home network uses FreeIPA for authentication and certificate issuing so I decided to go the “bring your own” route for my certificates. First I needed to generate a key and CSR for my new certificate. I went the interactive route but you could compose the -subj line yourself, see here for reference.
$ openssl req \
    -newkey rsa:4096 -nodes \
    -keyout tls.key \
    -out tls.csr
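For the non-interactive route, a hypothetical -subj line for our hostname would look something like the following; adjust the subject fields to match what your CA expects.
$ openssl req \
    -newkey rsa:4096 -nodes \
    -keyout tls.key \
    -out tls.csr \
    -subj "/CN=rancher.example.com"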
With the CSR I went into FreeIPA and got a certificate issued for the hostname and downloaded the certificate file to tls.crt in my project directory. Since my FreeIPA CA is not part of the global CA certificates bundle I also needed to download it through the web UI so I could add it to the configuration. Now we can install Rancher with Helm and some additional flag values to configure it for our custom certificates.
$ helm install rancher rancher-latest/rancher \
    --namespace cattle-system \
    --set hostname=rancher.example.com \
    --set ingress.tls.source=secret \
    --set privateCA=true
Once the deploy has finished we can add our SSL certificates to Kubernetes as secrets so Rancher can use them. We need to be sure to add them to the correct namespace or Rancher won't know where to find them. Additionally, the secret names used below are the ones Rancher expects by default, so they are mandatory unless we add further configuration. The host certificate and key are created as a pair in a tls secret type.
$ kubectl -n cattle-system create secret \
    tls tls-rancher-ingress \
    --cert=tls.crt \
    --key=tls.key
The custom CA certificate, however, is created as a generic secret from the certificate PEM file.
$ kubectl -n cattle-system create secret \
    generic tls-ca \
    --from-file=cacerts.pem
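A quick check confirms both secrets landed in the namespace Rancher expects:
$ kubectl -n cattle-system get secret tls-rancher-ingress tls-ca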
Finishing the Install
We can watch the status of the Rancher deploy using the K8s tools.
$ kubectl -n cattle-system rollout status deploy/rancher
Once the deploy is finished we can log into the web interface at https://rancher.example.com to finish the setup. One thing to note: if you are using the Rancher built-in certificates, Chrome will not let you connect to the instance over HTTPS due to how the certificate is issued. Thankfully, this is a Chrome-only issue and you can get around it by just using Firefox. When you get to the web interface set the administrator password and confirm the cluster's URL and you are good to go!
Configuring Local Storage Class
Now having a K8s cluster is all well and good but there isn't a lot of experimenting you can do without the ability to store persistent data. To this end, we're going to add local path storage to our Rancher "cluster". This storage method has some downsides, namely that in a multi-node cluster any pod backed by this storage class can't be migrated off of the node it originally spawned on. But, since we only have one node in this experiment, it saves us the bother of setting up NFS so we're going to just do it! Installation is simple, if again a bit suspect, since we're just applying raw YAML from GitHub.
$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
This creates a storage class called local-path on our cluster which will allow us to create Persistent Volume Claims automatically instead of needing to set persistent volumes up ahead of time.
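Just to illustrate what the new class gives us (this is not needed for the Drupal install below, and the claim name and size are arbitrary), a Persistent Volume Claim against it looks like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
Don't be surprised if a claim like this sits in Pending until a pod actually mounts it; the local-path provisioner typically waits for a consumer before creating the backing volume.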
Adding Helm Charts
Now to really start using our cluster with Helm we will need to pull in some more charts! A good source is the official “stable” chart repository but you can also search the Helm Hub for even more charts from other developers. Here is how to add the stable repository to your local Helm.
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
$ helm repo update
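A quick search verifies that the chart we want for the next step is now visible locally (versions in the output will vary):
$ helm search repo stable/drupal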
Installing Drupal
Finally, let's deploy an actual end user application to our cluster. Drupal is a good example because it highlights the advantages of Helm in pulling together multiple services (NGINX, php-fpm, MariaDB) to deploy a useful application. For our example we will also add the default Drupal hostname to our host /etc/hosts file so we can access it with our browser later.
127.0.0.1 drupal drupal.local
To help keep our different applications separate we will create a new K8s namespace for Drupal before deploying it with Helm.
$ kubectl create namespace drupal
$ helm install drupal stable/drupal \
    --namespace=drupal \
    --set global.storageClass=local-path,ingress.enabled=true
You can also see that we are passing in a storage class and enabling ingress for our chart. This tells the chart where to set up its persistent volume for MariaDB and to pass traffic in on port 80 to the application. We can watch the deploy as it runs and when it finishes go to http://drupal.local to view our brand new application!
$ kubectl -n drupal rollout status deploy/drupal
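Once the rollout is complete you can also confirm that the chart's persistent volume claims were provisioned by our local-path storage class:
$ kubectl -n drupal get pvc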
Conclusion
With that we have a functional Kubernetes cluster and a useful application deployed to it for testing. All in all not too terrible for standing up such a complex stack of software. Also, if that seemed like too much, there is a Vagrant quickstart available which will take care of most of the setup for you.