Ever since setting up my Rancher cluster in a previous post, I have been itching to start deploying proper applications to it. Since Jenkins is helpful for other task automation and is a tool I work with daily, it seemed like a logical place to start. Our goals for today are as follows.

Goals

  1. Jenkins is installed with Helm and running on my Rancher cluster.
  2. No manual configuration of Jenkins. Everything is configuration as code.
  3. Jenkins must authenticate users with FreeIPA.
  4. Implement basic role-based authorization for the Jenkins instance using FreeIPA groups.

Prerequisites

Since I’ve already configured most of these in previous posts, I’m going to assume certain configuration and infrastructure are already in place for this install.

  1. FreeIPA is installed and providing LDAP services from ipa.example.com.
  2. FreeIPA has user accounts created and groups defined specifically for Jenkins.
    • The groups can be either POSIX or non-POSIX groups.
    • In my case they were called jenkins-admins and jenkins-users.
  3. You have created an appropriate LDAP service account in FreeIPA for Jenkins.
    • Service accounts are not regular users since they don’t own files.
    • Usually they are created under cn=sysaccounts,cn=etc,dc=example,dc=com in the LDAP tree.
    • For more information on how to create them, see here; there is also a rough sketch of the entry after this list.
    • I suggest using Apache Directory Studio for making edits through a GUI.
  4. There is a DNS configuration in place so that the hostname we provide for Jenkins is resolvable to the Rancher load balancer.
    • This can be done using a regular A or CNAME record if you prefer.
    • I used a wildcard CNAME record to redirect *.rancher to rancher.example.com, which lets Rancher’s ingress controller sort out where my application traffic needs to go.
  5. You have an appropriate SSL certificate and key for your new Jenkins hostname.
  6. There is a running Kubernetes cluster with Rancher installed with at least one suitable StorageClass configured.
    • If you are setting Jenkins up on a multi-node cluster you will need a storage class whose volumes can follow pods between nodes, like NFS.
    • In my case I used an NFS server in a StorageClass named nfs-client.
  7. You have configured a project and namespace in Rancher for Jenkins.
    • I haven’t set up resource limits yet for my development cluster, but in production it would be best practice to at least set limits at the project level.
    • My project is named CI/CD and the Jenkins namespace is, logically, jenkins.
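
As a reference for the service account mentioned above, the LDAP entry usually looks something like the sketch below; the uid, password, and dc= components are placeholders for your own values.

# jenkins-sysaccount.ldif (placeholder values)
dn: uid=jenkins,cn=sysaccounts,cn=etc,dc=example,dc=com
changetype: add
objectclass: account
objectclass: simplesecurityobject
uid: jenkins
userPassword: <service account password>
passwordExpirationTime: 20380119031407Z
nsIdleTimeout: 0

It can be loaded with ldapmodify while bound as Directory Manager:

$ ldapmodify -x -H ldaps://ipa.example.com \
    -D 'cn=Directory Manager' -W \
    -f jenkins-sysaccount.ldif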

Configuring Secrets

Secrets and secret management are a huge part of any Jenkins installation and we are going to use bindings to Kubernetes Secrets in our initial setup. I plan to eventually set up HashiCorp Vault to manage secrets in a more secure fashion, but for now Kubernetes is good enough.

SSL Certificates

The first secret we’re going to need is for our SSL certificate. We can create the secret directly with kubectl:

$ kubectl create secret tls tls-jenkins-ingress \
    --namespace=jenkins \
    --cert=jenkins.cert \
    --key=jenkins.key

Or we can actually generate the YAML configuration for the Secret if we have a secure way to store it.

$ kubectl create secret tls tls-jenkins-ingress \
    --cert=jenkins.cert \
    --key=jenkins.key \
    --output yaml \
    --dry-run \
    > tls-jenkins-ingress.yml

This results in a file similar to the one shown below.

# tls-jenkins-ingress.yml
apiVersion: v1
kind: Secret
metadata:
  name: tls-jenkins-ingress
type: kubernetes.io/tls
data:
  tls.crt: <certificate data>
  tls.key: <key data>

You can apply this file to your cluster with kubectl as well.

$ kubectl create --namespace jenkins --filename tls-jenkins-ingress.yml
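
If the certificate and key files didn’t come from the same place, it’s worth confirming that they actually match before creating the secret. For RSA pairs, one common check is to compare the modulus digests of both files; if the two hashes differ, the pair is mismatched.

$ openssl x509 -noout -modulus -in jenkins.cert | openssl md5
$ openssl rsa -noout -modulus -in jenkins.key | openssl md5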

LDAP Manager Password

Originally, I had planned on using the Jenkins Kubernetes Credential Provider plugin to feed in my LDAP service account password. Unfortunately, the LDAP plugin doesn’t communicate with the Jenkins Credential API so there is no way to pass information from one to the other. Instead we have to do a little funny business with secret files to have the values we need injected as environment variables.

Create a new secret with the template below, making sure that any data you enter is Base64 encoded. For whatever reason, adding the values as stringData and letting Kubernetes handle the encoding did not work for me.

# jenkins-secrets-file.yml
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-secrets-file
type: Opaque
data:
  JENKINS_LDAP_BIND_PASSWORD: <base64 encoded password>
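
To produce the encoded value, a standard shell one-liner does the trick; the -n flag matters, since a trailing newline would otherwise be encoded into the password (the password shown is a placeholder):

$ echo -n 'MyBindPassword' | base64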

Apply this with kubectl and when we load this secret in Helm later it will make our values available to Jenkins as environment variables; e.g. ${JENKINS_LDAP_BIND_PASSWORD}.

Other Credentials

Until I switch over to Vault, most of the credentials used by Jenkins jobs are going to be stored as Kubernetes Secrets and accessed with the Kubernetes Credential Provider. The plugin actually makes setting up new credentials extremely easy, and by passing an RBAC flag while installing Jenkins with Helm we get effortless secret sync into Jenkins from the Kubernetes namespace. Below is an example of a username and password secret definition and how it would be added to Jenkins.

# nexus-username-password-secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: nexus-username-password
  labels:
    jenkins.io/credentials-type: usernamePassword
  annotations:
    jenkins.io/credentials-description: A username and password for Nexus
type: Opaque
stringData:
  username: admin
  password: admin123

When this is added with kubectl, as shown below, it will appear in Jenkins as a credential named nexus-username-password, ready to be used in any jobs.

$ kubectl create --namespace jenkins --filename nexus-username-password-secret.yml
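
The same label-driven scheme works for the plugin’s other credential types. As a purely hypothetical example, a secret text credential just needs a different type label and a text field:

# deploy-token-secret.yml (hypothetical example)
apiVersion: v1
kind: Secret
metadata:
  name: deploy-token
  labels:
    jenkins.io/credentials-type: secretText
  annotations:
    jenkins.io/credentials-description: An example deploy token
type: Opaque
stringData:
  text: not-a-real-token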

Installing Jenkins with Helm

Now it’s time to install Jenkins with Helm and we have a lot of configuration to go through. The official stable chart is available here if you need it for reference.

The Basics

We’re going to start by creating a YAML file with the values we intend to pass to the Jenkins Helm chart and adding in some basic information.

# jenkins-values.yml
clusterZone: cluster.local
persistence:
  storageClass: nfs-client
rbac:
  readSecrets: true
serviceAccountAgent:
  create: true
master:
  sidecars:
    configAutoReload:
      enabled: true
  rbac:
    install: true
  installPlugins:
    - configuration-as-code:latest
    - kubernetes:latest
    - credentials-binding:latest
    - kubernetes-credentials-provider:latest
    - ldap:latest
    - role-strategy:latest
    - workflow-aggregator:latest
    - workflow-job:latest
    - git:latest
    - ansible:latest

This looks like a lot, but mostly we’ve told Helm that our Jenkins master should have access to Kubernetes Secrets in its namespace, that it should reload its configuration on change instead of on reboot, and which Jenkins plugins we would like installed. I ended up specifying latest for all of my plugins to deal with a few version incompatibilities, but individual plugins can be pinned to a specific version if required, as shown below.
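
For example, pinning the Configuration as Code plugin would just mean swapping the latest tag for a version number (the version shown here is purely illustrative):

# jenkins-values.yml
...
master:
  ...
  installPlugins:
    - configuration-as-code:1.35
    ...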

Ingress and SSL Certificates

Next up is configuring ingress and our SSL certificates so we can actually access Jenkins after it is installed. One thing to note is that my load balancer is set to proxy only ports 80 and 443, not the 8080 that Jenkins usually runs on, so we’re going to be making a small change to account for that.

# jenkins-values.yml
...
master:
  ...
  servicePort: 443
  ingress:
    enabled: true
    hostName: jenkins.rancher.example.com
    tls:
      - secretName: tls-jenkins-ingress
        hosts:
          - jenkins.rancher.example.com
  jenkinsUrlProtocol: https

By setting servicePort to 443 we present our service on the standard HTTPS port, which the load balancer handles happily. We also need to make sure that the hostname we set here matches in both the tls configuration block and the certificate we issued for the service.
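
After the install later in this post, a quick way to confirm these settings were picked up is to inspect the generated ingress object:

$ kubectl get ingress --namespace jenkins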

LDAP Authentication

Now we have the big one: LDAP authentication. This one took more than a little trial and error but I finally got it figured out. We will be using that oddball secret we created earlier along with the Jenkins Configuration As Code plugin to set up LDAP authentication without any manual interaction on our part.

# jenkins-values.yml
...
master:
  ...
  useSecurity: false
  secretsFilesSecret: jenkins-secrets-file
  containerEnv:
    - name: SECRETS
      value: /var/jenkins_secrets
  JCasC:
    enabled: true
    configScripts:
      ldap-settings: |
        jenkins:
          securityRealm:
            ldap:
              configurations:
                - server: ipa.example.com
                  rootDN: dc=example,dc=com
                  userSearchBase: cn=users,cn=accounts
                  managerDN: uid=jenkins,cn=sysaccounts,cn=etc,dc=example,dc=com
                  managerPasswordSecret: ${JENKINS_LDAP_BIND_PASSWORD}
                  groupSearchBase: cn=groups,cn=accounts
                  groupSearchFilter: (& (cn={0}) (objectclass=ipausergroup))
                  groupMembershipStrategy:
                    fromUserRecord:
                      attributeName: memberOf

The documentation here tells us that to actually use the secret values we’re passing in with secretsFilesSecret we need to set the SECRETS environment variable for Jenkins to the path where that secret is mounted. Figuring this out took me a lot longer than I’m proud to admit…but there it is, the solution. This mounting allows us to access our LDAP bind password later on in the CasC configuration as the environment variable JENKINS_LDAP_BIND_PASSWORD.

Each dictionary entry under configScripts becomes its own configuration file in Jenkins and is loaded and executed at startup. When interacting with FreeIPA I found that a few minor modifications were needed to properly handle non-POSIX groups, namely changing the groupSearchFilter value to something more IPA-specific. Another thing I discovered is that Jenkins (or rather the LDAP plugin) does not like DNS SRV records like _ldap._tcp.example.com and will pitch a fit if not given a resolvable A or CNAME record. Lastly, I discovered that without injecting an additional CA certificate for my FreeIPA CA there was no way to connect to the IPA server over LDAPS; Java complains that the certificate issuer is unknown and refuses to go forward. I haven’t figured out a way around this yet, so I would put that down as a major security mark against this setup.
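
When debugging settings like these, it can save a few Jenkins restarts to first verify the bind DN and search bases directly with ldapsearch (assuming the OpenLDAP client tools are installed; someuser is a placeholder):

$ ldapsearch -x -H ldap://ipa.example.com \
    -D 'uid=jenkins,cn=sysaccounts,cn=etc,dc=example,dc=com' -W \
    -b 'cn=users,cn=accounts,dc=example,dc=com' \
    '(uid=someuser)'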

Role Based Authorization

Our LDAP configuration is all well and good, but without some authorization any authenticated user can administer our Jenkins instance, which is definitely not what we want! We are now going to add another configuration file to configScripts to set up the role-based authorization plugin, which will give the jenkins-admins group full administrative access and everyone else read-only access. Additional configuration examples can be found here.

# jenkins-values.yml
...
master:
  ...
  JCasC:
    ...
    configScripts:
      ...
      role-auth-settings: |
        jenkins:
          authorizationStrategy:
            roleBased:
              roles:
                global:
                  - name: administrators
                    description: Jenkins Administrators
                    permissions:
                      - Overall/Administer
                    assignments:
                      - jenkins-admins
                  - name: read-only
                    description: Read-Only Access
                    permissions:
                      - Overall/Read
                      - Job/Read
                    assignments:
                      - authenticated

Additional Configuration

The Jenkins Configuration as Code plugin also provides an interface within Jenkins to export the current configuration as YAML. This can be useful if you can’t find good documentation on the available options for a particular sub-system. You can make your changes through the web UI and then export them as YAML for storage in source control.

The Install

After all of that, the actual install with Helm is a bit of an anti-climax. We pass all the values that we have prepared to the chart and wait for the deploy to finish. Once it is done we should be able to log in to Jenkins at https://jenkins.rancher.example.com with the FreeIPA credentials of any user in the jenkins-admins or jenkins-users groups.

$ helm install jenkins stable/jenkins \
    --namespace jenkins \
    --values jenkins-values.yml
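
The deploy can take a few minutes while plugins are downloaded, so it’s worth watching the pods until everything settles:

$ kubectl get pods --namespace jenkins --watch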

If we make a change or want to update anything we can also use Helm to update our deploy.

$ helm upgrade jenkins stable/jenkins \
    --namespace jenkins \
    --values jenkins-values.yml

Some parameters, like installing plugins, seem to trigger pod recreation, while others, like LDAP settings, seem to be handled happily by the auto-reload sidecar container. Luckily, since all of our configuration so far is described in our values file, there’s no risk in blowing away the pods; we can get back to the same state very easily.

Conclusion and Next Steps

Overall, the Helm install and Configuration as Code plugins make setting up Jenkins fairly painless, but there are still a few rough edges I want to sand off. I need to figure out how to insert a CA certificate so that I can use LDAPS to connect to FreeIPA. I also want to look into perhaps using Ansible along with Helm to better manage passwords at rest; Ansible Vault at least provides a secure way to store passwords in source control instead of just leaving them as Base64-encoded text. This may become moot when I finally deploy HashiCorp Vault, because the CasC plugin seems to integrate better with secrets there than with Kubernetes, but we will have to see. Lastly, I want to look into a manageable way to generate and store job configuration so that the loss of a pod or deployment doesn’t leave me needing to reconfigure all of my jobs.

Update: Getting LDAPS Working

I still haven’t found a solution, but I did try a bunch of different things after finding this old pull request from 2018. I also noticed that there was an httpsKeyStore entry under master which seemed to allow you to provide your own Java keystore to the Jenkins instance. I tried every conceivable variation of options for that parameter and could not for the life of me get it working. I even tried just doing a basic enable of the built-in keystore and that too failed with an error saying that there was no keystore available. I have a GitHub issue open for the problem and I guess I’ll see where it goes from there.