LDAPS in Jenkins on Kubernetes
So after my previous post, I did a little more searching and came across this post, which seemed to suggest that the route forward for loading CA certificates wasn't the master.httpsKeyStore option provided by the Helm chart, but rather passing options to Java directly. That seemed all well and good to me because, try as I might, I was never able to get the master.httpsKeyStore options working correctly. When Jenkins started to load, it would always complain that there was no valid keystore available. That was not the last time I would see that particular error, but more on that later.
Working With A Fresh Keystore
The first thing I wanted to try was a brand new keystore that contained only the CA certificate I wanted to use when talking to LDAP. I created a new keystore with the command below, loading the CA certificate into it at the same time.
$ keytool -import -trustcacerts \
    -alias example.com \
    -file cacerts.pem \
    -keystore keystore.jks \
    -storepass changeit \
    -noprompt
Cribbing off of the Jenkins Helm chart, I went ahead and created a new secret that contained both the keystore and its password, using the keys the Helm chart expects. Edit: Corrected the secret creation command based on an anonymous comment below.
$ kubectl create secret generic jenkins-https-jks \
    --from-literal=https-jks-password=changeit \
    --from-file=jenkins-jks-file=keystore.jks \
    --output yaml \
    --dry-run=client > jenkins-https-jks.yml
$ kubectl create \
    --namespace jenkins \
    --filename jenkins-https-jks.yml
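For reference, the manifest that the dry-run writes out has roughly the following shape; the base64-encoded data values are elided here rather than reproduced:

```yaml
# jenkins-https-jks.yml (sketch; data values elided)
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-https-jks
type: Opaque
data:
  https-jks-password: <base64-encoded password>
  jenkins-jks-file: <base64-encoded keystore>
```

Writing the manifest to a file first, instead of creating the secret directly, makes it easy to review or check the result into a (suitably protected) repository.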
Next I jumped into the values file I was passing to Helm and added an initialization container to let us look at the contents of our keystore as Jenkins would see it. I was also careful to mirror the way we would later mount the secret in the master container, so that any configuration errors would crop up early.
# jenkins-values.yml
...
master:
  ...
  customInitContainers:
    - name: "check-certificates"
      image: "{{ .Values.master.image }}:{{ .Values.master.tag }}"
      imagePullPolicy: "{{ .Values.master.imagePullPolicy }}"
      command: ["/bin/sh", "-c"]
      args:
        - keytool -list -keystore /var/jenkins_keystore/keystore.jks -storepass $(JENKINS_HTTPS_KEYSTORE_PASSWORD)
      volumeMounts:
        - name: jenkins-https-keystore
          mountPath: /var/jenkins_keystore
      env:
        - name: JENKINS_HTTPS_KEYSTORE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: jenkins-https-jks
              key: https-jks-password
I redeployed the chart with Helm and went to look at the logs for my new container in Rancher. I was able to see that it loaded the keystore and password correctly and printed out the single certificate that I had loaded in. With that proof of concept in hand, I went ahead with updating the configuration for the master pod. The first order of business is to add the volumes, mounts, and environment variables that we'll be pulling from our new secret into the master container.
# jenkins-values.yml
...
persistence:
  ...
  volumes:
    - name: jenkins-https-keystore
      secret:
        secretName: jenkins-https-jks
        items:
          - key: jenkins-jks-file
            path: keystore.jks
  mounts:
    - name: jenkins-https-keystore
      mountPath: /var/jenkins_keystore
...
master:
  ...
  containerEnv:
    ...
    - name: JENKINS_HTTPS_KEYSTORE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: jenkins-https-jks
          key: https-jks-password
Now we pass our keystore and its password in via the master.javaOpts key in the chart, which sets the JAVA_OPTS environment variable in the container.
# jenkins-values.yml
...
master:
  ...
  javaOpts: >
    -Djavax.net.ssl.trustStore=/var/jenkins_keystore/keystore.jks
    -Djavax.net.ssl.trustStorePassword=$(JENKINS_HTTPS_KEYSTORE_PASSWORD)
Lastly, we update the LDAP server URL to use LDAPS instead of the insecure plaintext version. Even in a development environment, this is far more secure than passing credentials in the clear.
# jenkins-values.yml
...
master:
  ...
  JCasC:
    ...
    configScripts:
      ...
      ldap-settings: |
        jenkins:
          securityRealm:
            ldap:
              configurations:
                ...
                - server: ldaps://ipa.example.com
                  ...
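For context, a fleshed-out version of that configScript might look something like the sketch below. Only the server URL comes from this post; the rootDN, search settings, and manager account are hypothetical placeholders you would replace with your own directory layout:

```yaml
# Hypothetical ldap-settings configScript; all values except the
# server URL are placeholders for your own FreeIPA/LDAP layout.
jenkins:
  securityRealm:
    ldap:
      configurations:
        - server: ldaps://ipa.example.com
          rootDN: dc=example,dc=com
          userSearchBase: cn=users,cn=accounts
          userSearch: uid={0}
          managerDN: uid=jenkins,cn=sysaccounts,cn=etc,dc=example,dc=com
          managerPasswordSecret: "${LDAP_MANAGER_PASSWORD}"
```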
I went ahead and redeployed with Helm but…no good. Jenkins would boot up, but any attempt to log in through the interface would fail, and looking in the log I saw that we were again getting errors about the keystore being corrupted or having an incorrect password. So now the question was: why?
Why Didn’t It Work?
As it turns out, after reading the Kubernetes documentation and the chart templates in more detail, the reason is pretty clear. In the chart template, user-defined environment variables are listed near the end of the env section of the main container spec, while JAVA_OPTS is listed towards the beginning. Unfortunately, Kubernetes can only expand a $(VAR) reference using variables defined earlier in the spec. Thus my attempt to load the keystore password from the secret failed because the password variable was defined after the variable that tried to use it. The short-term solution is to just set the keystore password explicitly in master.javaOpts, as you can see below.
# jenkins-values.yml
...
master:
  ...
  javaOpts: >
    -Djavax.net.ssl.trustStore=/var/jenkins_keystore/keystore.jks
    -Djavax.net.ssl.trustStorePassword=changeit
A better solution would be to patch the chart so that JAVA_OPTS is defined last. Since it has no dependent variables of its own, any user-defined variables could then be correctly expanded in the template. The variable worked correctly in the initialization container because it was used as a command argument rather than inside another variable definition, so its value had time to resolve. Thankfully, with that fix everything seemed to be working correctly from an LDAPS perspective. But, unsurprisingly, once the master container finished booting it could not make SSL connections to any server whose certificate was not signed by my internal CA, since that was the only CA available to it. That's good enough for a proof of concept, but now it's time to get everything working correctly.
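To make the ordering issue concrete, here is a sketch of the env list the chart would need to render; the names mirror the values used above, though the real template's structure may differ:

```yaml
# Sketch of the desired env ordering in the rendered container spec.
# Kubernetes expands $(VAR) references using only variables defined
# EARLIER in the list, so the secret-backed variable must come first.
env:
  - name: JENKINS_HTTPS_KEYSTORE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: jenkins-https-jks
        key: https-jks-password
  # JAVA_OPTS is defined last so that the reference above resolves
  # instead of being passed through literally.
  - name: JAVA_OPTS
    value: >-
      -Djavax.net.ssl.trustStore=/var/jenkins_keystore/keystore.jks
      -Djavax.net.ssl.trustStorePassword=$(JENKINS_HTTPS_KEYSTORE_PASSWORD)
```

If a $(VAR) reference cannot be resolved, Kubernetes leaves the literal string in place, which is why Java saw an "incorrect password" rather than an empty one.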
Setting Up a Full CACerts Bundle
Since we want to be able to make SSL connections to things outside of our own network we need to set up a more complete CA certificate bundle to pass into Jenkins. I started with my local system’s Java CA bundle and copied it into my working directory.
$ cp /etc/ssl/certs/java/cacerts keystore.jks
$ chmod u+w keystore.jks
Just like before I added my CA certificate to the bundle and recreated the Kubernetes secret.
$ keytool -import -trustcacerts \
    -alias example.com \
    -file cacerts.pem \
    -keystore keystore.jks \
    -storepass changeit \
    -noprompt
$ kubectl create secret generic jenkins-https-jks \
    --from-literal=https-jks-password=changeit \
    --from-file=jenkins-jks-file=keystore.jks \
    --output yaml \
    --dry-run=client > jenkins-https-jks.yml
With that I patched the secret, stripped out the initialization container, and reran the Jenkins Helm deploy with the hardcoded keystore password to bring everything up to date. After the deploy finished I was able to log in successfully using LDAPS, and everything seemed to be working correctly.
Next Steps
I think this actually brings the saga of LDAP and Jenkins to a close so I’m glad
I was able to figure out how to inject that certificate. It isn’t an ideal solution
if CA certificates expire because it ignores the underlying container and host’s
lists. I will probably do some more work with initialization containers to see if
there’s a way to perform the insert in a sustainable way using those instead.
I would also like to make a pull on the Helm chart so that I can actually use
my secrets appropriately in JAVA_OPTS
instead of needing to set the password
manually.
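One possible shape for that initialization-container experiment is sketched below. It assumes the image exposes JAVA_HOME and ships its default bundle (with the standard changeit password) at $JAVA_HOME/lib/security/cacerts, and that the internal CA is published in a ConfigMap named internal-ca-pem; all of those are assumptions, not anything verified here:

```yaml
# Hypothetical values snippet: build the keystore at pod start instead
# of storing a full copy in a secret, so it tracks the base image's
# CA bundle. Paths and the ConfigMap name are assumptions.
persistence:
  volumes:
    - name: java-keystore        # writable scratch space for the merged bundle
      emptyDir: {}
    - name: internal-ca          # assumed ConfigMap holding cacerts.pem
      configMap:
        name: internal-ca-pem
  mounts:
    - name: java-keystore
      mountPath: /var/jenkins_keystore
master:
  customInitContainers:
    - name: "merge-ca-bundle"
      image: "{{ .Values.master.image }}:{{ .Values.master.tag }}"
      command: ["/bin/sh", "-c"]
      args:
        - cp "$JAVA_HOME/lib/security/cacerts" /var/jenkins_keystore/keystore.jks &&
          keytool -import -trustcacerts -noprompt
          -alias example.com
          -file /var/jenkins_cacert/cacerts.pem
          -keystore /var/jenkins_keystore/keystore.jks
          -storepass changeit
      volumeMounts:
        - name: java-keystore
          mountPath: /var/jenkins_keystore
        - name: internal-ca
          mountPath: /var/jenkins_cacert
```

Because the merged bundle lives in an emptyDir rather than a secret, it is rebuilt from the image's current CA list on every pod start.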
Beyond that, I am interested in trying out Keycloak for Jenkins and other web application authentication/authorization. At the end of the day it will all still look back at the FreeIPA LDAP implementation, but it could be nice to only need to set up one LDAP service account and let everything else route through it.