FreeIPA logo

With the release of CentOS-8 I decided it was time to get my home domain controllers running the latest software. But since my FreeIPA cluster is also the cornerstone of all the services deployed in my house, the upgrade needs to be a smooth handover that replicates all of my configuration. Thankfully, FreeIPA supports mixed versions in the same cluster for short periods of time, so a replication-based upgrade is my path forward.

Environment Preparation

The first step is to snapshot the three existing VMs so I have a fallback position should the upgrade go poorly. The concept of “first master” will become important later so you will want to know which VM that is. A cluster’s “first master” is the first FreeIPA server you brought up for that cluster domain. In my environment it is called dc-0. The other two servers in my environment, dc-1 and dc-2, were created as replicas after dc-0 was created and then promoted into a multi-master cluster. We’re going to be making changes to the cluster configuration primarily on the first master until the very end of the swap where we will replace it as well.
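If you're not sure which server is your first master, the CA renewal master recorded in the IPA config is usually it. A quick check (the `example.com` domain here is a stand-in for your own):

```shell
# Authenticate, then look for the "IPA CA renewal master" line
# in the global configuration output.
$ kinit admin
$ ipa config-show | grep -i "renewal master"
```

In my environment this reports dc-0, confirming it as the server to leave for last.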

Next up we will shut down one of our replica masters and create a new CentOS-8 VM for its replacement. The new VM should be configured with a minimum of 2 CPUs, 5120MB of RAM, and a 20GB disk, as well as a single NIC connected to the VM data network. You will also want to make sure that your new VM comes up with the same network configuration and IP address as the previous machine. In my oVirt cluster this is pretty straightforward: set the parameters in the cloud-init configuration before the first VM bootup.
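If cloud-init isn't an option in your environment, the same static configuration can be applied by hand with nmcli. A sketch, assuming the connection is named `eth0` and the old replica's address was `192.168.1.21/24` (both values are placeholders for illustration):

```shell
# Re-create the old replica's static address, gateway, and DNS
# on the replacement VM's primary connection.
$ nmcli con mod eth0 ipv4.method manual \
    ipv4.addresses 192.168.1.21/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns 192.168.1.20
$ nmcli con up eth0
```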

Next we’re going to remove the old VM from the FreeIPA replication topology in preparation for adding the new machine. From the first master, run the following, substituting your replica’s FQDN (`example.com` here is a stand-in for your domain):

$ kinit admin
$ ipa-replica-manage list
$ ipa-replica-manage del dc-1.example.com

Since I’m using FreeIPA’s DNS capabilities, this will also remove all the DNS records for dc-1. Unfortunately, the install procedure doesn’t recreate the PTR record that the server needs later, so now is a good time to go into the FreeIPA interface and recreate that record. You can also do it through the CLI from the first master, substituting your own reverse zone and the last octet of the replica’s IP address:

$ ipa dnsrecord-add <reverse-zone> <last-octet> --ptr-rec=dc-1.example.com.

Replica Setup

Now we boot up the new VM and wait for it to get started. In the case of my oVirt images it’s also vital to run touch /etc/cloud/cloud-init.disabled since oVirt does not resupply the cloud-init configuration on reboot by default. This way all that nice static IP configuration we’ve done isn’t overwritten the next time the server reboots.

Next, we’re going to follow a guide found here to install the client portion of FreeIPA on our new VM. CentOS-8 moves away from vanilla yum and instead uses dnf to handle the new modular repository structure. We will install the required client software with:

$ dnf module -y install idm:DL1/client

And we will run the FreeIPA client setup to get the new VM added to the domain.

$ ipa-client-install --mkhomedir
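Before moving on, it's worth confirming the client actually enrolled. A quick sanity check (assumes you have admin credentials):

```shell
# The host entry should now exist in the domain...
$ kinit admin
$ ipa host-show "$(hostname -f)"

# ...and SSSD should be able to resolve domain users.
$ id admin
```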

Back on the first master we can now add our new VM to the ipaservers host group, which authorizes it to be promoted to a replica when we install the server software on it. Again, substitute your replica’s FQDN:

$ ipa hostgroup-add-member ipaservers --hosts dc-1.example.com

Next we need to set up firewall rules for FreeIPA, and I discovered that the Glance-provided CentOS-8 image didn’t actually have firewalld installed by default. Easy enough to fix:

$ dnf install -y firewalld
$ systemctl enable firewalld
$ systemctl start firewalld

Now we can add the rules that we need for FreeIPA to function properly.

$ firewall-cmd --permanent \
    --add-service={freeipa-ldap,freeipa-ldaps,dns,ntp,freeipa-replication}
$ firewall-cmd --reload
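To confirm the rules took effect, list the active services; the five FreeIPA-related services added above should appear in the output:

```shell
$ firewall-cmd --list-services
```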

Following our next guide, it is now time to install the FreeIPA server software on the new VM!

$ dnf module -y install idm:DL1/dns

Once the install is finished we can go ahead and set up replication on the new machine. Since we want this to still be a multi-master cluster, we will be enabling both the CA and DNS subsystems of FreeIPA for this replica. Substitute the IP address of your upstream DNS forwarder:

$ ipa-replica-install \
    --setup-ca \
    --setup-dns \
    --forwarder=<upstream-dns-ip>

With that, we’re done with the first replica and can repeat the same process with the second to continue the upgrade. When both new replicas are finished I would also recommend shutting them both down, taking a snapshot, and then rebooting so you have a clean fallback point if replacing the first master doesn’t go well.
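Replication health can be spot-checked from either new replica before taking the snapshots; `ipa topologysegment-find domain` lists the replication agreements for the domain suffix, and `ipa-replica-manage list` shows the masters currently in the cluster:

```shell
# Replication segments for the domain suffix (use "ca" for the CA suffix).
$ ipa topologysegment-find domain

# All masters currently participating in the cluster.
$ ipa-replica-manage list
```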

Replacing the First Master

You can find more information on the idea of the “first master”, as well as how to transfer its configuration, here, here, and here. Thankfully, it turns out to be pretty straightforward to do. With only the new servers in the cluster running, log into one of them and remove the first master from replication (substituting your first master’s FQDN):

$ ipa-replica-manage del dc-0.example.com

You will also want to check the IPA configuration to make sure that the CA renewal role has been migrated successfully.

$ ipa config-show

Now you can bring up the last new VM and run the install just like the last two machines. After the install is finished, use the guide found here to verify that neither of the other replica servers is configured to be the renewal master. If one is, you’ll want to configure it as a clone before continuing.

Using the same guide we will now reconfigure the newest server to be the first master.

$ ipa-csreplica-manage set-renewal-master

Then check the configuration of pki-tomcat and httpd to make sure all the appropriate settings are configured per the guide. Finally, we can check the entire cluster for errors by running the IPA healthcheck tool:
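For reference, the pki-tomcat settings the guide has you verify live in CS.cfg, and the CRL rewrite rule lives in the IPA proxy config for httpd. A quick way to eyeball both (paths are from my install; adjust if yours differ):

```shell
# On the renewal/CRL master, the CRL-generation keys should be "true".
$ grep -E "ca\.crl\.MasterCRL\.enable" /etc/pki/pki-tomcat/ca/CS.cfg

# The MasterCRL rewrite rule should only be active on the CRL master.
$ grep MasterCRL /etc/httpd/conf.d/ipa-pki-proxy.conf
```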

$ ipa-healthcheck --failures-only

Troubleshooting - UID Block Assignment

While running the healthcheck above I ran ino an issue where some of my servers did not have UID blocks assigned to them. The issue is discussed some here and a more detailed description is available here for RedHat subscribers. The fix was to assign the full domain block to one of the servers using ipa-replica-manage dnarange-set and then go onto each of the other servers and try to create a new user. As it turns out, if they do not have a block assigned but another server in the cluster does, they will request half of it and assign it to themselves.