It's been discussed before, but I don't see an issue for it anywhere.
It's becoming pressing, as some of the third-party puppet modules we use have started dropping support for Puppet 5 (judging by their recent commit history).
Between Puppet 5 and Puppet 7, several resource types that are part of core in Puppet 5 were split out of core and now live in dedicated modules. So making our puppet code Puppet 7 compliant should mostly be a matter of identifying those modules and adding them to our Puppetfile.
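For reference, the types that left core in the Puppet 6 cycle (cron, host, mount, sshkeys, augeas, etc.) now live in dedicated `puppetlabs-*_core` modules on the Forge, so the Puppetfile additions would look roughly like this; the exact set depends on which types our manifests actually use, and versions should be pinned to whatever we validate against:

```ruby
# Puppetfile — core types that were split out of puppet between 5 and 7;
# only the ones our code actually uses need to be listed, and each mod
# line should get a pinned version once we've tested against it
forge 'https://forge.puppet.com'

mod 'puppetlabs/cron_core'
mod 'puppetlabs/host_core'
mod 'puppetlabs/mount_core'
mod 'puppetlabs/sshkeys_core'
mod 'puppetlabs/augeas_core'
```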
This should be prioritized: bookworm does not ship Puppet 5.5, and the Puppet 7 agent isn't compatible with the Puppet 5.5 server we're currently running. However, the puppetserver provided by Puppet 7 and packaged in bookworm is supposed to be backwards-compatible with 5.5 agents, so we should be able to connect existing hosts to a bookworm puppet server.
I don't think there's much value in keeping the current Puppet CA alive, and upgrading pergamon in place is going to be fairly involved, so I think we should spin up a separate puppet infrastructure and migrate all machines to it, once we've validated that puppet 5.5 agents can indeed talk to puppetserver 7.
My idea for this migration would be the following:

- set up a new puppet server (and puppetdb) from scratch on a fresh bookworm VM
- to validate the setup, connect a buster, a bullseye and a bookworm VM to it (using our proxmox templates, presumably)
- once the setup is validated, make the following changes (manually) on all puppet-enabled machines (sketched after this list):
  - change the puppet crontab entry to --noop mode
  - point the puppet config at the new puppet server (and make any other appropriate configuration changes)
  - move the SSL directory aside to avoid interference with the old CA (nodes will all get new certificates)
- have puppet run in --noop mode at least three times on all servers (this will populate facts, puppetdb and exported resources for all the machines)
- once we've validated that the changes look as expected (the main one I expect to see is the internal icinga CA, which reuses the puppet certificates), drop the --noop from the crontab entry

In case of a rollback, we revert the ssl directory and the configuration changes.
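A minimal sketch of the per-host switch and its rollback; the new server name is a placeholder, and the ssldir below is the Debian puppet 5 default, so check the real one first:

```sh
# Find where the agent keeps its SSL state (differs between Debian and AIO packaging)
puppet config print ssldir

# Point the agent at the new server (hostname is a placeholder)
puppet config set server puppet7.example.org --section main

# Move the old CA's state aside; the next run requests a cert from the new CA
mv /var/lib/puppet/ssl /var/lib/puppet/ssl.old-ca

# The crontab entry then runs the agent in --noop mode, e.g.:
#   puppet agent --onetime --no-daemonize --noop

# Rollback: restore the old SSL state and the old server setting
# mv /var/lib/puppet/ssl.old-ca /var/lib/puppet/ssl
# puppet config set server puppet.example.org --section main
```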
The two main concerns for implementing the change of puppet server are:

- continuity of the let's encrypt setup (which depends on the puppet file server serving the certificates to the hosts, and therefore on certificate generation happening on the puppet server host); the certbot config probably needs manual migration to the new puppet server
- continuity of the icinga setup (or at least regenerating it from scratch with the new CA), which should not be a problem, as everything in the icinga setup is centralized in puppet
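For the let's encrypt part, I'd expect the current server to expose the certbot-managed certificates through a custom file server mount, something like the fragment below (the mount name and path are guesses and should be copied from the existing server's config); both the mount and the certbot state under it need to exist on the new server before hosts switch over:

```ini
# /etc/puppet/fileserver.conf — illustrative; the real mount name and path
# should be copied from the existing server (access control for the mount
# lives in auth.conf on puppetserver 7)
[letsencrypt]
    path /srv/puppet/letsencrypt
```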
Maybe. I don't know that I feel ready to have such a central piece of our (stateful) infra inside k8s.
I would assume that the main sticking point for such a migration would be migrating to a supported way of managing credentials in hiera, e.g. eyaml, instead of our somewhat hacky "private" data overlay, which is probably not a bad idea all things considered.
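If we do go the eyaml route, the hiera side is one hierarchy level wired to hiera-eyaml plus the gem installed on the puppetserver; a minimal hiera.yaml (version 5) fragment, with illustrative key and data paths, would be:

```yaml
# hiera.yaml (version 5) fragment — requires the hiera-eyaml gem on the
# puppetserver; the key and data paths below are illustrative
hierarchy:
  - name: "Encrypted secrets (replaces the private data overlay)"
    lookup_key: eyaml_lookup_key
    paths:
      - "secrets/%{trusted.certname}.eyaml"
      - "secrets/common.eyaml"
    options:
      pkcs7_private_key: /etc/puppet/keys/private_key.pkcs7.pem
      pkcs7_public_key: /etc/puppet/keys/public_key.pkcs7.pem
```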
To start picking at this task, I've done the following (roughly the commands sketched below):

- dumped the puppetdb postgres database
- installed puppetdb from bookworm in a chroot
- restored the puppetdb database into the chroot
- started puppetdb
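In shell terms, something like this, with package names as in Debian bookworm and an arbitrary chroot location:

```sh
# On the current server: dump the puppetdb database (custom format)
sudo -u postgres pg_dump -Fc puppetdb > /tmp/puppetdb.dump

# Build a bookworm chroot and install the packaged puppetdb + postgres
debootstrap bookworm /srv/puppetdb-test http://deb.debian.org/debian
chroot /srv/puppetdb-test apt-get update
chroot /srv/puppetdb-test apt-get install -y puppetdb postgresql

# Inside the chroot, with postgres started: recreate role + db, restore
sudo -u postgres createuser puppetdb
sudo -u postgres createdb -O puppetdb puppetdb
sudo -u postgres pg_restore -d puppetdb /tmp/puppetdb.dump

# Start puppetdb and watch the schema migrations run in the log
service puppetdb start
tail -f /var/log/puppetdb/puppetdb.log
```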
The postgres role is now split into separate reader and writer roles, and the db dump didn't carry permissions for the reader user, so a privilege grant was needed for the service to start up after all the schema migrations completed. It seems to run without a problem after the migration.
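The grant was something along these lines; the read-only role name here is illustrative and should match whatever the puppetdb config expects:

```sh
# Grant the read-only puppetdb role access to the restored tables
# (role name is illustrative)
sudo -u postgres psql -d puppetdb <<'EOF'
GRANT USAGE ON SCHEMA public TO puppetdb_read;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO puppetdb_read;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO puppetdb_read;
EOF
```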
I'm pretty sure this is ready to be set up for real.
I'm not sure if we want one VM for puppetdb and one VM for the puppet server, or if we want to keep both on the same VM. I don't think it really matters in practice?