For the puppet part, the current staging configuration needs some adaptations, as it installs postgresql versions 11 and 13. Another point is that the different clusters are not managed by puppet, but that is also the case in production.
At a minimum, the configuration needs to be adapted to install only version 12, so we run the same version as the production environment (except for the indexer, which still uses version 11).
We will dig with @ardumont into whether it's also possible to manage the 3 psql clusters via puppet in a reasonable amount of time.
The puppetlabs-postgresql module doesn't allow managing several postgresql clusters. We made the tradeoff of using only one cluster on db1 for now, so that db1 can be deployed via puppet, as that is the priority. The module will be extended or replaced by something else later.
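As a quick sanity check on the node after the puppet run (not part of the manifests themselves), something like this, assuming postgresql-common's pg_lsclusters is available, shows which versions/clusters actually ended up installed:

```
# List the installed postgresql clusters; only a version 12 entry should show
# up on db1 (no 11 or 13 cluster). pg_lsclusters comes from postgresql-common.
pg_lsclusters

# Double-check the server version actually answering on the default port.
sudo -u postgres psql -At -c 'show server_version;'
```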
Road so far, after the puppet run which creates the dbs with the guest, postgres and swh-admin users.
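A rough way to verify that the run created what we expect (roles and databases), run as the postgres superuser on the node:

```
# List the roles (guest, postgres, swh-admin should be there) and the
# databases with their owners, as created by the puppet run.
sudo -u postgres psql -c '\du'
sudo -u postgres psql -c '\l'
```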
As tryouts, from the staging-db1 node (and partly on the actual db1, which we'll finish during the day):
```
$ cat init-db.sh
#!/usr/bin/env bash

set -x

export PGUSER=postgres
# export PGHOST=$(ip a | grep eth1 | grep inet | awk '{print $2}' | cut -d'/' -f1)
export PGHOST=db1.internal.staging.swh.network
export PGPORT=5432
# export PGPASSWORD=swh::deploy::db::postgres::password
export PGPASSWORD=<insert-postgres-pass-here>

MODULE=${1}

if [ "${MODULE}" == "storage" ]; then
    DBNAME="swh"
else
    DBNAME="swh-${MODULE}"
fi

# Connect as super-admin to init the extensions
swh db init-admin --db-name $DBNAME $MODULE

# Connect as owner of the db to init the schema
export PGUSER=$DBNAME
export PGPASSWORD=${2}
swh db init --db-name $DBNAME $MODULE

# Connect as owner to grant read access to schema to guest user
psql -c "grant select on all tables in schema public to guest;"
```
The actual initialization calls to bootstrap the different schemas are below. The second parameter is the password; for the vagrant node it comes from the censored private data repository, and for the actual production it will be adapted accordingly:
```
$ ./init-db.sh scheduler swh::deploy::scheduler::db::password
DONE database for scheduler initialized at version 17
$ ./init-db.sh indexer swh::deploy::indexer::storage::db::password
DONE database for indexer initialized at version 132
$ ./init-db.sh vault swh::deploy::vault::db::password
DONE database for vault initialized at version 1
$ ./init-db.sh storage swh::deploy::storage::db::password
DONE database for storage initialized (flavor default) at version 163
```
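To cross-check those version numbers straight from the databases afterwards, a small sketch (assuming the standard swh dbversion table is present in each database, and reusing the connection variables from init-db.sh):

```
# Report the schema version recorded in each freshly initialized database
# (expected from the output above: 17, 132, 1 and 163 respectively).
export PGHOST=db1.internal.staging.swh.network PGPORT=5432
export PGUSER=postgres PGPASSWORD=<insert-postgres-pass-here>
for db in swh-scheduler swh-indexer swh-vault swh; do
    printf '%s: ' "$db"
    psql -d "$db" -At -c 'select max(version) from dbversion;'
done
```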
Note:
- swh db create is used for now; the code on the node is patched to comment out the sql create database call to prevent breakage (D4374).
- the swh-admin superuser is used to allow the privileged installation setup (infra/puppet/puppet-swh-site!243).
Staging nodes are now using storage1 (and db1).
So db0 and storage0 can be decommissioned (and thus orsay \o/).
We triggered some loading (swh loader run git ...); some are still running btw (linux, nixpkgs, guix, ...).
Icinga checks have been fixed a bit... (firewall rule update, ...)
We still need to exercise some other services (save code now, cooking tasks, triggering listings, ...).
Following the diff infra/puppet/puppet-swh-site!244, the zfs datasets were reconfigured to be mounted on /srv/softwareheritage/postgres/*:
```
systemctl stop postgresql@12-main
zfs set mountpoint=none data/postgres-indexer-12
zfs set mountpoint=none data/postgres-secondary-12
zfs set mountpoint=none data/postgres-main-12
zfs set mountpoint=none data/postgres-misc
zfs set mountpoint=/srv/softwareheritage/postgres data/postgres-misc
zfs set mountpoint=/srv/softwareheritage/postgres/12/indexer data/postgres-indexer-12
zfs set mountpoint=/srv/softwareheritage/postgres/12/secondary data/postgres-secondary-12
zfs set mountpoint=/srv/softwareheritage/postgres/12/main data/postgres-main-12
systemctl start postgresql@12-main
```
It must be done in this order because there is a hierarchical mount relationship between postgres/ and postgres/*: the parent dataset has to be mounted before its children.
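To confirm the resulting layout after the remount (a simple check, nothing assumed beyond the dataset names above):

```
# The parent dataset (data/postgres-misc) must end up on /srv/softwareheritage/postgres
# with its children mounted underneath; then check the cluster restarted fine.
zfs list -r -o name,mountpoint data
systemctl status postgresql@12-main
```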