Refactor the checker stack

David Douard requested to merge douardda/swh-scrubber:rework-cli into master

A checker configuration must now be created before a checker session can be started. This configuration is stored in the database and consists of a triplet:

  (datastore, object_type, nb_partitions)

Once this is done, any number of checkers can be started for this specific checker configuration; each checker process checks partitions one by one, using the status stored in the database to pick the next partition number to check at each iteration.

This makes it possible to dynamically adapt the number of checker processes.

For example, checking the snapshots, splitting the hash space into 4096 partitions and using 4 parallel workers, could look like:

  $ export SWH_CONFIG_FILENAME=config.yml
  $ swh scrubber check init --object-type snapshot --nb-partitions 4096 --name cfg-snp
  Created configuration 3 for checking snapshot in postgresql storage

  $ for i in {1..4}; do (swh scrubber check storage cfg-snp &); done
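
The config.yml file referenced above points the checkers at the scrubber database and at the storage being checked. The exact layout depends on the deployment; a minimal sketch might look like the following (the key names shown are assumptions, not a definitive format):

  $ cat config.yml
  # NOTE: illustrative sketch only; key names and connection strings are
  # assumptions and must be adapted to the actual deployment
  scrubber:
    cls: postgresql
    db: postgresql:///swh-scrubber
  storage:
    cls: postgresql
    db: postgresql:///swh-storage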

This completely changes the way the scrubber is deployed, and should (hopefully) simplify deployment.
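
Since the per-partition progress lives in the database, workers can be added, stopped, or restarted at any time without extra coordination; for instance, reusing the cfg-snp configuration created above:

  $ # start two extra workers against the same configuration; each one picks up
  $ # the next unchecked partition recorded in the database
  $ for i in {1..2}; do (swh scrubber check storage cfg-snp &); done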

To store the configuration, the checked_partition table is split in two, with the "configuration" columns extracted into a new check_config table.

This new table stores the "configuration" for a scrubber. A configuration consists of the triplet:

(datastore, object_type, nb_partitions)
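
For illustration, the stored configurations can then be inspected directly in the scrubber database (the query below is a sketch; the column names are assumptions derived from the triplet above, not the exact schema):

  $ # hypothetical query; column names may differ in the actual schema
  $ psql swh-scrubber -c "SELECT id, datastore, object_type, nb_partitions FROM check_config;"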

This comes with a migration script. WARNING: this script needs to be checked before deployment on a production-sized database, and any activity on the database should be stopped before it is executed.

This is the first step in a series of changes to make the scrubber easier to deploy on elastic infrastructure.

Related to #4695
