Commit d589f3f1 authored by Antoine R. Dumont
docs/cassandra: Document debian upgrade procedure
Refs. swh/infra/sysadm-environment#5556
parent c114082f
Pipeline #12956 passed (Stage: external)

Showing 2 changed files with 143 additions and 0 deletions:

- docs/sysadm/data-silos/cassandra/debian-upgrade.rst (new file, +142, -0)
- docs/sysadm/data-silos/cassandra/index.rst (+1, -0)

docs/sysadm/data-silos/cassandra/debian-upgrade.rst (new file, mode 100644)
.. _upgrade-debian-cassandra-cluster:

Upgrade Procedure for Debian Nodes in a Cassandra Cluster
=========================================================

.. admonition:: Intended audience
   :class: important

   sysadm staff members

Purpose
-------

This page documents the steps to upgrade Debian nodes running in a Cassandra
cluster. The upgrade process involves various commands and checks before and
after rebooting the node.
Prerequisites
-------------

+ Familiarity with SSH and CLI-based command execution
+ Out-of-band access to the node (iDRAC/iLO) for reboot
+ Access to the node through SSH (requires the VPN)

Step 0: Initial Steps
---------------------

Ensure that out-of-band access to the machine works. This definitely helps when
something goes wrong during a reboot (disk order or device names change, network
issues, ...).
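
As a quick sanity check, you can query the chassis power status over IPMI. The
BMC hostname and user below are hypothetical; substitute the node's actual
iDRAC/iLO address and credentials:

.. code-block:: shell

   $ ipmitool -I lanplus -H node-idrac.example.org -U root chassis status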

Step 1: Migrate to the next Debian suite
----------------------------------------

Update the Debian version of the node (e.g. bullseye to bookworm) using the
following command:

.. code::

   root@node:~# /usr/local/bin/migrate-to-${NEXT_CODENAME}.sh

Note: The script should already be present on the machine (installed through
Puppet).
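
As a rough idea of what such a suite migration script typically does, here is a
hypothetical sketch (not the actual Puppet-managed script):

.. code-block:: shell

   #!/bin/bash
   # Hypothetical sketch of a bullseye -> bookworm suite migration.
   set -euo pipefail

   # Point the APT sources at the new suite.
   sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list

   # Refresh the package indexes and upgrade to the new suite.
   apt update
   apt full-upgrade -y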

Step 2: Run Puppet Agent
------------------------

Once the upgrade has completed, run the Puppet agent to apply any necessary
configuration changes (e.g. the /etc/apt/sources.list change):

.. code::

   root@node:~# puppet agent -t
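
To confirm that the APT configuration now points at the new suite, you can grep
for the new codename (bookworm used here as an example):

.. code-block:: shell

   $ grep -rn bookworm /etc/apt/sources.list /etc/apt/sources.list.d/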

Step 3: Stop Puppet Agent
-------------------------

Since we are about to stop the Cassandra service, we don't want the agent to
start it back up:

.. code::

   root@node:~# puppet agent --disable "Ongoing debian upgrade"
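
To double-check that the agent is indeed disabled, the lockfile should now
exist and contain the message above (looking up its path through Puppet's own
configuration):

.. code-block:: shell

   $ cat "$(puppet config print agent_disabled_lockfile)"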

Step 4: Autoremove and Purge
----------------------------

Run autoremove to remove unnecessary packages left over from the migration:

.. code::

   root@node:~# apt autoremove
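
If you want to review what would be removed before committing to it, a
simulated run helps:

.. code-block:: shell

   $ apt autoremove --dry-run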

Step 5: Stop the cassandra service
----------------------------------

The cluster can tolerate one non-responding node, so it's ok to stop the
service. First, drain the node:

.. code-block:: shell

   $ nodetool drain

Look for the '- DRAINED' pattern in the service log to know when it's done:

.. code-block:: shell

   $ journalctl -e -u cassandra@instance1 | grep DRAINED
   Nov 27 14:09:06 cassandra01 cassandra[769383]: INFO [RMI TCP Connection(20949)-192.168.100.181] 2024-11-27 14:09:06,084 StorageService.java:1635 - DRAINED

Then stop the cassandra service:

.. code-block:: shell

   $ systemctl stop cassandra@instance1

In the output of ``nodetool status``, the node whose service is stopped should
be marked as DN (``Down and Normal``):

.. code-block:: shell

   $ nodetool -h cassandra02 status -r | grep DN
   DN  cassandra01.internal.softwareheritage.org  8.63 TiB  16  22.7%  cb0695ee-b7f1-4b31-ba5e-9ed7a068d993  rack1

Step 6: Reboot the Node
-----------------------

We are finally ready to reboot the node, so just do it:

.. code::

   root@node:~# reboot

You can connect to the serial console of the machine to follow the reboot.
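
One way to reach the serial console is IPMI serial-over-LAN, assuming SOL is
enabled on the BMC (hostname and user are hypothetical, as above):

.. code-block:: shell

   $ ipmitool -I lanplus -H node-idrac.example.org -U root sol activate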

Step 7: Clean up some more
--------------------------

Once the machine has restarted, some more cleanup might be necessary:

.. code::

   root@node:~# apt autopurge

Step 8: Activate puppet agent
-----------------------------

Re-enable the Puppet agent and trigger a run. This will start the cassandra
service again:

.. code::

   root@node:~# puppet agent --enable && puppet agent --test
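
Once the service is back up, the node should eventually reappear as UN
(``Up and Normal``) in the cluster status, mirroring the DN check from step 5:

.. code-block:: shell

   $ nodetool -h cassandra02 status -r | grep UN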

Post cluster migration
----------------------

Once all the nodes of the cluster have been migrated:

- Remove the argocd sync window so the cluster is back to its nominal state.
- Re-enable the Rancher etcd snapshots.
- Check the ``holderIdentity`` value in the ``rke2`` and ``rke2-lease`` leases
  and configmaps (as sketched below).
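
A possible way to inspect those leases with kubectl (the ``kube-system``
namespace is an assumption; adjust it to wherever the leases actually live):

.. code-block:: shell

   $ kubectl -n kube-system get lease rke2 rke2-lease \
       -o custom-columns=NAME:.metadata.name,HOLDER:.spec.holderIdentity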
docs/sysadm/data-silos/cassandra/index.rst (+1, -0)

@@ -9,3 +9,4 @@ Cassandra
    .. toctree::
       installation
       upgrade
+      debian-upgrade