[Ceph] Decommission beaubourg as a ceph osd node
beaubourg's disks are showing some signs of tiredness, which is causing global latencies on the ceph cluster (cf #4876 (closed)).
mucem was added recently to replace beaubourg.
changed milestone to %MRO 2023
added activity::Deployment priority:High proxmox labels
marked this issue as related to #4876 (closed)
mentioned in issue #4924 (closed)
mentioned in issue #4925 (closed)
assigned to @olasd
I've started working on this, by marking the OSDs as "out" (one by one) in the ceph admin interface in proxmox.
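The same step can also be done from the CLI; a rough sketch of the one-by-one approach (the OSD IDs below are hypothetical placeholders, not beaubourg's actual IDs):

```shell
#!/bin/sh
# Mark each of beaubourg's OSDs "out" one at a time, waiting for the
# cluster to finish rebalancing before touching the next one.
# OSD IDs 0-3 are hypothetical placeholders.
for id in 0 1 2 3; do
    ceph osd out "$id"
    # Wait until recovery finishes and the cluster is healthy again
    while ! ceph health | grep -q HEALTH_OK; do
        sleep 60
    done
done
```

Going one OSD at a time keeps only a single disk's worth of data in flight, limiting the recovery load on an already-strained cluster.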
All OSDs that were hosted on beaubourg are now out of the ceph cluster. They still need to be fully decommissioned.
While the decommissioning was ongoing, hypervisor3 started swapping heavily, onto a swap partition on its fairly slow boot SSD. I've disabled that now.
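Disabling the swap partition amounts to something like this (a sketch; the exact fstab entry on hypervisor3 is assumed to be a standard swap line):

```shell
#!/bin/sh
# Stop using all swap devices immediately
swapoff -a
# Comment out the swap entry in /etc/fstab so it stays disabled
# across reboots (keeping a backup of the original file)
sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab
```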
I've also decommissioned the ceph-mon, ceph-mgr and ceph-mds from beaubourg (leaving three nodes for each, on mucem, branly and hypervisor3).
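On a Proxmox node this can be done with `pveceph`; a sketch, assuming the daemon identifiers match the hostname (which is the Proxmox default but is an assumption here):

```shell
#!/bin/sh
# Run on beaubourg itself: remove its monitor, manager and
# metadata-server daemons, leaving three of each on the remaining
# nodes (mucem, branly, hypervisor3).
pveceph mds destroy beaubourg   # MDS name assumed to match the hostname
pveceph mgr destroy beaubourg
pveceph mon destroy beaubourg
```

Keeping three ceph-mons preserves quorum tolerance for one node failure, which is why this step waits until mucem is in place.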
After removing the OSDs and clearing the disks, beaubourg is now fully decommissioned as a ceph host.
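For reference, the final OSD removal and disk clearing look roughly like this (OSD IDs and the device path are hypothetical):

```shell
#!/bin/sh
# For each OSD that was hosted on beaubourg (IDs are placeholders):
for id in 0 1 2 3; do
    # Confirm no data would become unavailable by destroying this OSD
    ceph osd safe-to-destroy "osd.$id"
    # Remove the OSD from the CRUSH map, auth database and OSD map
    ceph osd purge "$id" --yes-i-really-mean-it
done
# Wipe the LVM metadata and data from the backing disk
# (device path is a placeholder)
ceph-volume lvm zap /dev/sdb --destroy
```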
closed