Use all the disks on esnode2 and esnode3

Currently, only 3 of the 4 disks are used, and the partition formerly used by Kafka is lost (~2 TB). Using ZFS with the same partitioning scheme on each disk [1] works (tested on esnode1). It can be applied on esnode[2-3] to get a homogeneous configuration and add ~1.8 TB per node.

  • [1]:
root@esnode1:~# fdisk -l /dev/sda
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: HGST HUS726020AL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 543964DA-9ECA-4222-952D-BA8A90FAB2B9

Device         Start        End    Sectors  Size Type
/dev/sda1       2048    1050623    1048576  512M EFI System
/dev/sda2    1050624   79175679   78125056 37.3G Linux filesystem
/dev/sda3   79175680  141676543   62500864 29.8G Linux swap
/dev/sda4  141676544 3907028991 3765352448  1.8T Linux filesystem
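For reference, this layout could be replicated onto the other disks of a node with sgdisk, assuming the target disks are identical to /dev/sda and hold no data to keep (the device names below are examples):

# Copy the GPT layout of /dev/sda onto /dev/sdb, then give the copy fresh GUIDs
sgdisk --replicate=/dev/sdb /dev/sda
sgdisk --randomize-guids /dev/sdb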

Procedure for each node:

  • Disable shard allocation on the node to gently remove it from the cluster (see the first sketch after this list)
  • Wait until the node is emptied
  • Stop the node
  • Install ZFS and create the pool (second sketch below)
  • Restart the node
  • Re-enable shard allocation on the node (third sketch below)
  • Wait for the cluster to rebalance
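One possible way to do the first two steps is Elasticsearch allocation filtering; the cluster URL and node name below are examples to adapt to the actual cluster:

# Exclude the node so its shards are relocated to the rest of the cluster
curl -XPUT http://localhost:9200/_cluster/settings \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"cluster.routing.allocation.exclude._name": "esnode2"}}'

# The node is emptied once it no longer appears in the shard list
curl -s http://localhost:9200/_cat/shards | grep esnode2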
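A minimal sketch of the ZFS step on Debian; the pool name, mountpoint and vdev list are assumptions to adjust to what was tested on esnode1:

apt install zfsutils-linux
# One pool striped over the big data partition of each of the 4 disks
zpool create -o ashift=12 esdata /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
zfs create -o mountpoint=/srv/elasticsearch esdata/elasticsearch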
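For the last two steps, removing the exclusion lets the cluster rebalance onto the node again (same assumptions as above):

# Drop the exclusion set before the maintenance
curl -XPUT http://localhost:9200/_cluster/settings \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"cluster.routing.allocation.exclude._name": null}}'

# Rebalancing is done when the cluster is green again
curl -s http://localhost:9200/_cat/health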

Migrated from T2958 (view on Phabricator)
