Upgrade rke2 to v1.28.10+rke2r1
Rancher has been upgraded, so we can now update the Kubernetes distribution (RKE2).
The target version is v1.28.10+rke2r1. The clusters to upgrade are listed below; a version-check sketch follows the checklist.
- test-staging-rke2
- archive-staging-rke2
- cluster-admin-rke2
- archive-production-rke2
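Before starting, a quick check of each cluster's current server version (a minimal sketch, assuming the `kubectx` context naming used throughout this issue and `jq` being available):

```
ᐅ for context in $(kubectx | awk '/rke2/'); do
    printf '%-30s ' "$context"
    # serverVersion.gitVersion is the running Kubernetes/RKE2 version
    kubectl --context "$context" version -o json 2>/dev/null \
      | jq -r '.serverVersion.gitVersion'
  done
```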
Activity
- Guillaume Samson added kubernetes label
- Guillaume Samson assigned to @guillaume
- Guillaume Samson (Author, Owner)
Checking for deprecated APIs in rke2 clusters with kubent.
```
ᐅ for context in $(kubectx | awk '/rke2/'); do
    awk '{print "\n## " toupper($0)}' <<< "$context"
    kubent --context "$context" --target-version 1.28.10
  done

## ARCHIVE-PRODUCTION-RKE2
2:41PM INF >>> Kube No Trouble `kubent` <<<
2:41PM INF version 0.7.2 (git sha 25eb8a3757d1db39a04e94bb97a3f099fb5c9fb6)
2:41PM INF Initializing collectors and retrieving data
2:41PM INF Target K8s version is 1.28.10
2:41PM INF Retrieved 434 resources from collector name=Cluster
2:42PM INF Retrieved 270 resources from collector name="Helm v3"
2:42PM INF Loaded ruleset name=custom.rego.tmpl
2:42PM INF Loaded ruleset name=deprecated-1-16.rego
2:42PM INF Loaded ruleset name=deprecated-1-22.rego
2:42PM INF Loaded ruleset name=deprecated-1-25.rego
2:42PM INF Loaded ruleset name=deprecated-1-26.rego
2:42PM INF Loaded ruleset name=deprecated-1-27.rego
2:42PM INF Loaded ruleset name=deprecated-1-29.rego
2:42PM INF Loaded ruleset name=deprecated-future.rego

## ARCHIVE-STAGING-RKE2
2:42PM INF >>> Kube No Trouble `kubent` <<<
2:42PM INF version 0.7.2 (git sha 25eb8a3757d1db39a04e94bb97a3f099fb5c9fb6)
2:42PM INF Initializing collectors and retrieving data
2:42PM INF Target K8s version is 1.28.10
2:42PM INF Retrieved 531 resources from collector name=Cluster
2:42PM INF Retrieved 273 resources from collector name="Helm v3"
2:42PM INF Loaded ruleset name=custom.rego.tmpl
2:42PM INF Loaded ruleset name=deprecated-1-16.rego
2:42PM INF Loaded ruleset name=deprecated-1-22.rego
2:42PM INF Loaded ruleset name=deprecated-1-25.rego
2:42PM INF Loaded ruleset name=deprecated-1-26.rego
2:42PM INF Loaded ruleset name=deprecated-1-27.rego
2:42PM INF Loaded ruleset name=deprecated-1-29.rego
2:42PM INF Loaded ruleset name=deprecated-future.rego

## CLUSTER-ADMIN-RKE2
2:42PM INF >>> Kube No Trouble `kubent` <<<
2:42PM INF version 0.7.2 (git sha 25eb8a3757d1db39a04e94bb97a3f099fb5c9fb6)
2:42PM INF Initializing collectors and retrieving data
2:42PM INF Target K8s version is 1.28.10
2:42PM INF Retrieved 199 resources from collector name=Cluster
2:42PM INF Retrieved 342 resources from collector name="Helm v3"
2:42PM INF Loaded ruleset name=custom.rego.tmpl
2:42PM INF Loaded ruleset name=deprecated-1-16.rego
2:42PM INF Loaded ruleset name=deprecated-1-22.rego
2:42PM INF Loaded ruleset name=deprecated-1-25.rego
2:42PM INF Loaded ruleset name=deprecated-1-26.rego
2:42PM INF Loaded ruleset name=deprecated-1-27.rego
2:42PM INF Loaded ruleset name=deprecated-1-29.rego
2:42PM INF Loaded ruleset name=deprecated-future.rego

## TEST-STAGING-RKE2
2:42PM INF >>> Kube No Trouble `kubent` <<<
2:42PM INF version 0.7.2 (git sha 25eb8a3757d1db39a04e94bb97a3f099fb5c9fb6)
2:42PM INF Initializing collectors and retrieving data
2:42PM INF Target K8s version is 1.28.10
2:42PM INF Retrieved 183 resources from collector name=Cluster
2:42PM INF Retrieved 288 resources from collector name="Helm v3"
2:42PM INF Loaded ruleset name=custom.rego.tmpl
2:42PM INF Loaded ruleset name=deprecated-1-16.rego
2:42PM INF Loaded ruleset name=deprecated-1-22.rego
2:42PM INF Loaded ruleset name=deprecated-1-25.rego
2:42PM INF Loaded ruleset name=deprecated-1-26.rego
2:42PM INF Loaded ruleset name=deprecated-1-27.rego
2:42PM INF Loaded ruleset name=deprecated-1-29.rego
2:42PM INF Loaded ruleset name=deprecated-future.rego
```
- Guillaume Samson (Author, Owner)
In v1.27 and v1.28 there is only one deprecation, `CSIStorageCapacity`.
We already use the right version, `storage.k8s.io/v1`:

```
ᐅ for context in $(kubectx | awk '/rke2/'); do
    awk '{print "\n## " toupper($0)}' <<< "$context"
    kubectl --context "$context" api-resources | \
      awk 'BEGIN{format="%-25s %-25s %-15s\n"} /CSIStorageCapacity/ {printf format, $1, $2, $4}'
  done

## ARCHIVE-PRODUCTION-RKE2
csistoragecapacities      storage.k8s.io/v1         CSIStorageCapacity

## ARCHIVE-STAGING-RKE2
csistoragecapacities      storage.k8s.io/v1         CSIStorageCapacity

## CLUSTER-ADMIN-RKE2
csistoragecapacities      storage.k8s.io/v1         CSIStorageCapacity

## TEST-STAGING-RKE2
csistoragecapacities      storage.k8s.io/v1         CSIStorageCapacity
```
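As an extra check, each API server can be asked directly whether the deprecated `storage.k8s.io/v1beta1` group version is still served; a sketch, relying on the fact that `kubectl get --raw` fails when a group version does not exist:

```
ᐅ for context in $(kubectx | awk '/rke2/'); do
    # The raw discovery endpoint returns an error when v1beta1 is not served
    if kubectl --context "$context" get --raw /apis/storage.k8s.io/v1beta1 >/dev/null 2>&1; then
      echo "$context: storage.k8s.io/v1beta1 still served"
    else
      echo "$context: storage.k8s.io/v1beta1 not served"
    fi
  done
```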
- Guillaume Samson (Author, Owner)
RKE2 is upgraded on the `test-staging-rke2` cluster:

```
ᐅ kubectl --context test-staging-rke2 get nodes
NAME                             STATUS   ROLES                       AGE    VERSION
rancher-node-test-rke2-mgmt1     Ready    control-plane,etcd,master   292d   v1.28.10+rke2r1
rancher-node-test-rke2-worker1   Ready    worker                      292d   v1.28.10+rke2r1
rancher-node-test-rke2-worker2   Ready    worker                      292d   v1.28.10+rke2r1
rancher-node-test-rke2-worker3   Ready    worker                      75d    v1.28.10+rke2r1
```
Checking pod creation:
```
ᐅ kubectl --context test-staging-rke2 run deb-test -ti --rm --image=debian:latest --restart=Never -n swh
If you don't see a command prompt, try pressing enter.
root@deb-test:/#
```

```
ᐅ kubectl --context test-staging-rke2 get pods -n swh
NAME       READY   STATUS    RESTARTS   AGE
deb-test   1/1     Running   0          30s
```
Snapshots created between each upgrade:

```
ᐅ kubectl --context local -n fleet-default get etcdsnapshots | \
    awk 'NR == 1 || /test-staging-rke2-on-demand/'
NAME                                                              AGE
test-staging-rke2-on-demand-rancher-node-test-rke2-mgmt1-8e1f9b   176m
test-staging-rke2-on-demand-rancher-node-test-rke2-mgmt1-b476d5   75m
test-staging-rke2-on-demand-rancher-node-test-rke2-mgmt1-f55462   134m
```
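The on-demand snapshots above were triggered through Rancher (by bumping `rke_config.etcd_snapshot_create.generation`, visible in the Terraform plan below); for reference, an equivalent snapshot can also be taken directly on a control-plane node. A sketch, assuming SSH access to the node; the snapshot name is arbitrary:

```
# Run on a control-plane/etcd node of the cluster being upgraded
ᐅ sudo rke2 etcd-snapshot save --name pre-upgrade
```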
- Guillaume Samson (Author, Owner) · Resolved by Guillaume Samson
When I wanted to update the Terraform resource, I got an unexpected update:

```
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # rancher2_cluster_v2.test-staging-rke2 will be updated in-place
  ~ resource "rancher2_cluster_v2" "test-staging-rke2" {
        id   = "fleet-default/test-staging-rke2"
        name = "test-staging-rke2"
        # (9 unchanged attributes hidden)

      ~ rke_config {
          ~ chart_values = <<-EOT
                rke2-calico: {}
              + rke2-coredns:
              +   autoscaler:
              +     coresPerReplica: 64
              +     max: 2
              +     preventSinglePointFailure: true
              +   resources:
              +     limits:
              +       cpu: 8
              +     requests:
              +       cpu: 500m
              +       memory: 128Mi
            EOT
            # (2 unchanged attributes hidden)

          - etcd_snapshot_create {
              - generation = 3 -> null
            }

          ~ machine_selector_config {
              + config = {}
            }

            # (3 unchanged blocks hidden)
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
```
So I'm not committing the `kubernetes_version` upgrade yet:

```
diff --git a/proxmox/terraform/staging/cluster-test-staging-rke2.tf b/proxmox/terraform/staging/cluster-test-staging-rke2.tf
index 2a1036c..b5ea0f7 100644
--- a/proxmox/terraform/staging/cluster-test-staging-rke2.tf
+++ b/proxmox/terraform/staging/cluster-test-staging-rke2.tf
@@ -1,6 +1,6 @@
 resource "rancher2_cluster_v2" "test-staging-rke2" {
   name               = "test-staging-rke2"
-  kubernetes_version = "v1.26.13+rke2r1"
+  kubernetes_version = "v1.28.10+rke2r1"
   rke_config {
     upgrade_strategy {
       worker_drain_options {
```
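To review the full diff (including the unrelated `chart_values` drift) before applying only what is intended, saving the plan to a file is a safe workflow; a minimal sketch, the plan file name is arbitrary:

```
ᐅ terraform plan -out=rke2-upgrade.tfplan   # record exactly what will change
ᐅ terraform show rke2-upgrade.tfplan        # review, including unrelated drift
ᐅ terraform apply rke2-upgrade.tfplan       # apply exactly the reviewed plan
```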
- Guillaume Samson mentioned in commit swh-sysadmin-provisioning@758e36e2
- Guillaume Samson marked the checklist item test-staging-rke2 as completed
- Guillaume Samson mentioned in commit swh-sysadmin-provisioning@36f53c47
- Guillaume Samson mentioned in commit swh-sysadmin-provisioning@f533cdd5
- Guillaume Samson (Author, Owner)
RKE2 is upgraded on the `archive-staging-rke2` cluster:

```
ᐅ kubectl --context archive-staging-rke2 get nodes
NAME                                STATUS   ROLES                       AGE    VERSION
db1                                 Ready    worker                      104d   v1.28.10+rke2r1
rancher-node-staging-rke2-metal01   Ready    worker                      106d   v1.28.10+rke2r1
rancher-node-staging-rke2-mgmt1     Ready    control-plane,etcd,master   524d   v1.28.10+rke2r1
rancher-node-staging-rke2-mgmt2     Ready    control-plane,etcd,master   97d    v1.28.10+rke2r1
rancher-node-staging-rke2-mgmt3     Ready    control-plane,etcd,master   97d    v1.28.10+rke2r1
rancher-node-staging-rke2-worker1   Ready    worker                      520d   v1.28.10+rke2r1
rancher-node-staging-rke2-worker2   Ready    worker                      523d   v1.28.10+rke2r1
rancher-node-staging-rke2-worker3   Ready    worker                      523d   v1.28.10+rke2r1
rancher-node-staging-rke2-worker4   Ready    worker                      520d   v1.28.10+rke2r1
rancher-node-staging-rke2-worker5   Ready    worker                      391d   v1.28.10+rke2r1
rancher-node-staging-rke2-worker6   Ready    worker                      248d   v1.28.10+rke2r1
storage1                            Ready    worker                      203d   v1.28.10+rke2r1
```
Snapshots created between each upgrade (except before the v1.28.10+rke2r1 upgrade):

```
ᐅ kubectl --context local -n fleet-default get etcdsnapshots | \
    awk 'NR == 1 || /archive-staging-rke2-on-demand/'
NAME                                                              AGE
archive-staging-rke2-on-demand-rancher-node-staging-rke2-0216b5   88m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-06e46f   80m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-073561   122m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-12ca4a   4m2s
archive-staging-rke2-on-demand-rancher-node-staging-rke2-2613df   9m40s
archive-staging-rke2-on-demand-rancher-node-staging-rke2-345f21   80m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-38dd7c   80m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-45203e   4m2s
archive-staging-rke2-on-demand-rancher-node-staging-rke2-664e49   9m39s
archive-staging-rke2-on-demand-rancher-node-staging-rke2-69e86e   80m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-6d9425   53m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-6eecf7   53m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-7ae69d   54m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-80494b   9m40s
archive-staging-rke2-on-demand-rancher-node-staging-rke2-904f53   80m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-91b0dd   54m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-95b401   122m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-95cb7e   121m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-ab764f   88m
archive-staging-rke2-on-demand-rancher-node-staging-rke2-c968cb   4m2s
archive-staging-rke2-on-demand-rancher-node-staging-rke2-fe19dd   88m
```
I removed the old archive-staging-rke2-on-demand snapshots (> 300 days old) before the v1.28.10 upgrade and snapshot creation failed, but after the upgrade snapshot creation works fine.
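For the record, a sketch of that kind of cleanup on the Rancher (`local`) cluster, assuming the name filter alone selects the old on-demand snapshots (the age filter is not shown here):

```
ᐅ kubectl --context local -n fleet-default get etcdsnapshots -o name \
    | grep 'archive-staging-rke2-on-demand' \
    | xargs -r kubectl --context local -n fleet-default delete
```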
When I updated the Terraform resource, I again got an unexpected update:

```
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # rancher2_cluster_v2.archive-staging-rke2 will be updated in-place
  ~ resource "rancher2_cluster_v2" "archive-staging-rke2" {
        id   = "fleet-default/archive-staging-rke2"
        name = "archive-staging-rke2"
        # (9 unchanged attributes hidden)

      ~ rke_config {
          ~ chart_values = <<-EOT
                rke2-calico: {}
              + rke2-coredns:
              +   autoscaler:
              +     coresPerReplica: 64
              +     max: 2
              +     preventSinglePointFailure: true
              +   resources:
              +     limits:
              +       cpu: 8
              +     requests:
              +       cpu: 500m
              +       memory: 128Mi
            EOT
            # (2 unchanged attributes hidden)

            # (5 unchanged blocks hidden)
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
```
- Guillaume Samson marked the checklist item archive-staging-rke2 as completed
- Guillaume Samson (Author, Owner)
On the `cluster-admin-rke2` cluster, all the new snapshots have a 0 MB size. There's no metadata in the snapshot:

```
ᐅ kb --context local -n fleet-default describe etcdsnapshots cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-271eb8
Name:         cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-271eb8
Namespace:    fleet-default
Labels:       rke.cattle.io/cluster-name=cluster-admin-rke2
              rke.cattle.io/machine-id=5f0fc5e0227db835b1fdc33aa8b271bb3665e5d73fb2987dc97e7e8e034c5fd
Annotations:  etcdsnapshot.rke.io/snapshot-file-name: etcd-snapshot-rancher-node-admin-rke2-mgmt2-1718236803
              etcdsnapshot.rke.io/storage: local
API Version:  rke.cattle.io/v1
Kind:         ETCDSnapshot
Metadata:
  Creation Timestamp:  2024-06-14T09:39:54Z
  Generation:          1
  Owner References:
    API Version:           cluster.x-k8s.io/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Machine
    Name:                  custom-4db0e8632fc0
    UID:                   40edd50e-ec68-4216-b020-edd819068a2f
  Resource Version:        690886359
  UID:                     33723ff2-b6d6-419a-bd74-a8d6c569d052
Snapshot File:
  Location:   file:///var/lib/rancher/rke2/server/db/snapshots/etcd-snapshot-rancher-node-admin-rke2-mgmt2-1718236803
  Name:       etcd-snapshot-rancher-node-admin-rke2-mgmt2-1718236803
  Node Name:  rancher-node-admin-rke2-mgmt2
Spec:
  Cluster Name:  cluster-admin-rke2
Status:
  Missing:  true
Events:     <none>
```
The old ones:
```
ᐅ kb --context local -n fleet-default describe etcdsnapshots cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-81b60e
Name:         cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-81b60e
Namespace:    fleet-default
Labels:       rke.cattle.io/cluster-name=cluster-admin-rke2
              rke.cattle.io/machine-id=3e5113747c46317aaac3329d9a7f4f6d16c2d5fe1bb5c163b80cb4316da666b
              rke.cattle.io/node-name=rancher-node-admin-rke2-mgmt1
Annotations:  etcdsnapshot.rke.io/snapshot-file-name: etcd-snapshot-rancher-node-admin-rke2-mgmt1-1709899204
              etcdsnapshot.rke.io/snapshotbackpopulate-reconciled: true
              etcdsnapshot.rke.io/storage: local
API Version:  rke.cattle.io/v1
Kind:         ETCDSnapshot
Metadata:
  Creation Timestamp:  2024-03-08T12:07:54Z
  Generation:          2
  Owner References:
    API Version:           cluster.x-k8s.io/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Machine
    Name:                  custom-c1dfaf8b895c
    UID:                   af81ee54-ece7-471c-a426-220f0ac31ecf
  Resource Version:        593899822
  UID:                     08661ad4-f0a4-4ab3-a665-b72c844d370f
Snapshot File:
  Created At:  2024-03-08T12:00:04Z
  Location:    file:///var/lib/rancher/rke2/server/db/snapshots/etcd-snapshot-rancher-node-admin-rke2-mgmt1-1709899204
  Metadata:    eyJwcm92aXNpb25pbmctY2x1c3Rlci1zcGVjIjoiSDRzSUFBQUFBQUFBLzZ5VFQyL2JNQXpGdjR1dXM1MC94ZjdBUUE5RGsyMm5MVUNIN2pEMHdNcU1SRmdXUFpwdWdnVDU3b05qeDcxNGs1dGJ3dmY4RTBrOUhremRQYUJFVkV4M0tJazRtdEk4TG9ybHAySngzVUZxWE1yQ1pFWnF2T0c0SldmS2crbGFKMURoclFvb3VuMWZzaHhWT0d3Q1JGd0pVUHpWS25GTXZZWVJIZ0pXcHR4Q1NKaVpMWXZGNlIrNXlJSXJ3SWJqTFdveVpleENPTmZYSWl4cE1sY1lVSEhkdExwZmtheEE0Vm1pMUIremZpU3JweW5HdWhPd3VFRWhya3c1ejR4U2c5enA2WGVxcWYwRHBOOVlWaWZ5NzBHOFJjdXhTaWRQSzhOQVA1anJxYm1Xazc2dUhqT3pZNmxSM2p1L1NvZHZtSCt3djNuOHEvbkZMK0NZR2V0QjlBNUNoNU8xQWVzcDR2ZkFEeENlUTJRam1kSllDR1RaVFAyYjhtK2Z0V1ZPMFFtbWxFZEg4Y25jWjZlY0J0UWN4UFdtUEtjR0hPYk81cDZjejlVTEpzK2h1djQ4TjlsTE9mRHVoZnB4VUJ0NHlsdXUwdlZpTVRmM3g4d0lPa29xaEtjbmEyaTg5eUhaUUJFbEh5MzdJdkZXZHlEb1VVakJZY0hpaG9ldVdxYW9mWDlldFUzbGJGYXhyVkZ5QzlaalFWRlJJb1FDcW9aaWtYYStpS2g5YW1acDUyZVB5MU1yd3ljRjhYdVpFK2hNZHQ1ZWdqdGl6dFIvSGV3dlFCMHhaK3AwMi9XWGRBSDZLOXh3U3AvWU1aNGI1ckRDTFhTaFg4RkRyd1MyRUc1Q2x4VGxhNmQrUFIxL09HYmpWdjhjOEJzT1pQZmowaDMvQXdBQS8vOEJBQUQvLzZmUmoxOVZCUUFBIn0=
  Name:        etcd-snapshot-rancher-node-admin-rke2-mgmt1-1709899204
  Node Name:   rancher-node-admin-rke2-mgmt1
  Size:        82358304
  Status:      successful
Spec:
  Cluster Name:  cluster-admin-rke2
Events:          <none>
```
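The `Metadata` field is base64-encoded JSON whose `provisioning-cluster-spec` value is itself base64-encoded gzip. A sketch to decode it; the `.snapshotFile.metadata` JSON path is my assumption about the ETCDSnapshot schema, inferred from the describe output above:

```
ᐅ kubectl --context local -n fleet-default get etcdsnapshot \
    cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-81b60e \
    -o jsonpath='{.snapshotFile.metadata}' \
  | base64 -d \
  | jq -r '."provisioning-cluster-spec"' \
  | base64 -d | gunzip   # the decompressed payload is the recorded cluster spec
```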
- Guillaume Samson (Author, Owner)
I stopped the RKE2 upgrades on the `cluster-admin-rke2` cluster because I'm not sure the current snapshots are usable.
The current RKE2 version is `v1.26.15+rke2r1`.

The snapshot reconciliation is crash-looping in the Rancher logs:

```
2024/06/14 14:10:42 [INFO] [plansecret] Deleting etcd snapshot fleet-default/cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-936409
2024/06/14 14:10:42 [INFO] [plansecret] Deleting etcd snapshot fleet-default/cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-e06db4
2024/06/14 14:10:42 [INFO] [plansecret] Deleting etcd snapshot fleet-default/cluster-admin-rke2-on-demand-rancher-node-admin-rke2-mgmt-2167e
2024/06/14 14:10:42 [INFO] [plansecret] Deleting etcd snapshot fleet-default/cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-0afbc7
2024/06/14 14:10:42 [INFO] [plansecret] Deleting etcd snapshot fleet-default/cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-aa95a5
2024/06/14 14:10:42 [INFO] [plansecret] Deleting etcd snapshot fleet-default/cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-bfc6e7
2024/06/14 14:10:42 [INFO] [plansecret] Deleting etcd snapshot fleet-default/cluster-admin-rke2-on-demand-rancher-node-admin-rke2-mgmt-28e9f
2024/06/14 14:10:42 [INFO] [snapshotbackpopulate] rkecluster fleet-default/cluster-admin-rke2: processing configmap kube-system/rke2-etcd-snapshots
2024/06/14 14:10:42 [ERROR] error syncing 'kube-system/rke2-etcd-snapshots': handler snapshotbackpopulate: rkecluster fleet-default/cluster-admin-rke2: error while setting status missing=true on etcd snapshot /: Operation cannot be fulfilled on etcdsnapshots.rke.cattle.io "cluster-admin-rke2-on-demand-rancher-node-admin-rke2-mgmt-2167e": StorageError: invalid object, Code: 4, Key: /registry/rke.cattle.io/etcdsnapshots/fleet-default/cluster-admin-rke2-on-demand-rancher-node-admin-rke2-mgmt-2167e, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: dc8d6f14-334e-4e8a-ab50-e5aff110318b, UID in object meta: , requeuing
2024/06/14 14:10:42 [INFO] [snapshotbackpopulate] rkecluster fleet-default/cluster-admin-rke2: processing configmap kube-system/rke2-etcd-snapshots
2024/06/14 14:10:42 [ERROR] error syncing 'kube-system/rke2-etcd-snapshots': handler snapshotbackpopulate: rkecluster fleet-default/cluster-admin-rke2: error while setting status missing=true on etcd snapshot /: Operation cannot be fulfilled on etcdsnapshots.rke.cattle.io "cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-bfc6e7": StorageError: invalid object, Code: 4, Key: /registry/rke.cattle.io/etcdsnapshots/fleet-default/cluster-admin-rke2-etcd-snapshot-rancher-node-admin-rke2-bfc6e7, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 950563dc-4986-47af-8014-94ea79329265, UID in object meta: , requeuing
2024/06/14 14:10:42 [INFO] [snapshotbackpopulate] rkecluster fleet-default/cluster-admin-rke2: processing configmap kube-system/rke2-etcd-snapshots
2024/06/14 14:10:42 [INFO] [snapshotbackpopulate] rkecluster fleet-default/cluster-admin-rke2: processing configmap kube-system/rke2-etcd-snapshots
2024/06/14 14:10:42 [INFO] [snapshotbackpopulate] rkecluster fleet-default/cluster-admin-rke2: processing configmap kube-system/rke2-etcd-snapshots
2024/06/14 14:10:42 [INFO] [snapshotbackpopulate] rkecluster fleet-default/cluster-admin-rke2: processing configmap kube-system/rke2-etcd-snapshots
2024/06/14 14:10:43 [INFO] [snapshotbackpopulate] rkecluster fleet-default/cluster-admin-rke2: processing configmap kube-system/rke2-etcd-snapshots
2024/06/14 14:10:43 [INFO] [snapshotbackpopulate] rkecluster fleet-default/cluster-admin-rke2: processing configmap kube-system/rke2-etcd-snapshots
```
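The errors come from the `snapshotbackpopulate` handler reconciling the `kube-system/rke2-etcd-snapshots` configmap against `etcdsnapshots.rke.cattle.io` objects whose UID no longer matches. Comparing the two sides is one way to spot the mismatch (a sketch):

```
# Snapshot entries as seen by rke2 on the downstream cluster
ᐅ kubectl --context cluster-admin-rke2 -n kube-system get configmap rke2-etcd-snapshots -o json \
    | jq '.data | keys'

# Snapshot objects as seen by Rancher on the local cluster
ᐅ kubectl --context local -n fleet-default get etcdsnapshots.rke.cattle.io \
    | awk 'NR == 1 || /cluster-admin-rke2/'
```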
- Guillaume Samson mentioned in commit swh-sysadmin-provisioning@f5950742
- Guillaume Samson (Author, Owner)
RKE2 is upgraded on the `cluster-admin-rke2` cluster:

```
ᐅ kubectl --context cluster-admin-rke2 get nodes
NAME                             STATUS   ROLES                       AGE    VERSION
rancher-node-admin-rke2-mgmt1    Ready    control-plane,etcd,master   299d   v1.28.10+rke2r1
rancher-node-admin-rke2-mgmt2    Ready    control-plane,etcd,master   102d   v1.28.10+rke2r1
rancher-node-admin-rke2-mgmt3    Ready    control-plane,etcd,master   102d   v1.28.10+rke2r1
rancher-node-admin-rke2-node01   Ready    worker                      299d   v1.28.10+rke2r1
rancher-node-admin-rke2-node02   Ready    worker                      299d   v1.28.10+rke2r1
rancher-node-admin-rke2-node03   Ready    worker                      299d   v1.28.10+rke2r1
```
- Guillaume Samson marked the checklist item cluster-admin-rke2 as completed
- Guillaume Samson mentioned in commit swh-sysadmin-provisioning@bee2fd3d
- Guillaume Samson (Author, Owner)
RKE2 is upgraded on the `archive-production-rke2` cluster:

```
ᐅ kubectl --context archive-production-rke2 get nodes
NAME                                 STATUS   ROLES                       AGE    VERSION
banco                                Ready    worker                      134d   v1.28.10+rke2r1
rancher-node-metal01                 Ready    worker                      545d   v1.28.10+rke2r1
rancher-node-metal02                 Ready    worker                      540d   v1.28.10+rke2r1
rancher-node-metal03                 Ready    worker                      285d   v1.28.10+rke2r1
rancher-node-metal04                 Ready    worker                      144d   v1.28.10+rke2r1
rancher-node-metal05                 Ready    worker                      88d    v1.28.10+rke2r1
rancher-node-production-rke2-mgmt1   Ready    control-plane,etcd,master   545d   v1.28.10+rke2r1
rancher-node-production-rke2-mgmt2   Ready    control-plane,etcd,master   103d   v1.28.10+rke2r1
rancher-node-production-rke2-mgmt3   Ready    control-plane,etcd,master   103d   v1.28.10+rke2r1
saam                                 Ready    worker                      148d   v1.28.10+rke2r1
```
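A final cross-cluster check that every node runs the target version (same context loop as above):

```
ᐅ for context in $(kubectx | awk '/rke2/'); do
    awk '{print "\n## " toupper($0)}' <<< "$context"
    # Print the kubelet version of every node, deduplicated with a count
    kubectl --context "$context" get nodes \
      -o jsonpath='{range .items[*]}{.status.nodeInfo.kubeletVersion}{"\n"}{end}' \
      | sort | uniq -c
  done
```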
- Guillaume Samson marked the checklist item archive-production-rke2 as completed