With these changes we'll be able to restore any CR whose deletion is stuck on dependencies after an accidental deletion.
example:
kubectl rook-ceph -n <ns> restore <cr> <cr_name>
Signed-off-by: subhamkrai <[email protected]>
When a Rook CR is deleted, the Rook operator will respond to the deletion event to attempt to clean up the cluster resources. If any data is still present in the cluster, Rook will refuse to delete the CR to ensure data is not lost. The operator will refuse to remove the finalizer on the CR until the underlying data is deleted.
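You can see this blocking finalizer directly on a stuck CR; for instance (an illustrative check, assuming a CephCluster named `my-cluster` in the `rook-ceph` namespace; the exact finalizer name may differ):

```bash
# List the finalizers that prevent the CR from being fully deleted
kubectl -n rook-ceph get cephcluster my-cluster -o jsonpath='{.metadata.finalizers}'
# Example output:
# ["cephcluster.ceph.rook.io"]
```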
While the underlying Ceph data and daemons continue to be available, the CRs will be stuck indefinitely in a Deleting state in which the operator will not continue to ensure cluster health. Upgrades will be blocked, further updates to the CRs are prevented, and so on. Since Kubernetes does not allow undeleting resources, the command below will allow repairing the CRs without even necessarily suffering cluster downtime.
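A CR stuck this way has its `deletionTimestamp` set while the object still exists; one quick way to confirm (again assuming a CephCluster named `my-cluster` in the `rook-ceph` namespace):

```bash
# A non-empty deletionTimestamp on an object that still exists means the
# deletion is pending, typically blocked by a finalizer
kubectl -n rook-ceph get cephcluster my-cluster -o jsonpath='{.metadata.deletionTimestamp}'
```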
## Restore Command
- `<CRD>`: the CRD type that is to be restored, such as CephCluster, CephFilesystem, CephBlockPool, and so on.
- `[CRName]`: the name of the specific CR to restore, since there can be multiple instances under the same CRD. For example, if there are multiple CephFilesystems stuck in a deleting state, a specific filesystem can be restored: `restore-deleted cephfilesystem filesystem-2`.
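  A full invocation of that example with an explicit namespace (using the `-n` flag shown in the commit message above) might look like:

```bash
# Restore a specific CephFilesystem stuck in a Deleting state
kubectl rook-ceph -n rook-ceph restore-deleted cephfilesystem filesystem-2
```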
```bash
kubectl rook-ceph restore-deleted <CRD> [CRName]
Info: Detecting which resources to restore for crd "cephcluster"
Info: Restoring CR my-cluster
Warning: The resource my-cluster was found deleted. Do you want to restore it? yes | no
Info: skipped prompt since ROOK_PLUGIN_SKIP_PROMPTS=true
Info: Scaling down the operator to 0
Info: Backing up kubernetes and crd resources
---
---
---
Info: Removing owner references for service rook-ceph-mgr
Info: Removed ownerReference for service: rook-ceph-mgr
Info: Removing owner references for service rook-ceph-mgr-dashboard
Info: Removed ownerReference for service: rook-ceph-mgr-dashboard
Info: Removing owner references for service rook-ceph-mon-a
Info: Removed ownerReference for service: rook-ceph-mon-a
---
---
---
Info: Removing finalizers from cephcluster/my-cluster
Info: cephcluster.ceph.rook.io/my-cluster patched
Info: Re-creating the CR cephcluster from file cephcluster-my-cluster.yaml created above
Info: cephcluster.ceph.rook.io/my-cluster created
Info: Scaling up the operator to 1
Info: CR is successfully restored. Please watch the operator logs and check the crd
```
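For context, the log above corresponds roughly to the following manual steps. This is only an illustrative sketch of what the plugin automates (assuming a CephCluster named `my-cluster` in the `rook-ceph` namespace); prefer the `restore-deleted` command itself:

```bash
# 1. Stop the operator so it does not act on the CR while it is being repaired
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0

# 2. Back up the CR before touching it
kubectl -n rook-ceph get cephcluster my-cluster -o yaml > cephcluster-my-cluster.yaml

# 3. Drop owner references on dependent resources so they survive the CR deletion
#    (repeat for each resource owned by the CR, e.g. the mgr and mon services)
kubectl -n rook-ceph patch service rook-ceph-mgr --type json \
  -p '[{"op":"remove","path":"/metadata/ownerReferences"}]'

# 4. Remove the finalizers so Kubernetes can complete the pending deletion
kubectl -n rook-ceph patch cephcluster my-cluster --type merge \
  -p '{"metadata":{"finalizers":null}}'

# 5. Re-create the CR from the backup (strip status, uid, resourceVersion, and
#    deletionTimestamp from the file first)
kubectl -n rook-ceph create -f cephcluster-my-cluster.yaml

# 6. Resume the operator
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1
```

Note that the `Warning` prompt in the example above was skipped because `ROOK_PLUGIN_SKIP_PROMPTS=true` was set; leave it unset to confirm interactively before the CR is restored.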