kubectl delete -f crd.yml
# In a second shell
kubectl -n rook-ceph edit cephobjectstores.ceph.rook.io nextcloud-obj-store
# comment out the finalizer
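The edit above amounts to removing Rook's finalizer from the resource metadata. As a sketch, the relevant section in the editor looks roughly like this (the finalizer name is an assumption based on Rook's usual resource-name convention, not taken from this report):

```yaml
# Hypothetical excerpt of the CephObjectStore as seen in `kubectl edit`.
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: nextcloud-obj-store
  namespace: rook-ceph
  finalizers:
    # Deleting or commenting out this entry lets the delete complete
    # immediately -- but it skips Rook's pool cleanup, which is what
    # causes the failure described in this issue.
    - cephobjectstore.ceph.rook.io
```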
Then create the object store again, and it will fail. The Progress column of the command below will show Failure, then Progressing, looping between the two states. The rgw pod will also fail to be created.
Monitor commands:
kubectl -n rook-ceph get pods --watch
kubectl -n rook-ceph get cephobjectstores.ceph.rook.io --watch
If the finalizer is removed from the CephObjectStore resource while deleting, Rook will not be able to gracefully clean up the pools because they could still have user data in them. This can then prevent the CephObjectStore from being recreated. Likely, this is what occurred. If true, this is a matter of Rook working as intended to ensure user data safety.
Yes, as the command was hanging, I thought it was not able to delete the object store due to the finalizer. However, it makes sense that Rook disallows overwriting the pool to protect its data.
For future reference, wait for the delete command to complete gracefully to avoid this issue.
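As a sketch of that advice, kubectl already blocks until finalizers complete by default; the flags are spelled out here for emphasis, and the timeout value is an arbitrary example (requires a live cluster, so not runnable standalone):

```shell
# Wait for Rook's finalizer to run to completion instead of editing it out.
# --wait=true is the kubectl default; --timeout bounds how long to block.
kubectl delete -f crd.yml --wait=true --timeout=10m
```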
Deviation from expected behavior:
The rgw pod does not get created after the CephObjectStore CRD has been deleted and recreated.
Expected behavior:
The CephObjectStore should spin up with an rgw pod even though an object store with the same name existed before.
How to reproduce it (minimal and precise):
Create the following CRD:
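The actual crd.yml was not captured in this thread; purely for illustration, a hypothetical minimal CephObjectStore manifest might look like this (all names and values are assumptions following the CephObjectStore v1 API, not the reporter's file):

```yaml
# Hypothetical stand-in for the reporter's crd.yml.
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: nextcloud-obj-store
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  gateway:
    # The rgw pod discussed in this issue is created for this gateway.
    port: 80
    instances: 1
```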
Then delete it:
Then create the object store again, and it will fail. The Progress column of the command below will show Failure, then Progressing, looping between the two states. The rgw pod will also fail to be created.
Monitor commands:
kubectl -n rook-ceph get pods --watch
kubectl -n rook-ceph get cephobjectstores.ceph.rook.io --watch
Logs to submit:
Cluster Status to submit:
All healthy.
Environment:
OS / Kernel (uname -a): Debian 6.1.106-3
Rook version (rook version inside of a Rook Pod):
Storage backend version (ceph -v):
Kubernetes version (kubectl version):
Temporary Solution
It will succeed if I delete every pool used by the rgw service, including the .rgw.root pool.
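A sketch of that cleanup from the rook-ceph toolbox pod (the pool names other than .rgw.root are typical examples, not taken from this report; pool deletion also requires mon_allow_pool_delete=true and a live cluster, so this is not runnable standalone):

```shell
# Run inside the rook-ceph toolbox pod.
# List pools to find the rgw ones -- names like .rgw.root,
# <store>.rgw.control, <store>.rgw.meta, and <store>.rgw.log are typical:
ceph osd lspools

# Delete each rgw pool. Ceph requires the pool name twice plus the
# safety flag, and mon_allow_pool_delete must be enabled on the mons:
ceph osd pool delete .rgw.root .rgw.root --yes-i-really-really-mean-it
```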