1 mgr handle_mgr_map respawning because set of enabled modules changed! #12520
Comments
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
@yehaifeng can you please provide more details on how to reproduce the issue? ... for instance did you enable/disable any mgr modules?
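In case it helps anyone answer that question: one way to collect this information is to list the mgr modules from the toolbox and grep the active mgr's log for the respawn message. A minimal sketch, assuming the default rook-ceph namespace, the rook-ceph-tools toolbox deployment, and that mgr.b is the instance that respawns (adjust names to your cluster):

```bash
# List enabled / always-on / disabled mgr modules (run via the toolbox pod)
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph mgr module ls

# Show the respawn messages in the affected mgr's log
kubectl -n rook-ceph logs deploy/rook-ceph-mgr-b \
  | grep "respawning because set of enabled modules changed"
```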
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
I'm also having this issue. Included are operator and mgr logs from when the reload happened. The mgrs reload without any user action; they seem to do it every 5-10 minutes. My guess is that it's tied to operator reconciliation.
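If the respawns really line up with operator reconciles, it may help to record whether the enabled-module set actually changes between them. A rough sketch that snapshots `ceph mgr module ls` once a minute and diffs consecutive results; it assumes the default rook-ceph namespace, the rook-ceph-tools deployment, and jq on the machine running it:

```bash
#!/usr/bin/env bash
# Poll the enabled mgr module set and report whenever it changes.
prev=""
while true; do
  cur=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- \
        ceph mgr module ls --format json | jq -r '.enabled_modules[]' | sort)
  if [ -n "$prev" ] && [ "$cur" != "$prev" ]; then
    echo "$(date -u '+%F %T') enabled module set changed:"
    diff <(printf '%s\n' "$prev") <(printf '%s\n' "$cur")
  fi
  prev="$cur"
  sleep 60
done
```

Correlating the timestamps with `kubectl -n rook-ceph logs deploy/rook-ceph-operator -f` should show whether a reconcile immediately precedes each change.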
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation. |
Is this a bug report or feature request?
Deviation from expected behavior:
Expected behavior:
How to reproduce it (minimal and precise):
File(s) to submit:
cluster.yaml, if necessary
Logs to submit:
logs for mgr.b
Operator's logs, if necessary
Crashing pod(s) logs, if necessary
To get logs, use kubectl -n <namespace> logs <pod name>
When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI. Read GitHub documentation if you need help.
Cluster Status to submit:
To get the health of the cluster, use kubectl rook-ceph health
To get the status of the cluster, use kubectl rook-ceph ceph status
For more details, see the Rook Krew Plugin
Environment:
OS (e.g. from /etc/os-release):
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
Kernel (e.g. uname -a): Linux hci001.yhf.com 4.18.0-338.el8.x86_64 #1 SMP Fri Aug 27 17:32:14 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Cloud provider or hardware configuration:
Rook version (use rook version inside of a Rook Pod): v1.11.9
Storage backend version (e.g. for Ceph do ceph -v): ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
Kubernetes version (use kubectl version): v1.27.2
Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): kubeadm v1.27.2
Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):

I used helm to deploy rook-ceph and rook-ceph-cluster, but mgr.b keeps restarting due to "enabled modules changed"; mgr.a also logs the same error.
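Not a confirmed fix, but if the operator and Ceph end up disagreeing about one particular module, one experiment is to declare that module explicitly in the CephCluster spec so the desired set stays stable between reconciles. A hedged sketch: the cluster name and namespace (rook-ceph/rook-ceph) match the rook-ceph-cluster chart defaults, and pg_autoscaler is only a placeholder for whichever module your mgr log shows changing:

```bash
# Hypothetical experiment: pin the flapping module explicitly in spec.mgr.modules.
# Replace "pg_autoscaler" with the module named in the mgr respawn messages.
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p \
  '{"spec":{"mgr":{"modules":[{"name":"pg_autoscaler","enabled":true}]}}}'
```

With a Helm deployment the same setting can instead live in the rook-ceph-cluster chart values (under the cephClusterSpec mgr.modules list, if I recall the key correctly), so the operator does not revert the patch on the next upgrade.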