kubelet: memorymanager static policy startup error #113130

Open
gaohuatao-1 opened this issue Oct 18, 2022 · 20 comments · May be fixed by #114501
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/node Categorizes an issue or PR as relevant to SIG Node. triage/needs-information Indicates an issue needs more information in order to work on it.

Comments

@gaohuatao-1

gaohuatao-1 commented Oct 18, 2022

What happened?

In our scenario, the memory manager is enabled. The host has two NUMA nodes: node0 and node1.
The relevant parameter values are as follows:
memoryManagerPolicy: Static
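For reference, a kubelet configuration fragment enabling this kind of setup would look roughly like the following. This is only a sketch: the Static policy requires reservedMemory to be set and to add up to systemReserved + kubeReserved + the hard eviction threshold, and the values below are illustrative rather than taken from the report.

```yaml
# Illustrative KubeletConfiguration fragment; the reserved values are
# assumptions, not taken from the report. With 1Gi systemReserved, 1Gi
# kubeReserved and a 100Mi hard eviction threshold, reservedMemory must
# total 2148Mi across the NUMA nodes.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
memoryManagerPolicy: Static
systemReserved:
  memory: 1Gi
kubeReserved:
  memory: 1Gi
evictionHard:
  memory.available: 100Mi
reservedMemory:
  - numaNode: 0
    limits:
      memory: 1074Mi
  - numaNode: 1
    limits:
      memory: 1074Mi
```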

Initially, no pods are running on this node, which has two NUMA nodes with 220G of memory each. Follow the steps below to create and delete pods:

  1. Create guaranteed Pod1 with one container, memory request and limit: 240G

  2. Create guaranteed Pod2 with one container, memory request and limit: 20G
    At this point, machineState is as follows:
    [screenshot: machineState after step 2]

  3. Delete Pod2
    At this point, machineState will be as follows:
    [screenshot: machineState after step 3]

  4. Create guaranteed Pod3 with one container, memory request and limit: 10G
    At this point, the actual machineState is as follows:
    [screenshot: actual machineState after step 4]

Now, restarting the kubelet will fail. When the kubelet restarts, the expected machineState it computes is as follows, and it does not equal the actual machineState above:
[screenshot: expected machineState computed at kubelet restart]
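For completeness, the guaranteed pods from the steps above can be created with specs along these lines. This is a sketch only: the name, image, and CPU values are placeholders; only the memory sizes correspond to the report (Pod2 and Pod3 differ only in using 20Gi and 10Gi).

```yaml
# Illustrative spec for Pod1; Pod2 and Pod3 use 20Gi and 10Gi instead.
# Requests equal limits so the pod lands in the Guaranteed QoS class,
# which the memory manager's Static policy acts on.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resources:
        requests:
          cpu: "2"
          memory: 240Gi
        limits:
          cpu: "2"
          memory: 240Gi
```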

What did you expect to happen?

Pod creation and deletion order should not cause kubelet restart to fail.

How can we reproduce it (as minimally and precisely as possible)?

See the steps above.

Anything else we need to know?

No response

Kubernetes version

v1.21 and later versions have this problem

Cloud provider

None

OS version

# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here

# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

@gaohuatao-1 gaohuatao-1 added the kind/bug Categorizes issue or PR as related to a bug. label Oct 18, 2022
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Oct 18, 2022
@gaohuatao-1
Author

/sig node

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Oct 18, 2022
@SergeyKanzhelev
Member

/triage needs-information

Thank you for the beautifully crafted error report!

The k8s version you are running is old and out of support (kubelet v1.21.0). Would it be possible to check the behavior on the latest version of k8s?

I know that there is an existing limitation around the cpu manager (not sure about the memory manager) that requires handling the checkpoint file before the restart. Not sure if this applies here. Please check on the latest k8s and report back.
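For context, the checkpoint handling referred to here usually means clearing the manager's state file while the kubelet is stopped, along these lines (a sketch assuming the default kubelet root directory; it discards the recorded assignments, so treat it as a recovery step only):

```shell
# Assumes the default kubelet root dir /var/lib/kubelet; adjust for --root-dir.
# Removing the checkpoint makes the kubelet rebuild it on the next start.
systemctl stop kubelet
rm /var/lib/kubelet/memory_manager_state   # cpu_manager_state is the cpu manager equivalent
systemctl start kubelet
```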

@k8s-ci-robot k8s-ci-robot added the triage/needs-information Indicates an issue needs more information in order to work on it. label Oct 19, 2022
@gaohuatao-1
Author


Thanks for your reply. I have confirmed that the latest version of the kubelet has this problem, too.
When the kubelet starts, the static policy of the memory manager validates the checkpoint file; the verification logic is as follows:
[screenshot: the validateState verification code]

b.NUMAAffinity on line 563 is in ascending order. That is to say, when validateState generates expectedMachineState, it always allocates memory from the lowest NUMA node first, regardless of the containers' actual memory allocation. As a result, the actual machineState can end up not equal to the calculated one.
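To make this concrete, here is a heavily simplified, self-contained sketch. It is not the actual kubelet code: the block type and the 210/40 "actual" split are assumptions standing in for the screenshots above, and recomputeReserved only mimics the ascending-order refill described in this comment.

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// Simplified stand-in for a container memory block in the checkpoint: it
// records which NUMA nodes the container may use and the total size, but
// not how that size was actually split across the nodes.
type block struct {
	numaAffinity []int
	size         uint64 // GiB, for readability
}

// recomputeReserved mimics a validateState-style reconstruction: for every
// block it walks the NUMA affinity in ascending order and fills the lowest
// node first, ignoring the real allocation history.
func recomputeReserved(capacity map[int]uint64, blocks []block) map[int]uint64 {
	reserved := make(map[int]uint64, len(capacity))
	for _, b := range blocks {
		nodes := append([]int(nil), b.numaAffinity...)
		sort.Ints(nodes) // ascending order, as noted above
		remaining := b.size
		for _, n := range nodes {
			take := remaining
			if free := capacity[n] - reserved[n]; take > free {
				take = free
			}
			reserved[n] += take
			remaining -= take
		}
	}
	return reserved
}

func main() {
	capacity := map[int]uint64{0: 220, 1: 220} // GiB per NUMA node

	// Blocks that survive in the checkpoint after Pod2 was deleted.
	blocks := []block{
		{numaAffinity: []int{0, 1}, size: 240}, // Pod1
		{numaAffinity: []int{0, 1}, size: 10},  // Pod3
	}

	// Assumed checkpointed machineState after the create/delete sequence in
	// the report (Pod2's memory was not released from the node it actually
	// occupied).
	actual := map[int]uint64{0: 210, 1: 40}

	expected := recomputeReserved(capacity, blocks)
	fmt.Println("actual reserved  :", actual)   // map[0:210 1:40]
	fmt.Println("expected reserved:", expected) // map[0:220 1:30]
	fmt.Println("match:", reflect.DeepEqual(actual, expected)) // false
}
```

Because a checkpointed block records only an affinity and a total size, the ascending-order refill reconstructs a per-node split (220/30) that differs from the one the kubelet actually ended up with, so the startup validation fails.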

@gaohuatao-1
Author

gaohuatao-1 commented Nov 3, 2022

@SergeyKanzhelev
Following the process above, this problem is easy to reproduce. I reproduced it on version 1.25.3.
There is a logic flaw in the memory manager of the kubelet, which causes the kubelet to fail to restart.
The result is as follows:
[screenshot: kubelet startup failure after restart]

@ffromani
Contributor

ffromani commented Nov 3, 2022

/cc
I'm interested in this issue

@cynepco3hahue

Hmm, in general the memory manager should not allow allocating the same NUMA nodes for both cross-NUMA and single-NUMA-node allocations; it is one of the limitations that we have, see https://kubernetes.io/blog/2021/08/11/kubernetes-1-22-feature-memory-manager-moves-to-beta/#single-vs-cross-numa-node-allocation. @gaohuatao-1 Which topology manager policy do you use?

@gaohuatao-1
Author


Thanks for your comment.
We use the default topology manager policy: none. In the example above, there is no pod with a single-NUMA-node allocation.

@cynepco3hahue

I see; it probably will not be a problem when both pods are pinned to multiple NUMA nodes. And it looks like you are correct in comment #113130 (comment): we should validate the state in descending order.
BTW, kudos for the report :)

@donggangcj

@gaohuatao-1 I had a similar problem and submitted a PR to fix it. Can you help review the code?

@donggangcj

For a node group, we cannot restore the free and reserved sizes of each node in the group after multiple allocations and releases of a resource.
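For illustration, using the sizes from the report (the per-node splits are assumed, since the screenshots are not reproduced here): Pod1's 240G is recorded as a single block with affinity {0, 1}, so the checkpoint does not say whether it occupied 220G + 20G or some other split, and the same holds for Pod2's 20G. When Pod2 is deleted, the release has to pick which node(s) to credit the 20G back to, and a later re-validation has to pick which node(s) to charge each surviving block to. Any fixed rule, such as lowest node first, can disagree with the history that actually produced the checkpointed free and reserved values.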

@aimuz
Contributor

aimuz commented Dec 15, 2022

In the third step, is the ideal case that numa1 equals 220 and numa2 equals 20? @gaohuatao-1

@gaohuatao-1
Author


Yes, you are right.

@gaohuatao-1
Author


Thanks for your work; I will review it later.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 10, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 10, 2023
@SergeyKanzhelev
Member

/remove-lifecycle rotten

Since we have an active PR, I will move this to triaged.

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels May 17, 2023
@k8s-ci-robot k8s-ci-robot removed the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label May 17, 2023
@k8s-triage-robot

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot removed the triage/accepted Indicates an issue or PR is ready to be actively worked on. label May 16, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label May 16, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 14, 2024
@SergeyKanzhelev
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 14, 2024