[PodLevelResources] Event for unsupported pod-level resource manager alignment #132634
Conversation
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Hi @KevinTMtz. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. The triage/accepted label can be added by org members by writing /triage accepted in a comment. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: KevinTMtz. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/assign @ndixita
Force-pushed from b111ff0 to 5c27349
Force-pushed from 5c27349 to 5c3437a
resourcehelper.IsPodLevelResourcesSet(pod) &&
	v1qos.GetPodQOS(pod) == v1.PodQOSGuaranteed {

	if kl.containerManager.GetNodeConfig().CPUManagerPolicy == string(cpumanager.PolicyStatic) {
please no, this is fragile and creates unnecessary coupling among modules. This is another instance (alongside in-place VPA) which calls for #128728.
I acknowledge the current solution is pretty much the only solution, but the proper way forward IMO is to add a better internal API (and later de-entangle the container manager...)
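For illustration, here is a rough sketch of the kind of internal API that comment points toward; the interface and method names below are hypothetical and are not part of today's container manager:

package cm

import v1 "k8s.io/api/core/v1"

// resourceAlignmentQuerier is a hypothetical query interface the container
// manager could expose, so SyncPod never reads CPUManagerPolicy (or other
// policy knobs) out of GetNodeConfig().
type resourceAlignmentQuerier interface {
	// PodRequiresResourceAlignment reports whether any resource manager
	// (CPU, memory, topology) would attempt exclusive alignment for this pod.
	PodRequiresResourceAlignment(pod *v1.Pod) bool
}

With something along these lines, the kubelet check quoted above collapses to a single call on the container manager, and the policy-specific knowledge stays inside pkg/kubelet/cm.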
/ok-to-test
@@ -1898,6 +1900,23 @@ func (kl *Kubelet) SyncPod(ctx context.Context, updateType kubetypes.SyncPodType
	}
	}

	// If pod-level resources are set, CPU and memory manager alignment is skipped,
	// and an event is surfaced on the corresponding pod to inform the user.
We do not want to skip alignment for all cases.
- If container-level resources are set (regardless of pod-level resources being set), align the resources at the container level.
- If only pod-level resources are set, then skip the alignment.
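A minimal sketch of that rule, assuming a free-standing helper; the function name is made up for illustration, the "container-level resources are set" test is simplified to "any container declares requests or limits", and the imports are the same utilfeature/features/resourcehelper packages already used in this diff:

// shouldSkipManagerAlignment is a hypothetical helper expressing the rule above:
// skip CPU/memory manager alignment only when pod-level resources are the sole
// source of resource information for the pod.
func shouldSkipManagerAlignment(pod *v1.Pod) bool {
	if !utilfeature.DefaultFeatureGate.Enabled(features.PodLevelResources) ||
		!resourcehelper.IsPodLevelResourcesSet(pod) {
		return false
	}
	// Any container that declares its own requests or limits keeps alignment
	// at the container level, so nothing is skipped.
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			return false
		}
	}
	// Only pod-level resources are set: skip alignment and surface an event.
	return true
}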
@@ -463,6 +464,14 @@ func (p *staticPolicy) allocateCPUs(s state.State, numCPUs int, numaAffinity bit

func (p *staticPolicy) guaranteedCPUs(pod *v1.Pod, container *v1.Container) int {
	qos := v1qos.GetPodQOS(pod)

	// The CPU manager static policy does not support pod-level resources.
	if utilfeature.DefaultFeatureGate.Enabled(features.PodLevelResources) && resourcehelper.IsPodLevelResourcesSet(pod) {
this needs to be checked after the QoS check, otherwise the message would be logged for pods with Burstable and BestEffort QoS classes as well
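To illustrate the ordering being asked for, here is a sketch with the QoS early-return first, so the pod-level-resources branch only runs for Guaranteed pods; the tail of the function is paraphrased from the existing static policy and may not match the PR exactly:

func (p *staticPolicy) guaranteedCPUs(pod *v1.Pod, container *v1.Container) int {
	// Non-guaranteed pods never get exclusive CPUs; bail out before any
	// pod-level-resources handling so no message is emitted for them.
	if v1qos.GetPodQOS(pod) != v1.PodQOSGuaranteed {
		return 0
	}
	// The CPU manager static policy does not support pod-level resources.
	if utilfeature.DefaultFeatureGate.Enabled(features.PodLevelResources) &&
		resourcehelper.IsPodLevelResourcesSet(pod) {
		return 0
	}
	// Exclusive CPUs are only handed out for integer CPU requests.
	cpuQuantity := container.Resources.Requests[v1.ResourceCPU]
	if cpuQuantity.Value()*1000 != cpuQuantity.MilliValue() {
		return 0
	}
	return int(cpuQuantity.Value())
}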
We also need changes for topology manager. For guaranteed pods,
- If container-level resources are set with R=L (regardless of pod-level resources being set/unset) and scope=container, topology manager will still align the resources.
- If only pod-level resources are set, or container-level resources are set but R!=L, skip alignment with the event being recorded.
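As a sketch only, that rule could be captured in a predicate like the one below; the helper name is hypothetical, and the scope=container condition and the event recording are left to the caller:

// topologyAlignmentApplies is a hypothetical predicate for the rule above:
// for a Guaranteed pod, keep topology alignment when the container itself sets
// resources with requests == limits; otherwise skip it (and record an event).
func topologyAlignmentApplies(pod *v1.Pod, container *v1.Container) bool {
	if v1qos.GetPodQOS(pod) != v1.PodQOSGuaranteed {
		return false
	}
	requests, limits := container.Resources.Requests, container.Resources.Limits
	// Only pod-level resources are set: nothing at the container level to align.
	if len(requests) == 0 || len(limits) == 0 {
		return false
	}
	// Requests must equal limits (R=L) for every resource the container sets.
	for name, req := range requests {
		lim, ok := limits[name]
		if !ok || req.Cmp(lim) != 0 {
			return false
		}
	}
	return true
}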
What type of PR is this?
What this PR does / why we need it:
Which issue(s) this PR is related to:
Fixes #132445
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: