Status: Closed
Labels: kind/bug, priority/important-soon, sig/node, triage/accepted
Description
What happened?
Querying the kubelet's resource metrics endpoint returns the following:
> kubectl get --raw "/api/v1/nodes/k8s-dev-worker/proxy/metrics/resource"
# HELP container_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the container in core-seconds
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{container="kindnet-cni",namespace="kube-system",pod="kindnet-ndczz"} 1.121325 1735032838055
container_cpu_usage_seconds_total{container="kube-proxy",namespace="kube-system",pod="kube-proxy-l5jhs"} 1.100665 1735032838936
container_cpu_usage_seconds_total{container="metrics-server",namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 7.333964 1735032837430
# HELP container_memory_working_set_bytes [STABLE] Current working set of the container in bytes
# TYPE container_memory_working_set_bytes gauge
container_memory_working_set_bytes{container="kindnet-cni",namespace="kube-system",pod="kindnet-ndczz"} 3.2923648e+07 1735032838055
container_memory_working_set_bytes{container="kube-proxy",namespace="kube-system",pod="kube-proxy-l5jhs"} 4.0628224e+07 1735032838936
container_memory_working_set_bytes{container="metrics-server",namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 4.7026176e+07 1735032837430
# HELP container_start_time_seconds [STABLE] Start time of the container since unix epoch in seconds
# TYPE container_start_time_seconds gauge
container_start_time_seconds{container="kindnet-cni",namespace="kube-system",pod="kindnet-ndczz"} 1.7350309825441425e+09
container_start_time_seconds{container="kube-proxy",namespace="kube-system",pod="kube-proxy-l5jhs"} 1.7350309819809804e+09
container_start_time_seconds{container="metrics-server",namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 1.7350309993126562e+09
# HELP node_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the node in core-seconds
# TYPE node_cpu_usage_seconds_total counter
node_cpu_usage_seconds_total 71.41304 1735032832343
# HELP node_memory_working_set_bytes [STABLE] Current working set of the node in bytes
# TYPE node_memory_working_set_bytes gauge
node_memory_working_set_bytes 2.134016e+08 1735032832343
# HELP pod_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the pod in core-seconds
# TYPE pod_cpu_usage_seconds_total counter
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kindnet-ndczz"} 1.145182 1735032830497
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kube-proxy-l5jhs"} 1.108676 1735032837395
pod_cpu_usage_seconds_total{namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 7.336168 1735032831254
# HELP pod_memory_working_set_bytes [STABLE] Current working set of the pod in bytes
# TYPE pod_memory_working_set_bytes gauge
pod_memory_working_set_bytes{namespace="kube-system",pod="kindnet-ndczz"} 3.3222656e+07 1735032830497
pod_memory_working_set_bytes{namespace="kube-system",pod="kube-proxy-l5jhs"} 4.0914944e+07 1735032837395
pod_memory_working_set_bytes{namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 4.732928e+07 1735032831254
# HELP resource_scrape_error [STABLE] 1 if there was an error while getting container metrics, 0 otherwise
# TYPE resource_scrape_error gauge
resource_scrape_error 0
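As an aside on reading the output above: each `*_cpu_usage_seconds_total` counter sample carries a millisecond timestamp after the value, so average CPU usage between two scrapes can be derived as delta(core-seconds) / delta(wall-seconds). A minimal sketch (the two samples below are hypothetical, 30 s apart):

```python
# The *_cpu_usage_seconds_total counters in the exposition above carry a
# millisecond timestamp after the value. Average core usage between two
# scrapes is the counter delta divided by the wall-clock delta.

def cpu_usage_cores(v1: float, t1_ms: int, v2: float, t2_ms: int) -> float:
    """Average cores used between two counter samples (timestamps in ms)."""
    return (v2 - v1) / ((t2_ms - t1_ms) / 1000.0)

# Two hypothetical samples of container_cpu_usage_seconds_total, 30 s apart:
print(cpu_usage_cores(1.121325, 1735032838055, 1.421325, 1735032868055))
```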
As can be seen, swap stats are not included in the output:
> kubectl get --raw "/api/v1/nodes/k8s-dev-worker/proxy/metrics/resource" | grep -i swap
>
What did you expect to happen?
Swap stats to be included in the metrics/resource endpoint output.
How can we reproduce it (as minimally and precisely as possible)?
- Bring up a cluster
- Install metrics server
- Run
kubectl get --raw "/api/v1/nodes/<NODE-NAME>/proxy/metrics/resource" | grep -i swap
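The repro check can also be done programmatically by scanning the Prometheus text exposition for metric family names containing "swap". A minimal sketch (the parsing is a simple line scan of `# TYPE` lines; fetching the payload, e.g. via `kubectl get --raw`, is left to the caller):

```python
# Minimal sketch: check a kubelet /metrics/resource payload for swap metric
# families by scanning "# TYPE <name> <type>" lines of the Prometheus text
# exposition format. Fetching the payload itself is out of scope here.

def metric_families(exposition: str) -> set[str]:
    """Collect metric family names from '# TYPE <name> <type>' lines."""
    families = set()
    for line in exposition.splitlines():
        if line.startswith("# TYPE "):
            families.add(line.split()[2])
    return families

def has_swap_metrics(exposition: str) -> bool:
    """True if any metric family name mentions swap."""
    return any("swap" in name for name in metric_families(exposition))

# Sample payload mirroring the (swap-less) output reported above:
sample = """\
# TYPE node_cpu_usage_seconds_total counter
node_cpu_usage_seconds_total 71.41304 1735032832343
# TYPE node_memory_working_set_bytes gauge
node_memory_working_set_bytes 2.134016e+08 1735032832343
"""
print(has_swap_metrics(sample))
```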
Anything else we need to know?
Swap stats were introduced in PR #118865, which also shows the expected output.
Kubernetes version
$ kubectl version
Client Version: v1.32.0
Kustomize Version: v5.5.0
Server Version: v1.32.0
Cloud provider
Using a kind development cluster
OS version
# On Linux:
$ cat /etc/os-release
NAME="Fedora Linux"
VERSION="40 (Forty)"
ID=fedora
VERSION_ID=40
VERSION_CODENAME=""
PLATFORM_ID="platform:f40"
PRETTY_NAME="Fedora Linux 40 (Forty)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:40"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f40/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=40
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=40
SUPPORT_END=2025-05-13
$ uname -a
Linux fedora40-eve 6.12.5-100.fc40.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Dec 16 15:00:58 UTC 2024 x86_64 GNU/Linux
Install tools
kind
Container runtime (CRI) and version (if applicable)
> crio version
Version: 1.31.3
GitCommit: 5865976a62cd3abd1b56ec21ea4fde19a730fe87
GitCommitDate: 2024-12-02T10:48:49Z
GitTreeState: dirty
BuildDate: 1970-01-01T00:00:00Z
GoVersion: go1.22.5
Compiler: gc
Platform: linux/amd64
Linkmode: static
BuildTags:
static
netgo
osusergo
exclude_graphdriver_btrfs
seccomp
apparmor
selinux
exclude_graphdriver_devicemapper
LDFlags: unknown
SeccompEnabled: true
AppArmorEnabled: false