Fix response schema for delete batch/v1/jobs #93477
Conversation
Welcome @JoeryH!
Hi @JoeryH. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @thockin
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@janetkuo Can you help me find a home for this? I got assigned but I have no real context :)
This file is generated and shouldn't be modified manually, ref https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md
DELETE batch/v1/jobs actually returns a Job resource, not the generic Status object. We can use ReturnDeletedObject: true in the storage configuration to fix this. Without this fix, generated clients will unmarshal the JSON incorrectly and, at least in the case of the Java client, throw an exception.
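For context, here is a minimal, hypothetical Go sketch (not code from this PR; the payload is a trimmed sample) of the failure mode described above: a client generated strictly against the published schema tries to decode the DELETE response as a Status and fails when the server actually hands back the Job.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// statusLike models the shape the published swagger.json advertises for this
// DELETE call (a metav1.Status), reduced to a few fields for the sketch.
type statusLike struct {
	Kind       string                 `json:"kind"`
	APIVersion string                 `json:"apiVersion"`
	Metadata   map[string]interface{} `json:"metadata"`
	Status     string                 `json:"status"`
	Message    string                 `json:"message"`
	Code       int32                  `json:"code"`
}

func main() {
	// Trimmed stand-in for the response body the apiserver can return when
	// the Job object itself comes back instead of a Status.
	payload := []byte(`{"kind":"Job","apiVersion":"batch/v1",
		"metadata":{"name":"pi","namespace":"default"},
		"spec":{"parallelism":1},"status":{"active":1}}`)

	dec := json.NewDecoder(bytes.NewReader(payload))
	// Strict decoding, comparable to what several generated clients do.
	dec.DisallowUnknownFields()

	var s statusLike
	if err := dec.Decode(&s); err != nil {
		// This is the kind of deserialization failure the PR refers to.
		fmt.Println("decode as Status failed:", err)
		return
	}
	fmt.Printf("decoded as Status: %+v\n", s)
}
```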
Force-pushed from a90075b to c8ea423.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: JoeryH. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue or PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@@ -68,6 +68,7 @@ func NewREST(optsGetter generic.RESTOptionsGetter) (*REST, *StatusREST, error) {
 		UpdateStrategy:      job.Strategy,
 		DeleteStrategy:      job.Strategy,
 		ResetFieldsStrategy: job.Strategy,
+		ReturnDeletedObject: true,
this actually changes the API behavior of deleting a single job... I don't think we want to do that
hmm... deleting a single job sometimes returns a Status, as advertised:
kubectl delete jobs --all --v=9
I0917 12:42:00.787591 6800 request.go:1181] Request Body: {"propagationPolicy":"Background"}
I0917 12:42:00.787648 6800 round_trippers.go:435] curl -v -XDELETE -H "User-Agent: kubectl/v1.23.0 (darwin/amd64) kubernetes/5fb1442" -H "Accept: application/json" -H "Content-Type: application/json" 'https://localhost:6443/apis/batch/v1/namespaces/default/jobs/pi'
I0917 12:42:00.791982 6800 round_trippers.go:454] DELETE https://localhost:6443/apis/batch/v1/namespaces/default/jobs/pi 200 OK in 4 milliseconds
I0917 12:42:00.792008 6800 round_trippers.go:460] Response Headers:
I0917 12:42:00.792017 6800 round_trippers.go:463] Content-Length: 168
I0917 12:42:00.792025 6800 round_trippers.go:463] Date: Fri, 17 Sep 2021 16:42:00 GMT
I0917 12:42:00.792031 6800 round_trippers.go:463] Audit-Id: 6182eca4-fab7-40b0-96ff-406c439e6539
I0917 12:42:00.792037 6800 round_trippers.go:463] Cache-Control: no-cache, private
I0917 12:42:00.792043 6800 round_trippers.go:463] Content-Type: application/json
I0917 12:42:00.792049 6800 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: f882718c-ee58-4367-baee-cefdd030ae39
I0917 12:42:00.792055 6800 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: 5d04e46f-af20-40f0-a8f6-bcd5093a5d39
I0917 12:42:00.792072 6800 request.go:1181] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"pi","group":"batch","kind":"jobs","uid":"09946e4e-6ba2-4e95-8852-8b6d1ddeabdd"}}
job.batch "pi" deleted
but can also return the full job:
kubectl delete --raw /apis/batch/v1/namespaces/default/jobs/pi --v=9
I0917 12:40:58.372314 5319 loader.go:372] Config loaded from file: /Users/liggitt/go/src/k8s.io/kubernetes/_output/certs/admin.kubeconfig
I0917 12:40:58.373235 5319 round_trippers.go:435] curl -v -XDELETE -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.23.0 (darwin/amd64) kubernetes/5fb1442" 'https://localhost:6443/apis/batch/v1/namespaces/default/jobs/pi'
I0917 12:40:58.383272 5319 round_trippers.go:454] DELETE https://localhost:6443/apis/batch/v1/namespaces/default/jobs/pi 200 OK in 10 milliseconds
I0917 12:40:58.383309 5319 round_trippers.go:460] Response Headers:
I0917 12:40:58.383323 5319 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: 5d04e46f-af20-40f0-a8f6-bcd5093a5d39
I0917 12:40:58.383332 5319 round_trippers.go:463] Content-Length: 2009
I0917 12:40:58.383337 5319 round_trippers.go:463] Date: Fri, 17 Sep 2021 16:40:58 GMT
I0917 12:40:58.383341 5319 round_trippers.go:463] Audit-Id: 95d873ed-b5cb-4f8b-90ea-50d1f03ff23a
I0917 12:40:58.383362 5319 round_trippers.go:463] Cache-Control: no-cache, private
I0917 12:40:58.383367 5319 round_trippers.go:463] Content-Type: application/json
I0917 12:40:58.383372 5319 round_trippers.go:463] Warning: 299 - "child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning"
I0917 12:40:58.383377 5319 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: f882718c-ee58-4367-baee-cefdd030ae39
Warning: child pods are preserved by default when jobs are deleted; set propagationPolicy=Background to remove them or set propagationPolicy=Orphan to suppress this warning
{"kind":"Job","apiVersion":"batch/v1","metadata":{"name":"pi","namespace":"default","uid":"907a1e0e-f72a-4e42-9988-560925192ba3","resourceVersion":"436","generation":2,"creationTimestamp":"2021-09-17T16:40:50Z","deletionTimestamp":"2021-09-17T16:40:58Z","deletionGracePeriodSeconds":0,"labels":{"controller-uid":"907a1e0e-f72a-4e42-9988-560925192ba3","job-name":"pi"},"finalizers":["orphan"],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"batch/v1","time":"2021-09-17T16:40:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:active":{},"f:startTime":{}}},"subresource":"status"},{"manager":"kubectl-create","operation":"Update","apiVersion":"batch/v1","time":"2021-09-17T16:40:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:backoffLimit":{},"f:completionMode":{},"f:completions":{},"f:parallelism":{},"f:suspend":{},"f:template":{"f:metadata":{"f:name":{}},"f:spec":{"f:containers":{"k:{\"name\":\"pi\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},"spec":{"parallelism":1,"completions":1,"backoffLimit":6,"selector":{"matchLabels":{"controller-uid":"907a1e0e-f72a-4e42-9988-560925192ba3"}},"template":{"metadata":{"name":"pi","creationTimestamp":null,"labels":{"controller-uid":"907a1e0e-f72a-4e42-9988-560925192ba3","job-name":"pi"}},"spec":{"containers":[{"name":"pi","image":"perl","command":["perl","-Mbignum=bpi","-wle","print bpi(2000)"],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Never","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler"}},"completionMode":"NonIndexed","suspend":false},"status":{"startTime":"2021-09-17T16:40:50Z","active":1}}
It looks like the difference is whether or not propagationPolicy=Background is specified.
When it is, the Job is removed immediately and a Status is returned. When it isn't, only a finalizer is added and the updated Job is returned.
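To make the two cases concrete, here is a small hedged Go sketch (endpoint, credentials, and resource names are placeholders) that builds the two DELETE requests from the traces above; sending them against a real cluster with proper credentials reproduces the two response bodies.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Placeholder endpoint matching the traces above; real calls would also
	// need credentials and TLS configuration.
	const url = "https://localhost:6443/apis/batch/v1/namespaces/default/jobs/pi"

	// Variant 1: DeleteOptions body with Background propagation. The Job is
	// removed immediately, so the server answers with a Status.
	withBackground, err := http.NewRequest(http.MethodDelete, url,
		strings.NewReader(`{"propagationPolicy":"Background"}`))
	if err != nil {
		panic(err)
	}
	withBackground.Header.Set("Content-Type", "application/json")

	// Variant 2: no DeleteOptions body. A finalizer is added instead of an
	// immediate removal, so the updated Job object is returned.
	withoutBody, err := http.NewRequest(http.MethodDelete, url, nil)
	if err != nil {
		panic(err)
	}

	for _, req := range []*http.Request{withBackground, withoutBody} {
		fmt.Println(req.Method, req.URL, "has body:", req.Body != nil)
	}
}
```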
Ah, I think I'm specifying propagationPolicy=Foreground.
Any reason to not always return a Job? Most clients crash right now because of type mismatch.
Edit: looks like there is a workaround? kubernetes-client/java#1445
Any reason to not always return a Job?
well... some clients that were generated to expect a Status and that don't specify propagationPolicy are working successfully today... switching to always return a Job will break them...
Okay thanks for looking into it, will close this then.
I think it's worth trying to improve the schema to indicate the object type is a possible return schema for the delete endpoints, even if we don't change the server behavior for existing types
There are a few things going on here: whether the delete call removes the object immediately (e.g. with Background propagation) or only updates it by adding a finalizer, and whether the REST storage sets ReturnDeletedObject.
So, for objects which do not set ReturnDeletedObject, the delete response can already be either a Status (immediate removal) or the object itself (finalizer added). However, if we change to set ReturnDeletedObject: true, we change what existing clients see for deletes that currently return a Status. Since all clients have to be able to handle Status in all cases (since it can be returned for forbidden errors, not found errors, etc), I think the safest thing to do is to make schema generation stop paying attention to ReturnDeletedObject and document that the object type is also a possible delete response.
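As an illustration of that last point, here is a hedged Go sketch (the payloads are made-up samples, not real responses) of a client that tolerates either response shape by checking the declared kind before committing to a concrete type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// classifyDeleteResponse inspects only the type metadata of a DELETE response
// body; every Kubernetes API object carries "kind" and "apiVersion".
func classifyDeleteResponse(body []byte) (string, error) {
	var meta struct {
		Kind string `json:"kind"`
	}
	if err := json.Unmarshal(body, &meta); err != nil {
		return "", err
	}
	switch meta.Kind {
	case "Status":
		return "Status (generic deletion result)", nil
	case "Job":
		return "Job (the object, e.g. updated with a deletion finalizer)", nil
	default:
		return "", fmt.Errorf("unexpected kind %q", meta.Kind)
	}
}

func main() {
	samples := [][]byte{
		[]byte(`{"kind":"Status","apiVersion":"v1","status":"Success"}`),
		[]byte(`{"kind":"Job","apiVersion":"batch/v1","metadata":{"name":"pi"}}`),
	}
	for _, s := range samples {
		kind, err := classifyDeleteResponse(s)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Println("response is a", kind)
	}
}
```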
hoisted to #105104 to avoid losing the investigation and details
DELETE batch/v1/jobs actually returns a Job resource,
not the Status the swagger docs currently have.
This causes the clients to be generated incorrectly and
the reference docs to be incorrect.
What type of PR is this?
/kind bug
What this PR does / why we need it:
This PR updates the response schema for DELETE batch/v1/jobs in swagger.json.
We need this because swagger.json is currently incorrect (for at least v1.16, v1.17, and latest) and says a Status is returned instead of a Job. This causes clients to be generated incorrectly and to crash on deserialisation.
Which issue(s) this PR fixes:
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: