Part of the umbrella issue to migrate the Kubernetes project away from use of GCP project `google-containers`: #1571
This issue covers the deprecation of and migration away from the following google.com assets:
- the google.com-owned GCS bucket `gs://kubernetes-release`, living in GCP project `google-containers`, in favor of the community-owned GCS bucket `gs://k8s-release`, living in GCP project TBD (currently `k8s-release`)
- the region-specific GCS buckets `gs://kubernetes-release-asia` and `gs://kubernetes-release-eu`, same as above but `gs://k8s-release-eu` and `gs://k8s-release-asia` instead
- TODO: are there container images involved here as well, or did we already address that with k8s.gcr.io?
These are not labeled as steps just yet because not everything needs to be completed to full fidelity in strict sequential order. I would prefer that we get a sense sooner rather than later of what the impact of shifting dl.k8s.io traffic will be, in terms of how much budget it consumes, and what percentage of traffic it represents vs. hardcoded traffic.
### Determine new-to-deprecated sync implementation and deprecation window
There are likely a lot of people out there who have `gs://kubernetes-release` hardcoded. It's unreasonable to stop putting new releases there without some kind of advance warning. So after announcing our intent to deprecate `gs://kubernetes-release`, we should decide how we're going to sync new releases back there (and to its region-specific buckets), e.g.:

- `gsutil rsync` (see the sketch after this list)
- Google Cloud Storage Transfer Service
- etc.
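As a minimal sketch of the first option, assuming the community bucket is the source of truth and the invoking account can write to the google.com bucket (both are assumptions pending the decisions above):

```shell
# Copy new release artifacts from the community bucket back into the
# deprecated google.com bucket: -m parallelizes, -r recurses. Omitting -d
# means nothing is ever deleted from the destination.
gsutil -m rsync -r gs://k8s-release gs://kubernetes-release
```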
As for the deprecation window itself, I think it's fair to treat this with a deprecation clock equivalent to disabling a v1 API.
### Determine `gs://k8s-release` project location and geo-sync implementation
- Someone (probably me) manually created `gs://k8s-release` and its other buckets to prevent someone else from grabbing the name
- The `-eu` and `-asia` buckets are not actually region-specific, and should be recreated as such
- We should decide how we're going to implement region syncing (same as above; see the sketch after this list)
- We should decide at this stage whether we want to block on a binary artifact promotion process, or get by with one of the syncing mechanisms from above
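A rough sketch of what the recreation and one candidate sync mechanism could look like; the bucket names come from this issue, but the project, locations, and schedule are assumptions pending the decisions above:

```shell
# Recreate the mirrors as actually region-specific buckets, pinned to the
# EU and ASIA multi-region locations in the (assumed) k8s-release project.
gsutil mb -p k8s-release -l EU gs://k8s-release-eu
gsutil mb -p k8s-release -l ASIA gs://k8s-release-asia

# Periodically mirror the primary bucket into each regional bucket.
gsutil -m rsync -r gs://k8s-release gs://k8s-release-eu
gsutil -m rsync -r gs://k8s-release gs://k8s-release-asia
```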
### Use dl.k8s.io where possible and identify remaining hardcoded bucket name references across the project
The only time a Kubernetes release artifact GCS bucket name needs to show up in a URI is if `gsutil` is involved, or someone is explicitly interested in browsing the bucket. For tools like `curl` or `wget` that retrieve binaries via HTTP, we have https://dl.k8s.io, which will allow us to automatically shift traffic from one bucket to the next depending on the requested URIs.
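For example, with an illustrative release version and artifact path:

```shell
# Hardcoded bucket reference: permanently pins clients to the google.com bucket.
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.22.0/bin/linux/amd64/kubectl

# Redirector: dl.k8s.io can 302 this to whichever bucket we choose.
curl -LO https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl
```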
I started doing this for a few projects while working on #2318, e.g.:
- use dl.k8s.io where possible cloud-provider-gcp#252
- 🌱 Use dl.k8s.io instead of hardcoded GCS URIs kubernetes-sigs/cluster-api#4958
TODO: a cs.k8s.io query and resulting checklist of repos to investigate
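Something along these lines could seed that checklist; cs.k8s.io runs Hound, and this query (the exact parameters are an assumption) lists which indexed repos contain the hardcoded bucket name:

```shell
# Search all indexed repos for the bucket name and print the repos with hits.
curl -s 'https://cs.k8s.io/api/v1/search?q=kubernetes-release&repos=*' \
  | jq -r '.Results | keys[]'
```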
### Shift dl.k8s.io traffic to `gs://k8s-release`
TODO: there is a separate issue for this.
We will pre-seed `gs://k8s-release` with everything in `gs://kubernetes-release`, and gradually modify dl.k8s.io to redirect more and more traffic to `gs://k8s-release`.
The idea is not to flip a switch, just in case that sends us way more traffic than our budget is prepared to handle. Instead, let's consider shifting traffic gradually for certain URI patterns, or for a certain percentage of requests, etc. It's unclear whether this will be as straightforward as adding lines to nginx, or whether we'll want GCLB changes as well (a hypothetical nginx sketch follows).
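Purely as a hypothetical sketch of the percentage-based approach (the real dl.k8s.io config may look nothing like this):

```nginx
# Route a small, adjustable fraction of requests to the new bucket; everything
# else continues to be served from the deprecated bucket.
split_clients "${remote_addr}${request_uri}" $release_bucket {
    5%  "k8s-release";          # raise gradually as the budget allows
    *   "kubernetes-release";
}

server {
    server_name dl.k8s.io;
    location / {
        return 302 https://storage.googleapis.com/$release_bucket$request_uri;
    }
}
```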
### Change remaining project references to `gs://k8s-release`
/area artifacts
/area prow
/area release-eng
/sig release
/sig testing
/wg k8s-infra
/priority important-soon
/kind cleanup
/milestone v1.23