fix(examples): remove dead code comment by coadler · Pull Request #12194 · coder/coder

fix(examples): remove dead code comment #12194


Merged: 1 commit merged into main on Feb 17, 2024

Conversation

@coadler (Contributor) commented Feb 16, 2024

No description provided.

@coadler (Contributor, Author) commented Feb 16, 2024

This stack of pull requests is managed by Graphite. Learn more about stacking.


@coadler coadler requested a review from matifali February 16, 2024 18:59
```diff
@@ -242,8 +242,7 @@ resource "kubernetes_deployment" "main" {
   }

   spec {
-    # replicas = data.coder_workspace.me.start_count
-    replicas = 1
+    replicas = data.coder_workspace.me.start_count
     selector {
       match_labels = {
         "app.kubernetes.io/name" = "coder-workspace"
```
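For context, a minimal sketch of the pattern this diff establishes (everything outside the diff hunk is illustrative, not taken from the PR): `data.coder_workspace.me.start_count` evaluates to 1 while the workspace is running and 0 once it is stopped, so wiring it into `replicas` lets a stop transition scale the pod away rather than leaving a deployment running with a hard-coded `replicas = 1`.

```hcl
# Sketch only: attribute and resource names beyond the diff are assumptions.
data "coder_workspace" "me" {}

resource "kubernetes_deployment" "main" {
  metadata {
    name = "coder-workspace"
  }

  spec {
    # 1 while the workspace runs, 0 when it is stopped, so stopping the
    # workspace scales the deployment to zero instead of keeping the pod.
    replicas = data.coder_workspace.me.start_count

    selector {
      match_labels = {
        "app.kubernetes.io/name" = "coder-workspace"
      }
    }

    # template { ... } with the container spec omitted for brevity
  }
}
```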
@coadler (Contributor, Author) commented:
[Re: lines 279 to 288]

Also, FWIW, I was using this on a GKE Autopilot cluster, which doesn't support overprovisioning, and my pods only ever got `250m` CPU. I think this should be configurable rather than the only option.
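One way the CPU request could be made configurable is through the coder provider's `coder_parameter` data source; the parameter name and values below are hypothetical, not part of this PR:

```hcl
# Hypothetical parameter; names and defaults are illustrative.
data "coder_parameter" "cpu" {
  name    = "cpu"
  type    = "string"
  default = "500m"
  mutable = true
}

# The value would then replace the hard-coded request inside the pod
# template's container block, e.g.:
#
#   resources {
#     requests = {
#       cpu = data.coder_parameter.cpu.value
#     }
#   }
```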


@matifali (Member) commented Feb 17, 2024:

If we use the count on the deployment and the deployment gets scaled to 0, does the count value on the pod matter?

@coadler (Contributor, Author) commented:

Oh I see, I'm dumb. Though thinking about it now, I wonder if it would be better to just scale up and down the replicas rather than delete and recreate the deployment each time. I'll just remove the dead code comment for now.

@coadler (Contributor, Author) commented:

My point about overprovisioning still stands, though. I'd like your thoughts.

@coadler coadler force-pushed the colin/fixexamplesproperlyshutdownkubernetesdeployment branch from 5526932 to 78e1c15 on February 17, 2024 17:34
@coadler coadler changed the title fix(examples): properly shutdown kubernetes deployment fix(examples): remove dead code comment Feb 17, 2024
@coadler coadler enabled auto-merge (squash) February 17, 2024 17:36
@coadler coadler merged commit 817cc78 into main Feb 17, 2024
@coadler coadler deleted the colin/fixexamplesproperlyshutdownkubernetesdeployment branch February 17, 2024 17:38
@github-actions github-actions bot locked and limited conversation to collaborators Feb 17, 2024