feat,fix: civo,google,vultr argo upgrade and vultr fix #794
Conversation
fix,feat: google,vultr k8s and argo upgrade
patrickdappollonio
left a comment
Looks good to me. It seems, though, that we swapped one value for another and I don't have context on that change, so I'll have to leave it to someone else to review.
```hcl
name         = local.cluster_name
network_id   = civo_network.kubefirst.id
firewall_id  = civo_firewall.kubefirst.id
cluster_type = "talos"
```
It seems this line got lost; was that intentional?
I am using k3s instead of talos because Civo's talos clusters don't support Argo 2.12, so I just changed that and fixed the indentation.
This detail is a little risky. At one point (4 months ago) we had a known problem with management cluster provisioning where argocd would intermittently break the cluster completely as soon as it laid argocd overtop of itself. This was a Civo issue that was sidestepped by switching to talos. I'm not saying that we must stay on talos, but reintroducing k3s clusters in Civo is a moderately risky play. I'm okay returning to k3s as the default if we get 4 Civo management clusters in a row without the cluster locking up. To be clear though, I am okay getting this merged; we just need to do some exhaustive Civo management cluster confirmations with it.
Yes sir, got it. I have tested everything twice on GitHub and twice on GitLab, and everything was working perfectly.
On Thu, Aug 22, 2024, John Dietz approved this pull request.
Description
Changed the Civo cluster to k3s 1.28, as Argo 2.12 needs Kubernetes 1.28+, and pointed the Argo manifests to Konstruct.
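The change described above can be sketched as an update to the `civo_kubernetes_cluster` resource from the diff. This is a minimal illustrative fragment, not the full resource from the PR: the `kubernetes_version` value is a hypothetical k3s 1.28 patch release (the exact string depends on the versions Civo offers), and any other attributes of the real resource (node pools, labels, etc.) are omitted.

```hcl
resource "civo_kubernetes_cluster" "kubefirst" {
  name         = local.cluster_name
  network_id   = civo_network.kubefirst.id
  firewall_id  = civo_firewall.kubefirst.id

  # Switched from "talos" back to "k3s" so the cluster can run a
  # Kubernetes version new enough for Argo 2.12 (see review discussion
  # above about the earlier Civo k3s provisioning issue).
  cluster_type = "k3s"

  # Hypothetical k3s 1.28 version string; pick an actual value from
  # `civo kubernetes versions` for the target region.
  kubernetes_version = "1.28.7-k3s1"
}
```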