fix: use replace instead of patch for k8s resource updates
Lint Checks / Run linter (push) Failing after 0s
Publish / Build and publish (push) Failing after 0s
Deploy Test / Run deploy test suite (push) Failing after 0s
Webapp Test / Run webapp test suite (push) Failing after 0s
Smoke Test / Run basic test suite (push) Failing after 0s
Strategic merge patch preserves fields that are not present in the patch body, so removed volumes, ports, and env vars persist in the running Deployment after a restart. Replace sends the complete spec built from the current compose files, so removed fields are actually deleted. This affects Deployment, Service, Ingress, and NodePort updates. For Services, replace preserves clusterIP (an immutable field) by reading it from the existing resource before replacing.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Branch: afd-dumpster-local-testing
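The difference described above can be sketched with plain dicts. This is a toy illustration of the merge-vs-replace semantics, not the real Kubernetes strategic merge implementation (which also handles list merge keys):

```python
# Toy illustration (NOT the real Kubernetes strategic merge code) of why a
# merge-style patch preserves fields that a full replace would delete.

def merge_patch(current: dict, patch: dict) -> dict:
    """Merge the patch onto the current spec: keys absent from the patch survive."""
    result = dict(current)
    result.update(patch)
    return result

def replace(current: dict, desired: dict) -> dict:
    """Replace: the desired spec wins wholesale; stale keys disappear."""
    return dict(desired)

# The running spec still carries an env block that the compose files no longer define.
running = {"image": "app:v1", "env": {"DEBUG": "1"}, "ports": [80]}
desired = {"image": "app:v2", "ports": [80]}  # env removed upstream

patched = merge_patch(running, desired)
replaced = replace(running, desired)

print("env" in patched)    # True: the stale env var survives the patch
print("env" in replaced)   # False: replace actually deletes it
```

This is exactly the failure mode the commit fixes: after removing a volume or env var from the compose files, a strategic merge patch left the old field in the live object, while replace does not.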
parent ea610bb8d6
commit 6ace024cd3
@@ -437,10 +437,18 @@ class K8sDeployer(Deployer):
                 print(f"Created Deployment {name}")
             except ApiException as e:
                 if e.status == 409:
-                    # Already exists — patch to trigger rolling update
+                    # Already exists — replace to ensure removed fields
+                    # (volumes, mounts, env vars) are actually deleted.
+                    # Patch uses strategic merge which preserves old fields.
+                    existing = self.apps_api.read_namespaced_deployment(
+                        name=name, namespace=self.k8s_namespace
+                    )
+                    deployment.metadata.resource_version = (
+                        existing.metadata.resource_version
+                    )
                     deployment_resp = cast(
                         client.V1Deployment,
-                        self.apps_api.patch_namespaced_deployment(
+                        self.apps_api.replace_namespaced_deployment(
                             name=name,
                             namespace=self.k8s_namespace,
                             body=deployment,
@@ -469,8 +477,16 @@ class K8sDeployer(Deployer):
                 print(f"Created Service {svc_name}")
             except ApiException as e:
                 if e.status == 409:
-                    # Service exists — patch it (preserves clusterIP)
-                    service_resp = self.core_api.patch_namespaced_service(
+                    # Replace to ensure removed ports are deleted.
+                    # Must preserve clusterIP (immutable) and resourceVersion.
+                    existing = self.core_api.read_namespaced_service(
+                        name=svc_name, namespace=self.k8s_namespace
+                    )
+                    service.metadata.resource_version = (
+                        existing.metadata.resource_version
+                    )
+                    service.spec.cluster_ip = existing.spec.cluster_ip
+                    service_resp = self.core_api.replace_namespaced_service(
                         name=svc_name,
                         namespace=self.k8s_namespace,
                         body=service,
@@ -624,7 +640,13 @@ class K8sDeployer(Deployer):
                 print(f"Created Ingress {ing_name}")
             except ApiException as e:
                 if e.status == 409:
-                    self.networking_api.patch_namespaced_ingress(
+                    existing = self.networking_api.read_namespaced_ingress(
+                        name=ing_name, namespace=self.k8s_namespace
+                    )
+                    ingress.metadata.resource_version = (
+                        existing.metadata.resource_version
+                    )
+                    self.networking_api.replace_namespaced_ingress(
                         name=ing_name,
                         namespace=self.k8s_namespace,
                         body=ingress,
@@ -648,7 +670,14 @@ class K8sDeployer(Deployer):
                 )
             except ApiException as e:
                 if e.status == 409:
-                    self.core_api.patch_namespaced_service(
+                    existing = self.core_api.read_namespaced_service(
+                        name=np_name, namespace=self.k8s_namespace
+                    )
+                    nodeport.metadata.resource_version = (
+                        existing.metadata.resource_version
+                    )
+                    nodeport.spec.cluster_ip = existing.spec.cluster_ip
+                    self.core_api.replace_namespaced_service(
                         name=np_name,
                         namespace=self.k8s_namespace,
                         body=nodeport,
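All four hunks follow the same read/preserve/replace pattern: fetch the live object, copy over resourceVersion (required for any replace), and for Services also copy the immutable clusterIP. A minimal sketch of that pattern, using stand-in objects instead of the kubernetes client so it runs anywhere (the helper name `prepare_replace` is illustrative, not from the codebase):

```python
# Sketch of the read/preserve/replace pattern from the diff. SimpleNamespace
# stands in for the kubernetes client's model objects; only the field names
# (metadata.resource_version, spec.cluster_ip) mirror the real API.
from types import SimpleNamespace

def prepare_replace(desired, existing, preserve_cluster_ip=False):
    """Copy fields the API server requires onto the desired object:
    resourceVersion always; clusterIP only for Services (it is immutable)."""
    desired.metadata.resource_version = existing.metadata.resource_version
    if preserve_cluster_ip:
        desired.spec.cluster_ip = existing.spec.cluster_ip
    return desired

# Live Service as read from the cluster.
existing = SimpleNamespace(
    metadata=SimpleNamespace(resource_version="42"),
    spec=SimpleNamespace(cluster_ip="10.96.0.7", ports=[80, 9090]),
)
# Desired Service built from the current compose files; port 9090 was removed.
desired = SimpleNamespace(
    metadata=SimpleNamespace(resource_version=None),
    spec=SimpleNamespace(cluster_ip=None, ports=[80]),
)

body = prepare_replace(desired, existing, preserve_cluster_ip=True)
print(body.metadata.resource_version)  # "42"
print(body.spec.cluster_ip)            # "10.96.0.7"
print(body.spec.ports)                 # [80]; the removed port is gone
```

Skipping the resourceVersion copy makes the server reject the replace with a 409 Conflict; skipping the clusterIP copy makes it reject a Service replace with "spec.clusterIP: Invalid value", which is why the diff handles both.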