chore(pebbles): close so-p3p cleanly (amend prior events)

Earlier events on this branch closed so-p3p then appended a comment
naming the feature branch — branch names are dead references after
merge, and the comment sat out of order with the status change.

Rewrite the tail of .pebbles/events.jsonl: drop the status-closed +
comment pair and the follow-up description update, and replace them
with a single update-description (problem + resolution) followed by
the status close. Final pebble state is unchanged for readers; the
log is now well-formed.
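The tail rewrite described above can be sketched as a small script. This is
a hypothetical helper, not part of pebbles itself: the function name
`rewrite_tail` and its parameters are illustrative, and it assumes the event
shapes visible in the diff below (each line a JSON object with `type`,
`timestamp`, `issue_id`, and `payload`).

```python
import json

def rewrite_tail(lines, issue_id, description, ts_update, ts_close):
    """Drop the out-of-order trailing events for `issue_id` and append a
    single update-description followed by the status close.

    Hypothetical sketch mirroring the edit described in the commit message;
    the real tool may do this differently.
    """
    events = [json.loads(line) for line in lines if line.strip()]
    # Strip the malformed tail: every trailing event touching this issue
    # (the status close, the branch-naming comment, the follow-up update).
    while events and events[-1]["issue_id"] == issue_id:
        events.pop()
    # Append the well-formed replacement: description first, then the close.
    events.append({
        "type": "update",
        "timestamp": ts_update,
        "issue_id": issue_id,
        "payload": {"description": description},
    })
    events.append({
        "type": "status_update",
        "timestamp": ts_close,
        "issue_id": issue_id,
        "payload": {"status": "closed"},
    })
    return [json.dumps(e) for e in events]
```

Because only trailing events for the one issue are popped, earlier events for
other issues (and earlier, valid events for this issue) are left untouched,
so the final pebble state readers compute is unchanged.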

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Prathamesh Musale 2026-04-21 09:02:22 +00:00
parent d65802f8ce
commit eef718a571
1 changed file with 2 additions and 2 deletions

@@ -52,5 +52,5 @@
 {"type":"status_update","timestamp":"2026-04-21T05:57:12.928842469Z","issue_id":"so-n1n","payload":{"status":"closed"}}
 {"type":"comment","timestamp":"2026-04-21T06:08:13.933886638Z","issue_id":"so-ad7","payload":{"body":"Fixed in PR #744 (cf8b7533). get_services() now includes the maintenance pod in the container-ports map so its per-pod Service is built and available for the Ingress swap."}}
 {"type":"status_update","timestamp":"2026-04-21T06:08:14.457815115Z","issue_id":"so-ad7","payload":{"status":"closed"}}
-{"type":"status_update","timestamp":"2026-04-21T06:51:38.213606012Z","issue_id":"so-p3p","payload":{"status":"closed"}}
-{"type":"comment","timestamp":"2026-04-21T06:51:38.749628156Z","issue_id":"so-p3p","payload":{"body":"Implemented on branch feat/so-p3p-caddy-image-lifecycle: spec key caddy-ingress-image, strategy: Recreate on the Caddy Deployment manifest, and image reconciliation on deployment start (patches if spec image differs from running image)."}}
+{"type":"update","timestamp":"2026-04-21T09:00:47.364859946Z","issue_id":"so-p3p","payload":{"description":"## Problem\n\nThe Caddy ingress controller image is hardcoded in `ingress-caddy-kind-deploy.yaml`, with no mechanism to update it short of cluster recreation or manual `kubectl patch`. laconic-so should: (1) allow spec.yml to specify a Caddy image, (2) support updating the Caddy image as part of `deployment start`, (3) set `strategy: Recreate` on the Caddy Deployment since hostPort pods can't rolling-update.\n\n## Resolution\n\n- New spec key `caddy-ingress-image`. Fresh install uses it (fallback: manifest default). On subsequent `deployment start`, if the spec key is set and the running Caddy image differs, SO patches the Deployment and waits for rollout.\n- Spec key absent =\u003e SO does **not** touch a running Caddy, to avoid silently reverting images set out-of-band (ansible playbook, another deployment's spec).\n- `strategy: Recreate` added to the Caddy Deployment manifest.\n- Reconcile runs under both `--perform-cluster-management` and the default `--skip-cluster-management` (it's a plain k8s-API patch, not a cluster lifecycle op).\n- Image substitution locates the container by name instead of string-matching the shipped default, so the spec override wins regardless of what the manifest hardcodes.\n- Cluster-scoped caveat: `caddy-system` is shared across deployments; last `deployment start` that sets the key wins for everyone. Documented in `deployment_patterns.md`."}}
+{"type":"status_update","timestamp":"2026-04-21T09:00:47.745675131Z","issue_id":"so-p3p","payload":{"status":"closed"}}