{"type":"create","timestamp":"2026-03-08T06:56:07.080584539Z","issue_id":"so-076","payload":{"description":"Currently laconic-so maps one stack to one deployment to one pod. All containers\nin a stack's compose files become containers in a single k8s pod. This means:\n\n- Can't upgrade doublezero without restarting agave-validator\n- Can't restart monitoring without disrupting the validator\n- Can't independently scale or lifecycle-manage components\n\nThe fix is stack composition. A meta-stack (e.g. biscayne-stack) composes\nsub-stacks (agave, doublezero, agave-monitoring), each becoming its own\nk8s Deployment with independent lifecycle.","priority":"2","title":"Stack composition: deploy multiple stacks into one kind cluster","type":"epic"}} {"type":"create","timestamp":"2026-03-08T06:56:07.551986919Z","issue_id":"so-ab0","payload":{"description":"Add laconic-so deployment prepare that creates cluster infrastructure without pods. Already implemented, needs review.","priority":"2","title":"deployment prepare command","type":"task"}} {"type":"create","timestamp":"2026-03-08T06:56:07.884418759Z","issue_id":"so-04f","payload":{"description":"deployment stop on ANY deployment deletes the shared kind cluster. Should only delete its own namespace.","priority":"2","title":"deployment stop should not destroy shared cluster","type":"bug"}} {"type":"create","timestamp":"2026-03-08T06:56:08.253520249Z","issue_id":"so-370","payload":{"description":"Allow stack.yml to reference sub-stacks. Each sub-stack becomes its own k8s Deployment sharing namespace and PVs.","priority":"2","title":"Add stacks: field to stack.yml for composition","type":"task"}} {"type":"create","timestamp":"2026-03-08T06:56:08.646764337Z","issue_id":"so-f7c","payload":{"description":"Create three independent stacks from the monolithic agave-stack. 
Each gets its own compose file and independent lifecycle.","priority":"2","title":"Split agave-stack into agave + doublezero + monitoring","type":"task"}} {"type":"rename","timestamp":"2026-03-08T06:56:14.499990161Z","issue_id":"so-ab0","payload":{"new_id":"so-076.1"}} {"type":"dep_add","timestamp":"2026-03-08T06:56:14.499992031Z","issue_id":"so-076.1","payload":{"dep_type":"parent-child","depends_on":"so-076"}} {"type":"rename","timestamp":"2026-03-08T06:56:14.786407752Z","issue_id":"so-04f","payload":{"new_id":"so-076.2"}} {"type":"dep_add","timestamp":"2026-03-08T06:56:14.786409842Z","issue_id":"so-076.2","payload":{"dep_type":"parent-child","depends_on":"so-076"}} {"type":"rename","timestamp":"2026-03-08T06:56:15.058959714Z","issue_id":"so-370","payload":{"new_id":"so-076.3"}} {"type":"dep_add","timestamp":"2026-03-08T06:56:15.058961364Z","issue_id":"so-076.3","payload":{"dep_type":"parent-child","depends_on":"so-076"}} {"type":"rename","timestamp":"2026-03-08T06:56:15.410080785Z","issue_id":"so-f7c","payload":{"new_id":"so-076.4"}} {"type":"dep_add","timestamp":"2026-03-08T06:56:15.410082305Z","issue_id":"so-076.4","payload":{"dep_type":"parent-child","depends_on":"so-076"}} {"type":"dep_add","timestamp":"2026-03-08T06:56:16.313585082Z","issue_id":"so-076.3","payload":{"dep_type":"blocks","depends_on":"so-076.2"}} {"type":"dep_add","timestamp":"2026-03-08T06:56:16.567629422Z","issue_id":"so-076.4","payload":{"dep_type":"blocks","depends_on":"so-076.3"}} {"type": "create", "timestamp": "2026-03-18T14:45:07.038870Z", "issue_id": "so-a1a", "payload": {"title": "deploy create should support external credential injection", "type": "feature", "priority": "2", "description": "deploy create generates config.env but provides no mechanism to inject external credentials (API keys, tokens, etc.) at creation time. Operators must append to config.env after the fact, which mutates a build artifact. 
deploy create should accept --credentials-file or similar to include secrets in the generated config.env."}} {"type": "create", "timestamp": "2026-03-18T14:45:07.038942Z", "issue_id": "so-b2b", "payload": {"title": "REGISTRY_TOKEN / imagePullSecret flow undocumented", "type": "bug", "priority": "2", "description": "create_registry_secret() exists in deployment_create.py and is called during up(), but REGISTRY_TOKEN is not documented in spec.yml or any user-facing docs. The restart command warns \"Registry token env var REGISTRY_TOKEN not set, skipping registry secret\" but doesn't explain how to set it. For GHCR private images, this is required and the flow from spec.yml -> config.env -> imagePullSecret needs documentation."}} {"type": "create", "timestamp": "2026-03-18T19:10:00.000000Z", "issue_id": "so-k1k", "payload": {"title": "Stack path resolution differs between deploy create and deployment restart", "type": "bug", "priority": "2", "description": "deploy create resolves --stack as a relative path from cwd. deployment restart resolves --stack-path as absolute, then computes repo_root as 4 parents up (assuming stack_orchestrator/data/stacks/name structure). External stacks with different nesting depths (e.g. stack-orchestrator/stacks/name = 3 levels) get wrong repo_root, causing --spec-file resolution to fail. The two commands should use the same path resolution logic."}} {"type": "create", "timestamp": "2026-03-18T19:25:00.000000Z", "issue_id": "so-l2l", "payload": {"title": "deployment restart should update in place, not delete/recreate", "type": "bug", "priority": "1", "description": "deployment restart deletes the entire namespace then recreates everything from scratch. This causes:\n\n1. **Downtime** — nothing serves traffic between delete and successful recreate\n2. **No rollback** — deleting the namespace destroys ReplicaSet revision history\n3. **Race conditions** — namespace may still be terminating when up() tries to create\n4. 
**Cascading failures** — if ANY container fails to start, the entire site is down with no fallback\n\nFix: three changes needed.\n\n**A. up() should create-or-update, not just create.** Use patch/apply semantics for Deployments, Services, Ingresses. When the pod spec changes (new env vars, new image), k8s creates a new ReplicaSet, scales it up, waits for readiness probes, then scales the old one down. Old pods serve traffic until new pods are healthy.\n\n**B. down() should never delete the namespace on restart.** Only on explicit teardown. The namespace owns the revision history. Current code: _delete_namespace() on every down(). Should: delete individual resources by label for teardown, do nothing for restart (let update-in-place handle it).\n\n**C. All containers need readiness probes.** Without them k8s considers pods ready immediately, defeating rolling update safety. laconic-so should generate readiness probes from the http-proxy routes in spec.yml (if a container has an http route, probe that port).\n\nWith these changes, k8s native rolling updates provide zero-downtime deploys and automatic rollback (if new pods fail readiness, rollout stalls, old pods keep serving).\n\nSource files:\n- deploy_k8s.py: up(), down(), _create_deployment(), _delete_namespace()\n- cluster_info.py: pod spec generation (needs readiness probes)\n- deployment.py: restart() orchestration"}} {"type": "create", "timestamp": "2026-03-18T20:15:03.000000Z", "issue_id": "so-m3m", "payload": {"title": "Add credentials-files spec key for on-disk credential injection", "type": "feature", "priority": "1", "description": "deployment restart regenerates config.env from spec.yml, wiping credentials that were appended from on-disk files (e.g. ~/.credentials/*.env). Operators must append credentials after deploy create, which is fragile and breaks on restart.\n\nFix: New top-level spec key credentials-files. 
_write_config_file() reads each file and appends its contents to config.env after writing config vars. Files are read at deploy time from the deployment host.\n\nSpec syntax:\n credentials-files:\n - ~/.credentials/dumpster-secrets.env\n - ~/.credentials/dumpster-r2.env\n\nFiles:\n- deploy/spec.py: add get_credentials_files() returning list of paths\n- deploy/deployment_create.py: in _write_config_file(), after writing config vars, read and append each credentials file (expand ~ to home dir)\n\nAlso update dumpster-stack spec.yml to use the new key and remove the ansible credential append workaround from woodburn_deployer (group_vars/all.yml credentials_env_files, stack_deploy role append tasks, restart_dumpster.yml credential steps). Those cleanups are in the woodburn_deployer repo."}} {"type":"status_update","timestamp":"2026-03-18T21:54:12.59148256Z","issue_id":"so-m3m","payload":{"status":"in_progress"}} {"type":"close","timestamp":"2026-03-18T21:55:31.6035544Z","issue_id":"so-m3m","payload":{}} {"type": "create", "timestamp": "2026-03-20T23:05:00.000000Z", "issue_id": "so-n1n", "payload": {"title": "Merge kind-mount-propagation branch — HostToContainer propagation for extraMounts", "type": "feature", "priority": "2", "description": "The kind-mount-root feature was cherry-picked to main (commit 8d03083d) but the mount propagation fix (commit 929bdab8 on branch enya-ac868cc4-kind-mount-propagation-fix) adds HostToContainer propagation so host submounts propagate into the Kind node. This is needed for ZFS child datasets and tmpfs mounts under the root. Cherry-pick 929bdab8 to main."}} {"type": "create", "timestamp": "2026-03-20T23:05:00.000000Z", "issue_id": "so-o2o", "payload": {"title": "etcd cert backup not persisting across cluster deletion", "type": "bug", "priority": "1", "description": "The extraMount for etcd at data/cluster-backups//etcd is configured but after cluster deletion the directory is empty. Caddy TLS certificates stored in etcd are lost. 
Either etcd isn't writing to the host mount, or the cleanup code is deleting the backup. Investigate _clean_etcd_keeping_certs in helpers.py."}} {"type": "create", "timestamp": "2026-03-21T00:20:00.000000Z", "issue_id": "so-p3p", "payload": {"title": "laconic-so should manage Caddy ingress image lifecycle", "type": "feature", "priority": "2", "description": "The Caddy ingress controller image is hardcoded in ingress-caddy-kind-deploy.yaml. There's no mechanism to update it without manual kubectl commands or cluster recreation. laconic-so should: 1) Allow spec.yml to specify a custom Caddy image, 2) Support updating the Caddy image as part of deployment restart, 3) Set strategy: Recreate on the Caddy Deployment (hostPort pods can't do RollingUpdate). This would let cryovial or similar tooling trigger Caddy updates through the normal deployment pipeline."}} {"type":"create","timestamp":"2026-04-08T05:51:31.557582604Z","issue_id":"so-5cd","payload":{"description":"The DockerDeployer.up() in stack_orchestrator/deploy/compose/deploy_docker.py accepts image_overrides as a parameter but silently drops it — only k8s mode (deploy_k8s.py) actually applies overrides.\n\nImpact: the --image container=image CLI flag on 'laconic-so deployment start' is a no-op for compose-mode deployments. Spec-level image-overrides: keys are also ignored in compose mode (they reach up() via deployment.py but are never applied).\n\nUse case: gorchain-stacks test scripts build :local images via build-containers, but compose files reference ghcr.io/gorbagana-dev/*:latest (so prod pulls work). 
Without image override support in compose mode, tests either need to docker tag the builds or the compose file needs to be rewritten before start — both ugly workarounds for what should be a first-class mechanism.\n\nFix sketch: in DockerDeployer.up(), when image_overrides is non-empty, write a temporary docker-compose.override.yml with {services: {name: {image: ref}}} and construct a new DockerClient with compose_files + [override_path]. Keeps k8s path untouched, reuses existing --image CLI flag and spec-level image-overrides: plumbing.","priority":"2","title":"Compose deployer ignores image_overrides","type":"bug"}} {"type": "create", "timestamp": "2026-04-13T09:54:05.207241Z", "issue_id": "so-c71", "payload": {"title": "extraPortMappings maps all compose ports unconditionally", "type": "bug", "priority": "2", "description": "Commit fb69cc58 added compose service port mapping to Kind extraPortMappings. The intent was to support network_mode: host services (RPC, gossip), but the implementation maps ALL compose ports unconditionally. Internal-only ports (postgres 5432, redis 6379) get exposed on the host, causing conflicts with local services. The port mapping should only apply to services with network_mode: host, or be controlled by a spec-level opt-in.", "source_commit": "fb69cc58"}} {"type": "create", "timestamp": "2026-04-14T09:53:31.040118Z", "issue_id": "so-078", "payload": {"title": "Deployments should be self-sufficient: copy hooks into deployment dir", "type": "feature", "priority": "1", "description": "deploy/commands.py hooks are resolved from the stack repo at runtime via get_stack_path. The deployment dir has no copy. This means: (1) the repo must remain at the same path after deploy create, (2) deployment start/restart fail with 'stack does not exist' if cwd differs from deploy create time (stack-source in deployment.yml is relative), (3) deployments cannot be moved or run independently of the source repo. 
Fix: deploy create should copy deploy/commands.py into the deployment dir alongside compose files and configmaps. call_stack_deploy_start should load from the deployment dir. The deployment becomes self-sufficient."}} {"type":"comment","timestamp":"2026-04-15T06:12:45.58660796Z","issue_id":"so-c71","payload":{"body":"Fixed in commit e909357a on fix/extraport-host-only branch. Only map ports for services with network_mode: host. Ports 80/443 for Caddy always mapped."}} {"type":"close","timestamp":"2026-04-15T06:12:45.832454065Z","issue_id":"so-c71","payload":{}} {"type":"comment","timestamp":"2026-04-15T06:18:02.64056792Z","issue_id":"so-b2b","payload":{"body":"Fixed. create_registry_secret() in deployment_create.py:583 reads image-pull-secret from spec, resolves token via token-env/token-file. Spec key renamed from registry-credentials to image-pull-secret (spec.py:140). Documented in docs/deployment_patterns.md with REGISTRY_TOKEN usage example."}} {"type":"close","timestamp":"2026-04-15T06:18:02.965856003Z","issue_id":"so-b2b","payload":{}} {"type":"comment","timestamp":"2026-04-15T06:18:04.543850719Z","issue_id":"so-k1k","payload":{"body":"Largely resolved. deployment restart (deployment.py:324) now uses 'git rev-parse --show-toplevel' to find repo_root dynamically (lines 364-378), removing the fixed 4-parents-up assumption. External stacks with varying nesting depths now work for restart. deploy create still uses get_stack_path(stack_name) which is a different mechanism but works correctly with --stack-path. Closing — the underlying breakage is gone."}} {"type":"close","timestamp":"2026-04-15T06:18:04.856542806Z","issue_id":"so-k1k","payload":{}} {"type":"comment","timestamp":"2026-04-15T06:18:08.436540869Z","issue_id":"so-076.2","payload":{"body":"Partially mitigated by commit cc6acd5f which flipped --skip-cluster-management default to true, so 'deployment stop' no longer destroys the cluster by default. 
Root fix still open: down() in deploy_k8s.py:904-936 unconditionally calls _delete_namespace() (line 929) and destroy_cluster() (line 936) when --perform-cluster-management is passed. No logic distinguishes shared vs dedicated clusters."}} {"type":"comment","timestamp":"2026-04-15T06:18:11.374723274Z","issue_id":"so-l2l","payload":{"body":"Partially addressed. Readiness probes are now generated in cluster_info.py:652-671 (part C of the original fix). Parts A and B still open: up() does not use patch/apply (delete/recreate semantics remain), and down() still calls _delete_namespace() unconditionally at deploy_k8s.py:929 on every restart. A 'fix: never delete namespace on deployment down' commit (ae2cea34) exists on a remote branch but is not merged to main."}} {"type":"create","timestamp":"2026-04-15T11:11:15.584733236Z","issue_id":"so-328","payload":{"description":"deployment restart runs create_operation(update=True) which uses copytree(dirs_exist_ok=True) to sync the stack repo into the deployment dir (deployment_create.py:1079, 1130). This is additive only — it overwrites and adds files, but never removes them. Two resulting gaps:\n\n1. Deletions don't propagate. If a script, configmap file, or compose service is removed from the stack repo, the deployment dir keeps it, and up_operation keeps applying it. The k8s ConfigMap retains removed keys; removed Deployments/Services are not cleaned up (up() is create/patch, not full reconciliation). Operators see stale files and orphan workloads that won't disappear without manual kubectl intervention or a full teardown.\n\n2. stack.yml structural changes don't auto-surface in the spec. If a stack.yml gains a new configmap declaration or a new compose file reference, restart doesn't pull it in unless the operator's spec.yml already references it. 
The spec is the contract, so additions to the stack aren't applied to live deployments just by pulling the repo.\n\nTeardown + redeploy is the only reliable way to clean up today, which is not practical in production.\n\nFix direction: create_operation(update=True) should treat the deployment dir as reconciled state — diff the desired tree (from the stack repo + spec) against what's on disk and remove files that no longer exist upstream. up_operation should then delete k8s resources (Deployments, Services, ConfigMaps) that are no longer declared by any compose/configmap source, likely scoped by an 'app.kubernetes.io/managed-by: laconic-so' label to avoid nuking unrelated resources. For new stack.yml entries, consider whether the spec needs operator action or whether restart can auto-detect and warn.","priority":"3","title":"deployment restart does not propagate repo deletions or new stack.yml entries","type":"bug"}} {"type":"comment","timestamp":"2026-04-16T06:24:38.826132538Z","issue_id":"so-l2l","payload":{"body":"Fixed in so-l2l Parts A and B on this branch:\n\n**Part A (up() as create-or-update):** Deployments, Services, ConfigMaps, Secrets, Ingresses, and Endpoints already used create-or-replace in up(). Completed coverage by adding 409 skip-if-exists for Jobs (one-shot, re-run unwanted). Readiness probes (Part C) were already present.\n\n**Part B (down() preserves namespace):** _delete_labeled_resources now deletes by 'app.kubernetes.io/stack' label and keeps the namespace Active. Full-teardown option is a new --delete-namespace flag on stop/down. down() is synchronous (waits for resources to actually be gone before returning) so tests and ansible can assume clean state on return. Orphan PVs from prior --delete-namespace runs are also cleaned on subsequent stop --delete-volumes.\n\nrestart no longer calls down() at all (deployment.py:421-468), so the original wd-d92-style namespace termination race is structurally impossible. 
In-cluster rolling updates work via k8s native semantics when Deployment pod specs change via replace."}} {"type":"close","timestamp":"2026-04-16T06:24:39.175431401Z","issue_id":"so-l2l","payload":{}} {"type":"comment","timestamp":"2026-04-16T06:24:41.70556861Z","issue_id":"so-076.2","payload":{"body":"Fixed on chore/pebble-status-audit. stop now uses label-scoped cleanup (app.kubernetes.io/stack=\u003cstack\u003e) and keeps the namespace Active by default. The Kind cluster is not destroyed unless --perform-cluster-management is passed. Full namespace teardown is opt-in via the new --delete-namespace flag. Multiple stacks sharing a namespace/cluster are now cleaned up independently, not blown away en masse."}} {"type":"close","timestamp":"2026-04-16T06:24:42.153940477Z","issue_id":"so-076.2","payload":{}} {"type":"create","timestamp":"2026-04-16T07:26:56.820142001Z","issue_id":"so-ad7","payload":{"description":"_restart_with_maintenance in deployment.py patches Ingress backends to point at the maintenance Service, but that Service is never created. get_services() in cluster_info.py only builds per-pod ClusterIP Services for pods referenced by http-proxy routes (cluster_info.py:991-992 'if not ports_set: continue'). The maintenance pod has no http-proxy route by design, so no Service is built for it.\n\nResult: during a restart with maintenance-service configured, the Ingress points to a non-existent Service. Caddy has no valid backend, connection fails, users see 'site cannot be reached' instead of the maintenance page. 
Cryovial logs correctly report the swap happened.\n\n_resolve_service_name_for_container (cluster_info.py:183) and get_services() (cluster_info.py:945) operate on inconsistent premises — the resolver assumes every pod has a {app_name}-{pod_name}-service; the builder only creates one for http-proxy-referenced pods.\n\nFix: create_services() should also build a Service for the container named by spec's maintenance-service: key.","priority":"3","title":"Maintenance swap routes Ingress to non-existent Service","type":"bug"}} {"type":"create","timestamp":"2026-04-16T08:21:00.832961223Z","issue_id":"so-b9a","payload":{"description":"_resolve_service_name_for_container (cluster_info.py:183) mechanically returns {app_name}-{pod_name}-service for any container, with no awareness of whether get_services() actually built that Service. get_services() only builds Services for pods referenced by http-proxy or maintenance-service.\n\nCurrent callers happen to be safe: get_ingress() only passes http-proxy containers, _restart_with_maintenance passes the maintenance container (covered by so-ad7). But any future caller that passes a container outside {http-proxy ∪ maintenance-service} gets a ghost Service name and silent failure.\n\nFix direction (when a third caller emerges): either teach the resolver to return None / raise when the Service wasn't built, or make get_services() build a per-pod Service unconditionally for every pod with compose ports, aligning structure with the resolver's assumption.","priority":"4","title":"Service-name resolver and builder operate on inconsistent premises","type":"bug"}} {"type":"comment","timestamp":"2026-04-16T13:36:20.150833128Z","issue_id":"so-o2o","payload":{"body":"Reproduced and partially diagnosed locally. Original 'backup not persisting' framing turns out to be inaccurate — the host bind-mount works fine and the cleanup function runs end-to-end. 
The actual bug is downstream of those.\n\nWhat we confirmed:\n- The etcd extraMount at \u003cdeployment_dir\u003e/data/cluster-backups/\u003cid\u003e/etcd is honored. After 'kind delete', the host-side data persists (16MB db file, snap files intact, owned by root mode 0700).\n- _clean_etcd_keeping_certs (helpers.py:120-279) actually runs to completion. Evidence: timestamped 'member.backup-YYYYMMDD-HHMMSS' dirs accumulate (created at line 257-260, the last step before the swap-in).\n\nWhat actually breaks:\n- After cleanup + 'kind create cluster', kubeadm init fails. kube-apiserver never opens :6443 ('connection refused' loop until kubeadm gives up). kubelet itself is healthy.\n- Hypothesis (high confidence, not yet proven by inspecting an etcd container log): version skew. Cleanup uses gcr.io/etcd-development/etcd:v3.5.9 (helpers.py:148) which produces v3.5-format on-disk data. Kind v0.32 ships kindest/node:v1.35.1 with etcd v3.6.x, which can't read v3.5-format data and crashes — apiserver can't reach it.\n- Diagnostic that nails the version skew: moving the persisted etcd dir aside ('mv etcd etcd.away') and re-running 'start --perform-cluster-management' succeeds cleanly. With persisted-etcd present, fails. So the cleanup output is what breaks the new cluster.\n\nWhy prod hasn't hit this: woodburn runs kind v0.20.0 (kindest/node:v1.27.x with etcd v3.5.x) — compatible with the v3.5.9 cleanup image. Bug is dormant there until kind is bumped.\n\nWhat we do NOT know:\n- Whether Caddy certs would actually survive a successful recreate. Cluster never came up after cleanup, so we couldn't inspect /registry/secrets/caddy-system in the new etcd. The cleanup function's whitelist preserves them in theory, but end-to-end preservation is unverified.\n\nWhat's also broken regardless of root cause:\n- _clean_etcd_keeping_certs gates ALL its diagnostic prints on opts.o.debug (lines 141, 145, 274, 278) and returns False silently on failure. 
With a normal (non-debug) run, the operator gets zero indication that cleanup attempted, succeeded, or failed. Silent failure was 90% of why this took so long to diagnose.\n\nFix direction:\n1. Source etcdctl/etcdutl from the same kindest/node image kind is using, so on-disk format always matches what the cluster will boot with. Self-adapts to kind upgrades.\n2. Make failure messages unconditional prints, not gated on debug.\n3. After (1), re-test cert preservation end-to-end and update findings."}} {"type":"comment","timestamp":"2026-04-16T14:34:14.74327248Z","issue_id":"so-o2o","payload":{"body":"Resolved direction: replace the etcd-level cleanup mechanism with a kubectl-level Caddy secret backup/restore in SO.\n\nWhy the layer change: the etcd approach (current code, or even a cleaner snapshot save/restore) shares one fundamental problem — restoring an entire etcd snapshot resurrects ALL prior cluster state, conflicts with the new cluster's bootstrap, and forces us into a maintained whitelist (kindnet/coredns/etc.). It also couples SO to etcd binary versions. Working at the k8s/kubectl layer instead gives us a selective, version-portable, operator-readable backup that exactly matches what we want: a small set of Caddy Secrets.\n\nKey insight from the use case: Caddy's secret_store reuses any valid cert it finds in its store on startup. ACME challenges only fire for domains it has no cert for. So if we restore the right Secrets BEFORE Caddy comes up on a new cluster, no Let's Encrypt traffic happens, no rate limit risk, no TLS gap, and new domains added later still provision normally.\n\nProposed spec API:\n network:\n   caddy-cert-backup:\n     path: ~/.credentials/caddy-certs/ # operator-owned host dir; presence enables\n\nLifecycle:\n1. destroy_cluster, before 'kind delete', if path configured:\n kubectl get secrets -n caddy-system -l manager=caddy -o yaml \u003e \u003cpath\u003e/caddy-secrets.yaml\n (loud failure if cluster gone or kubectl errors)\n2. 
install_ingress_for_kind, after manifest applied:\n kubectl apply -f \u003cpath\u003e/caddy-secrets.yaml if it exists\n\nWhat this lets us delete:\n- _clean_etcd_keeping_certs (~150 LoC of docker-in-docker shell-in-Python)\n- _get_etcd_host_path_from_kind_config\n- etcd extraMount setup in _generate_kind_mounts\n- Hardcoded gcr.io/etcd-development/etcd:v3.5.9 image\n- The whitelist that needs maintenance per kind version\n- All silent-failure paths\n\nWhat woodburn can simplify after the SO fix:\n- inject_caddy_certs.yml and the kubectl-backup half of cluster_recreate.yml become redundant\n- extract-caddy-certs.sh (auger) stays as a disaster-recovery tool for cases where the in-SO backup didn't run\n\nPros vs. alternatives:\n- Operator-readable PEM YAML, can be git-committed / sops-encrypted / mirrored to S3\n- Zero version coupling\n- Selective by design (only what we want)\n- Lifecycle hooks are atomic with cluster lifecycle, no remembering to run a separate backup playbook\n- ~30 LoC in SO, no docker-in-docker\n\nBranch and implementation pending operator decision to proceed."}} {"type":"comment","timestamp":"2026-04-17T08:13:32.753112339Z","issue_id":"so-o2o","payload":{"body":"Tested the version-detection fix (commit 832ab66d) locally. Fix works for its scope but surfaces two more bugs downstream. Current approach is broken at the architectural level, not just one-bug-fixable.\n\nWhat 832ab66d does: captures etcd image ref from crictl after cluster create, writes to {backup_dir}/etcd-image.txt, reads it on subsequent cleanup runs. Self-adapts to Kind upgrades. No more hardcoded v3.5.9. Confirmed locally: etcd-image.txt is written after first create, cleanup on second start uses it, member.backup-YYYYMMDD-HHMMSS dir is produced (proves cleanup ran end-to-end).\n\nWhat still fails after version fix: kubeadm init on cluster recreate. 
apiserver comes up but returns:\n- 403 Forbidden: User \"kubernetes-admin\" cannot get path /livez\n- 500: Body was not decodable ... json: cannot unmarshal array into Go value of type struct\n- eventually times out waiting for apiserver /livez\n\nTwo new bugs behind those:\n\n(a) Restore step corrupts binary values. In _clean_etcd_keeping_certs the restore loop is:\n key=$(echo $encoded | base64 -d | jq -r .key | base64 -d)\n val=$(echo $encoded | base64 -d | jq -r .value | base64 -d)\n echo \"$val\" | /backup/etcdctl put \"$key\"\nk8s stores objects as protobuf. Piping raw protobuf through bash variable expansion + echo mangles non-printable bytes, truncates at null bytes, and appends a trailing newline. Explains the \"cannot unmarshal\" from apiserver — the kubernetes Service/Endpoints objects in /registry are corrupted on re-put.\n\n(b) Whitelist is too narrow. We keep only /registry/secrets/caddy-system and the /registry/services entries for kubernetes. Everything else is deleted — including /registry/clusterrolebindings (cluster-admin is gone), /registry/serviceaccounts, /registry/secrets/kube-system (bootstrap tokens), RBAC roles, apiserver's auth config. Explains the 403 for kubernetes-admin — cluster-admin binding doesn't exist yet and kubeadm's pre-addon health check can't authorize.\n\nFixing (a) would mean rewriting the restore step to not use shell piping — either use a proper etcdctl-based Go tool, or write directly to the on-disk snapshot format. Fixing (b) means exhaustively whitelisting everything kubeadm/apiserver bootstrapping needs — a moving target across k8s versions. Both together are a significant undertaking for the actual requirement (\"keep 4 Caddy secrets across cluster recreate\").\n\nDecision: merge 832ab66d for the narrow version-detection fix + diagnosis trail, then implement the kubectl-level backup/restore on a separate branch. 
The etcd approach is not salvageable at reasonable cost."}} {"type":"comment","timestamp":"2026-04-17T11:04:26.542659482Z","issue_id":"so-o2o","payload":{"body":"Shipped in PR #746. Etcd-persistence approach replaced with a kubectl-level Caddy Secret backup/restore gated on kind-mount-root.\n\nSummary of what landed:\n- components/ingress/caddy-cert-backup.yaml: SA/Role/RoleBinding + CronJob (alpine/kubectl:1.35.3) firing every 5min, writes {kind-mount-root}/caddy-cert-backup/caddy-secrets.yaml via atomic tmp+rename.\n- install_ingress_for_kind splits into 3 phases: pre-Deployment manifests → _restore_caddy_certs (kubectl apply from backup file) → Caddy Deployment → _install_caddy_cert_backup. Caddy pod can't exist until phase 3, so certs are always in place before secret_store startup.\n- Deleted _clean_etcd_keeping_certs, _get_etcd_host_path_from_kind_config, _capture_etcd_image, _read_etcd_image_ref, _etcd_image_ref_path and the etcd+PKI block in _generate_kind_mounts.\n- No new spec keys.\n\nTest coverage in tests/k8s-deploy/run-deploy-test.sh: install assertion after first --perform-cluster-management start, plus full E2E (seed fake manager=caddy Secret → trigger CronJob → verify backup file → stop/start --perform-cluster-management for cluster recreate → assert secret restored with matching decoded value).\n\nWoodburn migration: one-shot host-kubectl export to seed {kind-mount-root}/caddy-cert-backup/caddy-secrets.yaml was done manually on the running cluster (the in-cluster CronJob couldn't reach the host because the /srv/kind → /mnt extraMount was staged in kind-config.yml but never applied to the running cluster — it was added after cluster creation). 
File is in place for the eventual cluster recreate."}} {"type":"close","timestamp":"2026-04-17T11:04:26.999711375Z","issue_id":"so-o2o","payload":{}} {"type":"create","timestamp":"2026-04-20T13:14:26.312724048Z","issue_id":"so-7fc","payload":{"description":"## Problem\n\nFile-level host-path compose volumes (e.g. `../config/foo.sh:/opt/foo.sh`) were synthesized into a kind extraMount + k8s hostPath PV chain with a sanitized containerPath (`/mnt/host-path-\u003csanitized\u003e`).\n\n- On kind: two deployments of the same stack sharing a cluster collide at that containerPath — kind only honors the first deployment's bind, so subsequent deployments' pods silently read the first's file. No error, no warning.\n- On real k8s: the same code emits `hostPath: /mnt/host-path-*` but nothing populates that path on worker nodes — effectively broken.\n\nFile-level host-path binds are conceptually k8s ConfigMaps. The `snowballtools-base-backend` stack already uses the ConfigMap-backed named-volume pattern manually; this issue is to make that automatic for all stacks.\n\n## Resolution\n\nImplemented on branch `feat/so-b86-auto-configmap-host-path` (commit `cb84388d`), stacked on top of `feat/kind-mount-invariant-check`.\n\n**No deployment-dir file rewriting.** Compose files, spec.yml, and `{deployment_dir}/config/\u003cpod\u003e/` are untouched — trivially diffable against stack source, no synthetic volume names. 
ConfigMaps are materialized at deploy start and visible only in k8s (`kubectl get cm -n \u003cns\u003e`).\n\n### Deploy create — validation only\n\n| Source shape | Behavior |\n|---|---|\n| Single file | Accepted |\n| Flat directory, no subdirs, ≤ ~700 KiB | Accepted |\n| Directory with subdirs | `DeployerException` — guidance: embed in image / split configmaps / initContainer |\n| File or directory \u003e ~700 KiB | `DeployerException` — ConfigMap budget (accounts for base64 + metadata) |\n| `:rw` on any host-path bind | `DeployerException` — use a named volume for writable data |\n\n### Deploy start — k8s object generation\n\n- `cluster_info.get_configmaps()` walks pod + job compose volumes and emits a `V1ConfigMap` per host-path bind (deduped by sanitized name), content read from `{deployment_dir}/config/\u003cpod\u003e/\u003cfile\u003e`.\n- `volumes_for_pod_files` emits `V1ConfigMapVolumeSource` instead of `V1HostPathVolumeSource` for host-path binds.\n- `volume_mounts_for_service` stats the source and sets `V1VolumeMount.sub_path` to the filename when source is a regular file.\n- `_generate_kind_mounts` no longer emits `/mnt/host-path-*` extraMounts — ConfigMap path bypasses the kind node FS entirely.\n\n### Transition\n\nThe `/mnt/host-path-*` skip in `check_mounts_compatible` is retained as a transition tolerance for deployments created before this change. Test coverage in `tests/k8s-deploy/run-deploy-test.sh` asserts host-path ConfigMaps exist in the namespace, compose/spec in deployment dir unchanged, and no `/mnt/host-path-*` entries in kind-config.yml.","priority":"2","title":"File-level host-path compose volumes alias across deployments sharing a kind cluster","type":"bug"}} {"type":"status_update","timestamp":"2026-04-20T13:14:26.833816262Z","issue_id":"so-7fc","payload":{"status":"closed"}}