feat(k8s): auto-ConfigMap for file-level host-path compose volumes

File-level host-path compose volumes (e.g. `../config/foo.sh:/opt/foo.sh`)
were synthesized into a kind extraMount + hostPath PV chain with a
sanitized containerPath (`/mnt/host-path-<sanitized>`). The sanitized
name is derived from the compose volume source and is identical across
deployments of the same stack, so two deployments sharing a cluster
collided at the containerPath — kind only honors the first deployment's
bind, subsequent deployments' pods silently read the first's content.
The same code path was also broken on real k8s, which has no way to
populate `/mnt/host-path-*` on worker nodes.

File-level compose binds are conceptually k8s ConfigMaps. The snowball
stack already uses the ConfigMap-backed named-volume pattern by hand.
Make that automatic at the k8s object-generation layer, without
touching deployment-dir compose or spec files.

Behavior at deploy create (validation only, no file mutation):
- :rw on a host-path bind        -> DeployerException (use a named
                                     volume for writable data)
- Directory with subdirectories  -> DeployerException (embed in image,
                                     split into configmaps, or use
                                     initContainer)
- Directory or file > ~700 KiB   -> DeployerException (ConfigMap budget)
- File, or flat small directory  -> accepted, handled at deploy start
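
The create-time rules above can be sketched as a standalone classifier. This is a hypothetical helper for illustration only (`classify_bind` and its return strings are not the actual `_validate_host_path_mounts` API, which raises DeployerException instead):

```python
from pathlib import Path

_BUDGET = 700 * 1024  # ~700 KiB raw-content budget (base64 + metadata headroom)

def classify_bind(src: Path, mount_opts=None) -> str:
    """Classify a host-path compose bind per the deploy-create rules."""
    opts = [t.strip() for t in mount_opts.split(",")] if mount_opts else []
    if "rw" in opts:
        return "reject: writable host-path bind"
    if src.is_file():
        if src.stat().st_size > _BUDGET:
            return "reject: over ConfigMap budget"
        return "accept: single-file ConfigMap"
    entries = list(src.iterdir())
    if any(p.is_dir() for p in entries):
        return "reject: directory with subdirectories"
    if sum(p.stat().st_size for p in entries if p.is_file()) > _BUDGET:
        return "reject: over ConfigMap budget"
    return "accept: flat-directory ConfigMap"
```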

Behavior at deploy start:
- cluster_info.get_configmaps() additionally walks pod + job compose
  volumes and emits a V1ConfigMap per host-path bind (deduped by
  sanitized name across all pods/services). Content read from
  {deployment_dir}/config/<pod>/<file> (already populated by
  _copy_extra_config_dirs).
- volumes_for_pod_files emits V1ConfigMapVolumeSource instead of
  V1HostPathVolumeSource for host-path binds.
- volume_mounts_for_service stats the source and sets V1VolumeMount
  sub_path to the filename when source is a regular file — single-key
  ConfigMaps land as files, whole-dir ConfigMaps land as directories.
- _generate_kind_mounts no longer emits `/mnt/host-path-*` extraMounts
  for these binds (the ConfigMap path bypasses kind node FS entirely).
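
The packaging step at deploy start amounts to one base64-encoded ConfigMap key per file. A minimal sketch of that shape (mirrors what `_read_host_path_source` produces, not its exact implementation):

```python
import base64
from pathlib import Path

def configmap_binary_data(abs_src: Path) -> dict:
    """One ConfigMap key per file: basename -> base64-encoded content."""
    files = [abs_src] if abs_src.is_file() else [
        p for p in abs_src.iterdir() if p.is_file()
    ]
    return {
        p.name: base64.b64encode(p.read_bytes()).decode("ascii")
        for p in files
    }
```

A single file yields a one-key dict (hence the `sub_path` trick above); a flat directory yields one key per child file.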

Deployment dir layout is unchanged. Compose files, spec.yml, and
{deployment_dir}/config/<pod>/ remain exactly as today — trivially
diffable against stack source, no synthetic volume names. ConfigMaps
are visible only in k8s (kubectl get cm -n <ns>).

The existing `/mnt/host-path-*` skip in check_mounts_compatible is
retained as a transition tolerance for deployments created before
this change.

Updates:
- deployment_create: _validate_host_path_mounts() called per pod/job
  in the create loops; 700 KiB ConfigMap budget (accounts for base64
  + metadata overhead)
- helpers: _generate_kind_mounts skips host-path entries;
  volumes_for_pod_files emits ConfigMap-backed V1Volume;
  volume_mounts_for_service takes optional deployment_dir and
  auto-sets sub_path for single-file sources
- cluster_info: new _host_path_bind_configmaps() walked from
  get_configmaps(); volume_mounts_for_service call passes
  deployment_dir from spec.file_path
- docs: document the behavior and the rejected shapes in
  deployment_patterns.md
- tests: k8s-deploy asserts the host-path ConfigMaps exist,
  compose/spec unchanged, and no `/mnt/host-path-*` extraMounts
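
The 700 KiB figure can be sanity-checked against the 1 MiB ConfigMap limit: base64 expands 3 raw bytes into 4 encoded bytes, leaving roughly 90 KiB for keys and metadata:

```python
raw = 700 * 1024             # budget on raw content bytes
encoded = -(-raw // 3) * 4   # ceil(raw / 3) * 4 after base64 expansion
limit = 1024 * 1024          # k8s ConfigMap hard limit (1 MiB)
assert encoded < limit
headroom = limit - encoded   # ~90 KiB left for keys and metadata
```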

Refs: so-b86

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
pull/748/head
Prathamesh Musale 2026-04-20 13:13:43 +00:00
parent 1d019f9c4b
commit cb84388d00
5 changed files with 333 additions and 33 deletions


@@ -242,6 +242,55 @@ their host paths under it.
against the live mounts on the control-plane container. Any mismatch
(wrong host path, or mount missing) fails the deploy.
### Static files in compose volumes → auto-ConfigMap
Compose volumes that bind a host file or flat directory into a container
(e.g. `../config/test/script.sh:/opt/run.sh`) are used to inject static
content that ships with the stack. k8s doesn't have a native notion of
this — the canonical way to inject static content is a ConfigMap.
At `deploy start`, laconic-so auto-generates a namespace-scoped
ConfigMap per host-path compose volume (deduped by source) and mounts
it into the pod instead of routing the bind through the kind node:
| Source shape | Behavior |
|---|---|
| Single file | ConfigMap with one key (the filename); pod mount uses `subPath` so the single key lands at the compose target path |
| Flat directory (no subdirs, ≤ ~700 KiB) | ConfigMap with one key per file; pod mount exposes all keys at the target path |
| Directory with subdirs, or over budget | Rejected at `deploy create` — embed in the container image, split into multiple ConfigMaps, or use an initContainer |
| `:rw` on any host-path bind | Rejected at `deploy create` — use a named volume with a spec-configured host path for writable data |
The deployment dir layout is unchanged: compose files stay verbatim and
`spec.yml` is not rewritten. Source files remain under
`{deployment_dir}/config/{pod}/` (as copied by `deploy create`); the
ConfigMap is built from them at deploy start and no kind extraMount is
emitted for these paths.
This works identically on kind and real k8s (ConfigMaps are
cluster-native; no node-side landing pad required), and two deployments
of the same stack sharing a cluster get their own per-namespace
ConfigMaps — no aliasing.
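
For illustration, the sanitized-name derivation can be approximated as below. This is a sketch, not the real `sanitize_host_path_to_volume_name`, which may differ in edge cases:

```python
import re

def sanitize(src: str) -> str:
    # Strip relative-path prefix characters, lowercase, map runs of
    # non-alphanumerics to '-', and prefix with 'host-path-'.
    trimmed = src.lstrip("./~")
    slug = re.sub(r"[^a-z0-9]+", "-", trimmed.lower()).strip("-")
    return f"host-path-{slug}"

# e.g. '../config/test/script.sh' -> 'host-path-config-test-script-sh';
# the ConfigMap is then named '<app_name>-<sanitized>' per namespace.
```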
### Writable / generated data → named volume + host path
For volumes the workload *writes to* (databases, ledgers, caches, logs),
use a named volume backed by a spec-configured host path under
`kind-mount-root`:
```yaml
# compose
volumes:
- my-data:/var/lib/foo
# spec.yml
kind-mount-root: /srv/kind
volumes:
my-data: /srv/kind/my-stack/data
```
Works on both kind (via the umbrella mount) and real k8s (operator
provisions `/srv/kind/my-stack/data` on each node).
### Migrating an Existing Cluster
If a cluster was created without an umbrella mount and you need to add a


@@ -51,8 +51,10 @@ from stack_orchestrator.util import (
)
from stack_orchestrator.deploy.spec import Spec
from stack_orchestrator.deploy.deploy_types import LaconicStackSetupCommand
from stack_orchestrator.deploy.deployer import DeployerException
from stack_orchestrator.deploy.deployer_factory import getDeployerConfigGenerator
from stack_orchestrator.deploy.deployment_context import DeploymentContext
from stack_orchestrator.deploy.k8s.helpers import is_host_path_mount
def _make_default_deployment_dir():
@@ -287,6 +289,113 @@ def call_stack_deploy_start(deployment_context):
# Inspect the pod yaml to find config files referenced in subdirectories
# Safety margin under the k8s ConfigMap 1 MiB hard limit. Accounts for
# base64 expansion (~33%) and ConfigMap metadata overhead.
_HOST_PATH_CONFIGMAP_BUDGET_BYTES = 700 * 1024
def _validate_host_path_mounts(parsed_pod_file, pod_name, pod_file_path):
"""Fail fast at deploy create on unsupported host-path compose volumes.
Host-path compose volumes (`<src>:<dst>[:opts]` with src starting
with /, ., or ~) flow through auto-generated ConfigMaps at deploy
start. ConfigMaps can't represent:
- directories with subdirectories (flat key space)
- content exceeding ~700 KiB (k8s 1 MiB limit minus base64/overhead)
- writable mounts (ConfigMap mounts are read-only)
Reject those shapes up front with a clear error so users don't hit
the failure later at start time.
Source resolution: compose paths like `../config/foo.sh` are
relative to the compose file location in the stack source tree at
deploy create time. At deploy start, the file is read from the
matching copy under `{deployment_dir}/config/{pod}/` that deploy
create lays down.
"""
compose_stack_dir = Path(pod_file_path).resolve().parent
services = parsed_pod_file.get("services") or {}
for service_name, service_info in services.items():
for volume_str in service_info.get("volumes") or []:
parts = volume_str.split(":")
if len(parts) < 2:
continue
src = parts[0]
if not is_host_path_mount(src):
continue
mount_opts = parts[2] if len(parts) > 2 else None
opt_tokens = (
[t.strip() for t in mount_opts.split(",") if t.strip()]
if mount_opts
else []
)
if "rw" in opt_tokens:
raise DeployerException(
f"Writable host-path bind not supported: "
f"'{volume_str}' in {pod_name}/{service_name}.\n"
"Host-path binds from the deployment directory are "
"static content injected as ConfigMaps (read-only). "
"Use a named volume with a spec-configured host path "
"under 'kind-mount-root' for writable data. See "
"docs/deployment_patterns.md."
)
abs_src = (compose_stack_dir / src).resolve()
if not abs_src.exists():
# Preserve existing behavior — compose-level binds with
# missing sources fail later; don't introduce a new
# early failure mode here.
continue
if abs_src.is_file():
# Single files are always fine — one-key ConfigMap with
# subPath. Budget check here too in case of huge single
# files.
size = abs_src.stat().st_size
if size > _HOST_PATH_CONFIGMAP_BUDGET_BYTES:
raise DeployerException(
f"Host-path bind '{volume_str}' in "
f"{pod_name}/{service_name} points at a file of "
f"{size} bytes, exceeding the ConfigMap budget "
f"({_HOST_PATH_CONFIGMAP_BUDGET_BYTES} bytes "
f"after base64/overhead).\n\n"
"Embed the file in the container image at build "
"time, or split into multiple smaller files."
)
continue
if abs_src.is_dir():
entries = list(abs_src.iterdir())
if any(p.is_dir() for p in entries):
raise DeployerException(
f"Directory host-path bind '{volume_str}' in "
f"{pod_name}/{service_name} contains "
"subdirectories, which cannot be represented "
"in a k8s ConfigMap.\n\n"
"Restructure the stack to either:\n"
" - embed the directory in the container "
"image at build time,\n"
" - split into multiple ConfigMap entries "
"(one per subdir),\n"
" - or use an initContainer to populate the "
"content at runtime.\n\n"
"See docs/deployment_patterns.md."
)
total = sum(
p.stat().st_size for p in entries if p.is_file()
)
if total > _HOST_PATH_CONFIGMAP_BUDGET_BYTES:
raise DeployerException(
f"Directory host-path bind '{volume_str}' in "
f"{pod_name}/{service_name} totals {total} "
f"bytes, exceeding the ConfigMap budget "
f"({_HOST_PATH_CONFIGMAP_BUDGET_BYTES} bytes "
f"after base64/overhead).\n\n"
"Embed the content in the container image at "
"build time, or split into smaller ConfigMaps. "
"See docs/deployment_patterns.md."
)
# _find_extra_config_dirs: Find config dirs referenced in the pod files
# other than the one associated with the pod
def _find_extra_config_dirs(parsed_pod_file, pod):
config_dirs = set()
@@ -1058,6 +1167,12 @@ def _write_deployment_files(
if pod_file_path is None:
continue
parsed_pod_file = yaml.load(open(pod_file_path, "r"))
# Reject host-path compose volumes whose shape can't land as a
# ConfigMap (dir-with-subdirs, oversize, writable). File-level
# and flat-dir host-path binds are accepted — they auto-convert
# to ConfigMaps at deploy start via cluster_info.get_configmaps.
if parsed_spec.is_kubernetes_deployment():
_validate_host_path_mounts(parsed_pod_file, pod, pod_file_path)
extra_config_dirs = _find_extra_config_dirs(parsed_pod_file, pod)
destination_pod_dir = destination_pods_dir.joinpath(pod)
os.makedirs(destination_pod_dir, exist_ok=True)
@@ -1138,6 +1253,10 @@ def _write_deployment_files(
job_file_path = get_job_file_path(stack_name, parsed_stack, job)
if job_file_path and job_file_path.exists():
parsed_job_file = yaml.load(open(job_file_path, "r"))
if parsed_spec.is_kubernetes_deployment():
_validate_host_path_mounts(
parsed_job_file, job, job_file_path
)
_fixup_pod_file(parsed_job_file, parsed_spec, destination_compose_dir)
with open(
destination_compose_jobs_dir.joinpath(


@@ -15,6 +15,7 @@
import os
import base64
from pathlib import Path
from kubernetes import client
from typing import Any, List, Optional, Set
@@ -22,7 +23,10 @@ from typing import Any, List, Optional, Set
from stack_orchestrator.opts import opts
from stack_orchestrator.util import env_var_map_from_file
from stack_orchestrator.deploy.k8s.helpers import (
is_host_path_mount,
named_volumes_from_pod_files,
resolve_host_path_for_kind,
sanitize_host_path_to_volume_name,
volume_mounts_for_service,
volumes_for_pod_files,
)
@@ -433,8 +437,91 @@ class ClusterInfo:
binary_data=data,
)
result.append(spec)
# Auto-generated ConfigMaps for file-level and flat-dir host-path
# compose volumes. Avoids the aliasing failure mode where two
# deployments sharing a cluster would collide at the same kind
# node path — each deployment gets its own namespace-scoped
# ConfigMap instead. See docs/deployment_patterns.md.
result.extend(self._host_path_bind_configmaps())
return result
def _host_path_bind_configmaps(self) -> List[client.V1ConfigMap]:
"""Build V1ConfigMap objects for host-path compose volumes.
Walks every service in every parsed pod/job compose file. For each
volume whose source is a host path (starts with /, ., or ~),
reads the resolved file or flat directory from the deployment
directory and packages it as a V1ConfigMap.
Dedupes by sanitized name across pods and services: a source
referenced from N places yields one ConfigMap.
"""
if self.spec.file_path is None:
return []
deployment_dir = Path(self.spec.file_path).parent
seen: Set[str] = set()
result: List[client.V1ConfigMap] = []
all_pod_maps = [self.parsed_pod_yaml_map, self.parsed_job_yaml_map]
for pod_map in all_pod_maps:
for _pod_key, pod in pod_map.items():
services = pod.get("services") or {}
for _svc_name, svc in services.items():
for mount_string in svc.get("volumes") or []:
parts = mount_string.split(":")
if len(parts) < 2:
continue
src = parts[0]
if not is_host_path_mount(src):
continue
sanitized = sanitize_host_path_to_volume_name(src)
if sanitized in seen:
continue
seen.add(sanitized)
abs_src = resolve_host_path_for_kind(
src, deployment_dir
)
data = self._read_host_path_source(abs_src, mount_string)
cm = client.V1ConfigMap(
metadata=client.V1ObjectMeta(
name=f"{self.app_name}-{sanitized}",
labels=self._stack_labels(
{"configmap-label": sanitized}
),
),
binary_data=data,
)
result.append(cm)
return result
def _read_host_path_source(
self, abs_src: Path, mount_string: str
) -> dict:
"""Read file or flat-directory content for a host-path ConfigMap.
Validates shape at read time as a defensive second check: the
same rules are enforced earlier at `deploy create`, but
deploy-dir content may have been edited since then.
"""
if not abs_src.exists():
raise RuntimeError(
f"Source for host-path compose volume does not exist: "
f"{abs_src} (volume: '{mount_string}')"
)
data = {}
if abs_src.is_file():
with open(abs_src, "rb") as f:
data[abs_src.name] = base64.b64encode(f.read()).decode("ASCII")
elif abs_src.is_dir():
for entry in abs_src.iterdir():
if entry.is_file():
with open(entry, "rb") as f:
data[entry.name] = base64.b64encode(f.read()).decode(
"ASCII"
)
return data
def get_pvs(self):
result = []
spec_volumes = self.spec.get_volumes()
@@ -621,7 +708,13 @@ class ClusterInfo:
if self.spec.get_image_registry() is not None
else image
)
volume_mounts = volume_mounts_for_service(
parsed_yaml_map,
service_name,
Path(self.spec.file_path).parent
if self.spec.file_path
else None,
)
# Handle command/entrypoint from compose file
# In docker-compose: entrypoint -> k8s command, command -> k8s args
container_command = None


@@ -607,7 +607,7 @@ def get_kind_pv_bind_mount_path(
return f"/mnt/{volume_name}"
def volume_mounts_for_service(parsed_pod_files, service, deployment_dir=None):
result = []
# Find the service
for pod in parsed_pod_files:
@@ -631,11 +631,24 @@ def volume_mounts_for_service(parsed_pod_files, service):
mount_options = (
mount_split[2] if len(mount_split) == 3 else None
)
sub_path = None
# For host path mounts, use sanitized name.
# When the source resolves to a single file,
# the auto-generated ConfigMap has one key
# (the file basename). Set subPath so the
# mount lands at the compose target as a
# single file, not as a directory with the
# key as a child entry.
if is_host_path_mount(volume_name):
k8s_volume_name = sanitize_host_path_to_volume_name(
volume_name
)
if deployment_dir is not None:
abs_src = resolve_host_path_for_kind(
volume_name, deployment_dir
)
if abs_src.is_file():
sub_path = abs_src.name
else:
k8s_volume_name = volume_name
if opts.o.debug:
@@ -643,10 +656,12 @@ def volume_mounts_for_service(parsed_pod_files, service):
print(f"k8s_volume_name: {k8s_volume_name}")
print(f"mount path: {mount_path}")
print(f"mount options: {mount_options}")
print(f"sub_path: {sub_path}")
volume_device = client.V1VolumeMount(
mount_path=mount_path,
name=k8s_volume_name,
read_only="ro" == mount_options,
sub_path=sub_path,
)
result.append(volume_device)
return result
@@ -679,7 +694,11 @@ def volumes_for_pod_files(parsed_pod_files, spec, app_name):
)
result.append(volume)
# File-level and flat-dir host-path compose volumes flow through
# auto-generated ConfigMaps. Emit a ConfigMap-backed V1Volume so
# the pod reads from the namespace-scoped ConfigMap rather than
# a kind-node hostPath (which would alias across deployments
# sharing a cluster and not work on real k8s at all).
if "services" in parsed_pod_file:
services = parsed_pod_file["services"]
for service_name in services:
@@ -694,19 +713,19 @@ def volumes_for_pod_files(parsed_pod_files, spec, app_name):
)
if sanitized_name not in seen_host_path_volumes:
seen_host_path_volumes.add(sanitized_name)
config_map = client.V1ConfigMapVolumeSource(
name=f"{app_name}-{sanitized_name}",
default_mode=0o755,
)
volume = client.V1Volume(
name=sanitized_name, config_map=config_map
)
result.append(volume)
if opts.o.debug:
print(
f"Created configmap-backed host-path "
f"volume: {sanitized_name}"
)
return result
@@ -725,7 +744,6 @@ def _make_absolute_host_path(data_mount_path: Path, deployment_dir: Path) -> Path:
def _generate_kind_mounts(parsed_pod_files, deployment_dir, deployment_context):
volume_definitions = []
volume_host_path_map = _get_host_paths_for_volumes(deployment_context)
kind_mount_root = deployment_context.spec.get_kind_mount_root()
# When kind-mount-root is set, emit a single extraMount for the root.
@ -762,26 +780,12 @@ def _generate_kind_mounts(parsed_pod_files, deployment_dir, deployment_context):
mount_path = mount_split[1]
if is_host_path_mount(volume_name):
# File-level host-path binds (e.g. compose
# `../config/foo.sh:/opt/foo.sh`) flow
# through an auto-generated k8s ConfigMap at
# deploy start — no extraMount needed. See
# cluster_info.get_configmaps().
continue
else:
# Named volume
if opts.o.debug:


@@ -166,6 +166,41 @@ for kind in serviceaccount role rolebinding cronjob; do
done
echo "caddy-cert-backup install test: passed"
# Host-path compose volumes (../config/test/script.sh, ../config/test/settings.env)
# should flow through auto-generated per-namespace ConfigMaps — no kind
# extraMount, no compose/spec rewriting. The pod mount lands via
# ConfigMap + subPath.
for cm_name in \
"${deployment_id}-host-path-config-test-script-sh" \
"${deployment_id}-host-path-config-test-settings-env"; do
if ! kubectl get configmap "$cm_name" -n "$deployment_ns" >/dev/null 2>&1; then
echo "host-path configmap test: ConfigMap $cm_name not found"
cleanup_and_exit
fi
done
echo "host-path configmap test: passed"
# Deployment dir should be untouched — compose file still has the
# original host-path volume entries and no synthetic configmap dirs.
if ! grep -q '\.\./config/test/script\.sh:/opt/run\.sh' \
"$test_deployment_dir/compose/docker-compose-test.yml"; then
echo "compose unchanged test: host-path volume entry missing"
cleanup_and_exit
fi
if [ -d "$test_deployment_dir/configmaps/host-path-config-test-script-sh" ]; then
echo "compose unchanged test: unexpected configmaps/host-path-* dir present"
cleanup_and_exit
fi
echo "compose unchanged test: passed"
# kind-config.yml should NOT contain /mnt/host-path-* extraMounts —
# they are replaced by the ConfigMap mechanism.
if grep -q 'containerPath: /mnt/host-path-' "$test_deployment_dir/kind-config.yml"; then
echo "no-host-path-extramount test: FAILED"
cleanup_and_exit
fi
echo "no-host-path-extramount test: passed"
# Check logs command works
wait_for_log_output
sleep 1