Compare commits


2 Commits

Author SHA1 Message Date
Prathamesh Musale 5af6a83fa2 Add Job and secrets support for k8s-kind deployments (#995)
Part of https://plan.wireit.in/deepstack/browse/VUL-315

Reviewed-on: https://git.vdb.to/cerc-io/stack-orchestrator/pulls/995
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2026-03-11 03:56:21 +00:00
AFDudley 8cc0a9a19a add/local-test-runner (#996)
Co-authored-by: A. F. Dudley <a.frederick.dudley@gmail.com>
Reviewed-on: https://git.vdb.to/cerc-io/stack-orchestrator/pulls/996
2026-03-09 20:04:58 +00:00
21 changed files with 601 additions and 97 deletions

TODO.md
View File

@@ -7,6 +7,25 @@ We need an "update stack" command in stack orchestrator and cleaner documentation
**Context**: Currently, `deploy init` generates a spec file and `deploy create` creates a deployment directory. The `deployment update` command (added by Thomas Lackey) only syncs env vars and restarts - it doesn't regenerate configurations. There's a gap in the workflow for updating stack configurations after initial deployment.
## Bugs
### `deploy create` doesn't auto-generate volume mappings for new pods
When a new pod is added to `stack.yml` (e.g. `monitoring`), `deploy create`
does not generate default host path mappings in spec.yml for the new pod's
volumes. The deployment then fails at scheduling because the PVCs don't exist.
**Expected**: `deploy create` enumerates all volumes from all compose files
in the stack and generates default host paths for any that aren't already
mapped in the spec.yml `volumes:` section.
**Actual**: Only volumes already in spec.yml get PVs. New volumes are silently
missing, causing `FailedScheduling: persistentvolumeclaim not found`.
**Workaround**: Manually add volume entries to spec.yml and create host dirs.
**Files**: `deployment_create.py` (`_write_config_file`, volume handling)
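The expected behavior can be sketched as a small helper: walk every compose file in the stack and add a default host-path mapping for any named volume missing from the spec. This is a hypothetical illustration, not the actual `deployment_create.py` API; the function name and the `./data/<volume>` default are assumptions modeled on what `deploy init` generates.

```python
# Hypothetical sketch of the fix: enumerate named volumes across all
# (already-parsed) compose files and fill in default host paths for any
# volume absent from the spec's `volumes:` section.
def default_volume_mappings(parsed_compose_files, spec):
    """Mutates and returns spec, adding ./data/<name> for unmapped volumes."""
    volumes = spec.setdefault("volumes", {})
    for parsed in parsed_compose_files:
        # Top-level `volumes:` keys in a compose file are the named volumes
        for name in (parsed.get("volumes") or {}):
            if name not in volumes:
                volumes[name] = f"./data/{name}"
    return spec
```

With this in place, adding a `monitoring` pod whose compose file declares new volumes would yield PVs at deploy time instead of `FailedScheduling`.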
## Architecture Refactoring
### Separate Deployer from Stack Orchestrator CLI

View File

@@ -68,7 +68,7 @@ $ laconic-so build-npms --include <package-name> --force-rebuild
## deploy
The `deploy` command group manages persistent deployments. The general workflow is `deploy init` to generate a spec file, then `deploy create` to create a deployment directory from the spec, then runtime commands like `deployment start` and `deployment stop`.
### deploy init
@@ -101,35 +101,91 @@ Options:
- `--spec-file` (required): spec file to use
- `--deployment-dir`: target directory for deployment files
- `--update`: update an existing deployment directory, preserving data volumes and env file. Changed files are backed up with a `.bak` suffix. The deployment's `config.env` and `deployment.yml` are also preserved.
- `--helm-chart`: generate Helm chart instead of deploying (k8s only)
- `--network-dir`: network configuration supplied in this directory
- `--initial-peers`: initial set of persistent peers
## deployment
Runtime commands for managing a created deployment. Use `--dir` to specify the deployment directory.
### deployment start
Start a deployment (`up` is a legacy alias):
```
$ laconic-so deployment --dir <deployment-dir> start
```
Options:
- `--stay-attached` / `--detatch-terminal`: attach to container stdout (default: detach)
- `--skip-cluster-management` / `--perform-cluster-management`: skip kind cluster creation/teardown (default: perform management). Only affects k8s-kind deployments. Use this when multiple stacks share a single cluster.
### deployment stop
Stop a deployment (`down` is a legacy alias):
```
$ laconic-so deployment --dir <deployment-dir> stop
```
Options:
- `--delete-volumes` / `--preserve-volumes`: delete data volumes on stop (default: preserve)
- `--skip-cluster-management` / `--perform-cluster-management`: skip kind cluster teardown (default: perform management). Use this to stop a single deployment without destroying a shared cluster.
### deployment restart
Restart a deployment with a GitOps-aware workflow. Pulls the latest stack code, syncs the deployment directory from the git-tracked spec, and restarts services:
```
$ laconic-so deployment --dir <deployment-dir> restart
```
See [deployment_patterns.md](deployment_patterns.md) for the recommended GitOps workflow.
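The restart workflow described above can be sketched as a small function. This is an illustrative approximation only, not the actual implementation: it composes the documented `laconic-so` subcommands (`deploy create --update`, `deployment stop`/`start`) via an injected command runner; the `--ff-only` git flag and all paths are assumptions.

```python
# Illustrative sketch of the GitOps restart flow (not the real
# stack-orchestrator code). `run` is any callable taking an argv list,
# e.g. subprocess.check_call, injected so the flow is testable.
def gitops_restart(run, stack_repo_dir, spec_file, deployment_dir):
    # 1. Pull the latest stack code from the git-tracked repo
    run(["git", "-C", stack_repo_dir, "pull", "--ff-only"])
    # 2. Re-sync the deployment directory from the tracked spec,
    #    preserving data volumes and config.env (--update)
    run(["laconic-so", "deploy", "create",
         "--spec-file", spec_file,
         "--deployment-dir", deployment_dir, "--update"])
    # 3. Restart services
    run(["laconic-so", "deployment", "--dir", deployment_dir, "stop"])
    run(["laconic-so", "deployment", "--dir", deployment_dir, "start"])
```

Injecting the runner keeps the orchestration logic separate from process execution, which is also how one would unit-test such a flow.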
### deployment ps
Show running services:
```
$ laconic-so deployment --dir <deployment-dir> ps
```
### deployment logs
View service logs:
```
$ laconic-so deployment --dir <deployment-dir> logs
```
Use `-f` to follow and `-n <count>` to tail.
### deployment exec
Execute a command in a running service container:
```
$ laconic-so deployment --dir <deployment-dir> exec <service-name> "<command>"
```
### deployment status
Show deployment status:
```
$ laconic-so deployment --dir <deployment-dir> status
```
### deployment port
Show mapped ports for a service:
```
$ laconic-so deployment --dir <deployment-dir> port <service-name> <port>
```
### deployment push-images
Push deployment images to a registry:
```
$ laconic-so deployment --dir <deployment-dir> push-images
```
### deployment run-job
Run a one-time job in the deployment:
```
$ laconic-so deployment --dir <deployment-dir> run-job <job-name>
```

View File

@@ -30,7 +30,7 @@ git commit -m "Add my-stack deployment configuration"
git push
# On deployment server: deploy from git-tracked spec
laconic-so --stack my-stack deploy create \
--spec-file /path/to/operator-repo/spec.yml \
--deployment-dir my-deployment

View File

@@ -29,6 +29,7 @@ network_key = "network"
http_proxy_key = "http-proxy"
image_registry_key = "image-registry"
configmaps_key = "configmaps"
secrets_key = "secrets"
resources_key = "resources"
volumes_key = "volumes"
security_key = "security"

View File

@@ -0,0 +1,5 @@
services:
test-job:
image: cerc/test-container:local
entrypoint: /bin/sh
command: ["-c", "echo 'Job completed successfully'"]

View File

@@ -7,3 +7,5 @@ containers:
- cerc/test-container
pods:
- test
jobs:
- test-job

View File

@@ -35,6 +35,7 @@ from stack_orchestrator.util import (
get_dev_root_path,
stack_is_in_deployment,
resolve_compose_file,
get_job_list,
)
from stack_orchestrator.deploy.deployer import DeployerException
from stack_orchestrator.deploy.deployer_factory import getDeployer
@@ -130,6 +131,7 @@ def create_deploy_context(
compose_files=cluster_context.compose_files,
compose_project_name=cluster_context.cluster,
compose_env_file=cluster_context.env_file,
job_compose_files=cluster_context.job_compose_files,
)
return DeployCommandContext(stack, cluster_context, deployer)
@@ -403,7 +405,7 @@ def _make_cluster_context(ctx, stack, include, exclude, cluster, env_file):
stack_config = get_parsed_stack_config(stack)
if stack_config is not None:
# TODO: syntax check the input here
pods_in_scope = stack_config.get("pods") or []
cluster_config = (
stack_config["config"] if "config" in stack_config else None
)
@@ -477,6 +479,22 @@ def _make_cluster_context(ctx, stack, include, exclude, cluster, env_file):
if ctx.verbose:
print(f"files: {compose_files}")
# Gather job compose files (from compose-jobs/ directory in deployment)
job_compose_files = []
if deployment and stack:
stack_config = get_parsed_stack_config(stack)
if stack_config:
jobs = get_job_list(stack_config)
compose_jobs_dir = stack.joinpath("compose-jobs")
for job in jobs:
job_file_name = os.path.join(
compose_jobs_dir, f"docker-compose-{job}.yml"
)
if os.path.exists(job_file_name):
job_compose_files.append(job_file_name)
if ctx.verbose:
print(f"job files: {job_compose_files}")
return ClusterContext(
ctx,
cluster,
@@ -485,6 +503,7 @@ def _make_cluster_context(ctx, stack, include, exclude, cluster, env_file):
post_start_commands,
cluster_config,
env_file,
job_compose_files=job_compose_files if job_compose_files else None,
)

View File

@@ -29,6 +29,7 @@ class ClusterContext:
post_start_commands: List[str]
config: Optional[str]
env_file: Optional[str]
job_compose_files: Optional[List[str]] = None
@dataclass

View File

@@ -34,7 +34,12 @@ def getDeployerConfigGenerator(type: str, deployment_context):
def getDeployer(
type: str,
deployment_context,
compose_files,
compose_project_name,
compose_env_file,
job_compose_files=None,
):
if type == "compose" or type is None:
return DockerDeployer(
@@ -54,6 +59,7 @@ def getDeployer(
compose_files,
compose_project_name,
compose_env_file,
job_compose_files=job_compose_files,
)
else:
print(f"ERROR: deploy-to {type} is not valid")

View File

@@ -265,6 +265,25 @@ def call_stack_deploy_create(deployment_context, extra_args):
imported_stack.create(deployment_context, extra_args)
def call_stack_deploy_start(deployment_context):
"""Call start() hooks after k8s deployments and jobs are created.
The start() hook receives the DeploymentContext, allowing stacks to
create additional k8s resources (Services, etc.) in the deployment namespace.
The namespace can be derived as f"laconic-{deployment_context.id}".
"""
python_file_paths = _commands_plugin_paths(deployment_context.stack.name)
for python_file_path in python_file_paths:
if python_file_path.exists():
spec = util.spec_from_file_location("commands", python_file_path)
if spec is None or spec.loader is None:
continue
imported_stack = util.module_from_spec(spec)
spec.loader.exec_module(imported_stack)
if _has_method(imported_stack, "start"):
imported_stack.start(deployment_context)
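The hook contract documented in the docstring above can be illustrated with a minimal stack-level `commands.py`. This is a hypothetical plugin, not code from the PR: only the hook name `start`, its single `deployment_context` argument, and the `laconic-{id}` namespace derivation come from the source; everything else is an assumption.

```python
# Hypothetical stack commands.py illustrating the start() hook contract:
# it runs after pods and jobs are created and may create additional k8s
# resources (Services, etc.) in the deployment's namespace.
def start(deployment_context):
    # Namespace derivation per the call_stack_deploy_start docstring
    namespace = f"laconic-{deployment_context.id}"
    print(f"stack start() hook running in namespace {namespace}")
    # A real hook might now call kubernetes.client APIs against `namespace`
    return namespace
```

Because the hook is discovered via `_has_method(imported_stack, "start")`, stacks that don't define `start()` are simply skipped.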
# Inspect the pod yaml to find config files referenced in subdirectories
# other than the one associated with the pod
def _find_extra_config_dirs(parsed_pod_file, pod):
@@ -477,6 +496,9 @@ def init_operation(
spec_file_content["volumes"] = {**volume_descriptors, **orig_volumes}
if configmap_descriptors:
spec_file_content["configmaps"] = configmap_descriptors
if "k8s" in deployer_type:
if "secrets" not in spec_file_content:
spec_file_content["secrets"] = {}
if opts.o.debug:
print(
@@ -982,17 +1004,7 @@ def _write_deployment_files(
script_paths = get_pod_script_paths(parsed_stack, pod)
_copy_files_to_directory(script_paths, destination_script_dir)
if not parsed_spec.is_kubernetes_deployment():
# TODO:
# This is odd - looks up config dir that matches a volume name,
# then copies as a mount dir?
@@ -1014,9 +1026,22 @@ def _write_deployment_files(
dirs_exist_ok=True,
)
# Copy configmap directories for k8s deployments (outside the pod loop
# so this works for jobs-only stacks too)
if parsed_spec.is_kubernetes_deployment():
for configmap in parsed_spec.get_configmaps():
source_config_dir = resolve_config_dir(stack_name, configmap)
if os.path.exists(source_config_dir):
destination_config_dir = target_dir.joinpath(
"configmaps", configmap
)
copytree(
source_config_dir, destination_config_dir, dirs_exist_ok=True
)
# Copy the job files into the target dir
jobs = get_job_list(parsed_stack)
if jobs:
destination_compose_jobs_dir = target_dir.joinpath("compose-jobs")
os.makedirs(destination_compose_jobs_dir, exist_ok=True)
for job in jobs:

View File

@@ -72,15 +72,17 @@ def to_k8s_resource_requirements(resources: Resources) -> client.V1ResourceRequi
class ClusterInfo:
parsed_pod_yaml_map: Any
parsed_job_yaml_map: Any
image_set: Set[str] = set()
app_name: str
stack_name: str
environment_variables: DeployEnvVars
spec: Spec
def __init__(self) -> None:
self.parsed_job_yaml_map = {}
def int(self, pod_files: List[str], compose_env_file, deployment_name, spec: Spec, stack_name=""):
self.parsed_pod_yaml_map = parsed_pod_files_map_from_file_names(pod_files)
# Find the set of images in the pods
self.image_set = images_for_deployment(pod_files)
@@ -90,10 +92,23 @@ class ClusterInfo:
}
self.environment_variables = DeployEnvVars(env_vars)
self.app_name = deployment_name
self.stack_name = stack_name
self.spec = spec
if opts.o.debug:
print(f"Env vars: {self.environment_variables.map}")
def init_jobs(self, job_files: List[str]):
"""Initialize parsed job YAML map from job compose files."""
self.parsed_job_yaml_map = parsed_pod_files_map_from_file_names(job_files)
if opts.o.debug:
print(f"Parsed job yaml map: {self.parsed_job_yaml_map}")
def _all_named_volumes(self) -> list:
"""Return named volumes from both pod and job compose files."""
volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map)
volumes.extend(named_volumes_from_pod_files(self.parsed_job_yaml_map))
return volumes
def get_nodeports(self):
nodeports = []
for pod_name in self.parsed_pod_yaml_map:
@@ -257,7 +272,7 @@ class ClusterInfo:
def get_pvcs(self):
result = []
spec_volumes = self.spec.get_volumes()
named_volumes = self._all_named_volumes()
resources = self.spec.get_volume_resources()
if not resources:
resources = DEFAULT_VOLUME_RESOURCES
@@ -301,7 +316,7 @@ class ClusterInfo:
def get_configmaps(self):
result = []
spec_configmaps = self.spec.get_configmaps()
named_volumes = self._all_named_volumes()
for cfg_map_name, cfg_map_path in spec_configmaps.items():
if cfg_map_name not in named_volumes:
if opts.o.debug:
@@ -337,7 +352,7 @@ class ClusterInfo:
def get_pvs(self):
result = []
spec_volumes = self.spec.get_volumes()
named_volumes = self._all_named_volumes()
resources = self.spec.get_volume_resources()
if not resources:
resources = DEFAULT_VOLUME_RESOURCES
@@ -394,15 +409,55 @@ class ClusterInfo:
result.append(pv)
return result
def _any_service_has_host_network(self):
for pod_name in self.parsed_pod_yaml_map:
pod = self.parsed_pod_yaml_map[pod_name]
for svc in pod.get("services", {}).values():
if svc.get("network_mode") == "host":
return True
return False
def _resolve_container_resources(
self, container_name: str, service_info: dict, global_resources: Resources
) -> Resources:
"""Resolve resources for a container using layered priority.
Priority: spec per-container > compose deploy.resources
> spec global > DEFAULT
"""
# 1. Check spec.yml for per-container override
per_container = self.spec.get_container_resources_for(container_name)
if per_container:
return per_container
# 2. Check compose service_info for deploy.resources
deploy_block = service_info.get("deploy", {})
compose_resources = deploy_block.get("resources", {}) if deploy_block else {}
if compose_resources:
return Resources(compose_resources)
# 3. Fall back to spec.yml global (already resolved with DEFAULT fallback)
return global_resources
def _build_containers(
self,
parsed_yaml_map: Any,
image_pull_policy: Optional[str] = None,
) -> tuple:
"""Build k8s container specs from parsed compose YAML.
Returns a tuple of (containers, services, volumes) where:
- containers: list of V1Container objects
- services: the last services dict processed (used for annotations/labels)
- volumes: list of V1Volume objects
"""
containers = []
services = {}
global_resources = self.spec.get_container_resources()
if not global_resources:
global_resources = DEFAULT_CONTAINER_RESOURCES
for pod_name in parsed_yaml_map:
pod = parsed_yaml_map[pod_name]
services = pod["services"]
for service_name in services:
container_name = service_name
@@ -459,7 +514,7 @@ class ClusterInfo:
else image
)
volume_mounts = volume_mounts_for_service(
parsed_yaml_map, service_name
)
# Handle command/entrypoint from compose file
# In docker-compose: entrypoint -> k8s command, command -> k8s args
@@ -483,6 +538,19 @@ class ClusterInfo:
)
)
]
# Mount user-declared secrets from spec.yml
for user_secret_name in self.spec.get_secrets():
env_from.append(
client.V1EnvFromSource(
secret_ref=client.V1SecretEnvSource(
name=user_secret_name,
optional=True,
)
)
)
container_resources = self._resolve_container_resources(
container_name, service_info, global_resources
)
container = client.V1Container(
name=container_name,
image=image_to_use,
@@ -501,11 +569,18 @@ class ClusterInfo:
if self.spec.get_capabilities()
else None,
),
resources=to_k8s_resource_requirements(container_resources),
)
containers.append(container)
volumes = volumes_for_pod_files(
parsed_yaml_map, self.spec, self.app_name
)
return containers, services, volumes
# TODO: put things like image pull policy into an object-scope struct
def get_deployment(self, image_pull_policy: Optional[str] = None):
containers, services, volumes = self._build_containers(
self.parsed_pod_yaml_map, image_pull_policy
)
registry_config = self.spec.get_image_registry_config()
if registry_config:
@@ -516,6 +591,8 @@ class ClusterInfo:
annotations = None
labels = {"app": self.app_name}
if self.stack_name:
labels["app.kubernetes.io/stack"] = self.stack_name
affinity = None
tolerations = None
@@ -568,6 +645,7 @@ class ClusterInfo:
)
)
use_host_network = self._any_service_has_host_network()
template = client.V1PodTemplateSpec(
metadata=client.V1ObjectMeta(annotations=annotations, labels=labels),
spec=client.V1PodSpec(
@@ -577,6 +655,8 @@ class ClusterInfo:
affinity=affinity,
tolerations=tolerations,
runtime_class_name=self.spec.get_runtime_class(),
host_network=use_host_network or None,
dns_policy=("ClusterFirstWithHostNet" if use_host_network else None),
),
)
spec = client.V1DeploymentSpec(
@@ -592,3 +672,75 @@ class ClusterInfo:
spec=spec,
)
return deployment
def get_jobs(self, image_pull_policy: Optional[str] = None) -> List[client.V1Job]:
"""Build k8s Job objects from parsed job compose files.
Each job compose file produces a V1Job with:
- restartPolicy: Never
- backoffLimit: 0
- Name: {app_name}-job-{job_name}
"""
if not self.parsed_job_yaml_map:
return []
jobs = []
registry_config = self.spec.get_image_registry_config()
if registry_config:
secret_name = f"{self.app_name}-registry"
image_pull_secrets = [client.V1LocalObjectReference(name=secret_name)]
else:
image_pull_secrets = []
for job_file in self.parsed_job_yaml_map:
# Build containers for this single job file
single_job_map = {job_file: self.parsed_job_yaml_map[job_file]}
containers, _services, volumes = self._build_containers(
single_job_map, image_pull_policy
)
# Derive job name from file path: docker-compose-<name>.yml -> <name>
base = os.path.basename(job_file)
# Strip docker-compose- prefix and .yml suffix
job_name = base
if job_name.startswith("docker-compose-"):
job_name = job_name[len("docker-compose-"):]
if job_name.endswith(".yml"):
job_name = job_name[: -len(".yml")]
elif job_name.endswith(".yaml"):
job_name = job_name[: -len(".yaml")]
# Use a distinct app label for job pods so they don't get
# picked up by pods_in_deployment() which queries app={app_name}.
pod_labels = {
"app": f"{self.app_name}-job",
**({"app.kubernetes.io/stack": self.stack_name} if self.stack_name else {}),
}
template = client.V1PodTemplateSpec(
metadata=client.V1ObjectMeta(
labels=pod_labels
),
spec=client.V1PodSpec(
containers=containers,
image_pull_secrets=image_pull_secrets,
volumes=volumes,
restart_policy="Never",
),
)
job_spec = client.V1JobSpec(
template=template,
backoff_limit=0,
)
job_labels = {"app": self.app_name, **({"app.kubernetes.io/stack": self.stack_name} if self.stack_name else {})}
job = client.V1Job(
api_version="batch/v1",
kind="Job",
metadata=client.V1ObjectMeta(
name=f"{self.app_name}-job-{job_name}",
labels=job_labels,
),
spec=job_spec,
)
jobs.append(job)
return jobs

View File

@@ -95,6 +95,7 @@ class K8sDeployer(Deployer):
type: str
core_api: client.CoreV1Api
apps_api: client.AppsV1Api
batch_api: client.BatchV1Api
networking_api: client.NetworkingV1Api
k8s_namespace: str
kind_cluster_name: str
@@ -110,6 +111,7 @@ class K8sDeployer(Deployer):
compose_files,
compose_project_name,
compose_env_file,
job_compose_files=None,
) -> None:
self.type = type
self.skip_cluster_management = False
@@ -124,15 +126,24 @@ class K8sDeployer(Deployer):
# Use deployment-specific namespace for resource isolation and easy cleanup
self.k8s_namespace = f"laconic-{compose_project_name}"
self.cluster_info = ClusterInfo()
# stack.name may be an absolute path (from spec "stack:" key after
# path resolution). Extract just the directory basename for labels.
raw_name = deployment_context.stack.name if deployment_context else ""
stack_name = Path(raw_name).name if raw_name else ""
self.cluster_info.int(
compose_files,
compose_env_file,
compose_project_name,
deployment_context.spec,
stack_name=stack_name,
)
# Initialize job compose files if provided
if job_compose_files:
self.cluster_info.init_jobs(job_compose_files)
if opts.o.debug:
print(f"Deployment dir: {deployment_context.deployment_dir}")
print(f"Compose files: {compose_files}")
print(f"Job compose files: {job_compose_files}")
print(f"Project name: {compose_project_name}")
print(f"Env file: {compose_env_file}")
print(f"Type: {type}")
@@ -150,6 +161,7 @@ class K8sDeployer(Deployer):
self.core_api = client.CoreV1Api()
self.networking_api = client.NetworkingV1Api()
self.apps_api = client.AppsV1Api()
self.batch_api = client.BatchV1Api()
self.custom_obj_api = client.CustomObjectsApi()
def _ensure_namespace(self):
@@ -256,6 +268,11 @@ class K8sDeployer(Deployer):
print(f"{cfg_rsp}")
def _create_deployment(self):
# Skip if there are no pods to deploy (e.g. jobs-only stacks)
if not self.cluster_info.parsed_pod_yaml_map:
if opts.o.debug:
print("No pods defined, skipping Deployment creation")
return
# Process compose files into a Deployment
deployment = self.cluster_info.get_deployment(
image_pull_policy=None if self.is_kind() else "Always"
@@ -293,6 +310,26 @@ class K8sDeployer(Deployer):
print("Service created:")
print(f"{service_resp}")
def _create_jobs(self):
# Process job compose files into k8s Jobs
jobs = self.cluster_info.get_jobs(
image_pull_policy=None if self.is_kind() else "Always"
)
for job in jobs:
if opts.o.debug:
print(f"Sending this job: {job}")
if not opts.o.dry_run:
job_resp = self.batch_api.create_namespaced_job(
body=job, namespace=self.k8s_namespace
)
if opts.o.debug:
print("Job created:")
if job_resp.metadata:
print(
f" {job_resp.metadata.namespace} "
f"{job_resp.metadata.name}"
)
def _find_certificate_for_host_name(self, host_name):
all_certificates = self.custom_obj_api.list_namespaced_custom_object(
group="cert-manager.io",
@@ -384,6 +421,7 @@ class K8sDeployer(Deployer):
self._create_volume_data()
self._create_deployment()
self._create_jobs()
http_proxy_info = self.cluster_info.spec.get_http_proxy()
# Note: we don't support tls for kind (enabling tls causes errors)
@@ -426,6 +464,11 @@ class K8sDeployer(Deployer):
print("NodePort created:")
print(f"{nodeport_resp}")
# Call start() hooks — stacks can create additional k8s resources
if self.deployment_context:
from stack_orchestrator.deploy.deployment_create import call_stack_deploy_start
call_stack_deploy_start(self.deployment_context)
def down(self, timeout, volumes, skip_cluster_management):
self.skip_cluster_management = skip_cluster_management
self.connect_api()
@@ -574,14 +617,14 @@ class K8sDeployer(Deployer):
def logs(self, services, tail, follow, stream):
self.connect_api()
pods = pods_in_deployment(self.core_api, self.cluster_info.app_name, namespace=self.k8s_namespace)
if len(pods) > 1:
print("Warning: more than one pod in the deployment")
if len(pods) == 0:
log_data = "******* Pods not running ********\n"
else:
k8s_pod_name = pods[0]
containers = containers_in_pod(self.core_api, k8s_pod_name, namespace=self.k8s_namespace)
# If pod not started, logs request below will throw an exception
try:
log_data = ""
@@ -599,6 +642,10 @@ class K8sDeployer(Deployer):
return log_stream_from_string(log_data)
def update(self):
if not self.cluster_info.parsed_pod_yaml_map:
if opts.o.debug:
print("No pods defined, skipping update")
return
self.connect_api() self.connect_api()
ref_deployment = self.cluster_info.get_deployment() ref_deployment = self.cluster_info.get_deployment()
if not ref_deployment or not ref_deployment.metadata: if not ref_deployment or not ref_deployment.metadata:
@ -659,16 +706,10 @@ class K8sDeployer(Deployer):
def run_job(self, job_name: str, helm_release: Optional[str] = None): def run_job(self, job_name: str, helm_release: Optional[str] = None):
if not opts.o.dry_run: if not opts.o.dry_run:
from stack_orchestrator.deploy.k8s.helm.job_runner import run_helm_job
# Check if this is a helm-based deployment # Check if this is a helm-based deployment
chart_dir = self.deployment_dir / "chart" chart_dir = self.deployment_dir / "chart"
if not chart_dir.exists(): if chart_dir.exists():
# TODO: Implement job support for compose-based K8s deployments from stack_orchestrator.deploy.k8s.helm.job_runner import run_helm_job
raise Exception(
f"Job support is only available for helm-based "
f"deployments. Chart directory not found: {chart_dir}"
)
# Run the job using the helm job runner # Run the job using the helm job runner
run_helm_job( run_helm_job(
@ -679,6 +720,29 @@ class K8sDeployer(Deployer):
timeout=600, timeout=600,
verbose=opts.o.verbose, verbose=opts.o.verbose,
) )
else:
# Non-Helm path: create job from ClusterInfo
self.connect_api()
jobs = self.cluster_info.get_jobs(
image_pull_policy=None if self.is_kind() else "Always"
)
# Find the matching job by name
target_name = f"{self.cluster_info.app_name}-job-{job_name}"
matched_job = None
for job in jobs:
if job.metadata and job.metadata.name == target_name:
matched_job = job
break
if matched_job is None:
raise Exception(
f"Job '{job_name}' not found. Available jobs: "
f"{[j.metadata.name for j in jobs if j.metadata]}"
)
if opts.o.debug:
print(f"Creating job: {target_name}")
self.batch_api.create_namespaced_job(
body=matched_job, namespace=self.k8s_namespace
)
def is_kind(self): def is_kind(self):
return self.type == "k8s-kind" return self.type == "k8s-kind"
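The non-Helm branch of `run_job` resolves jobs by the `{app_name}-job-{job_name}` naming convention. A standalone sketch of that lookup, with a hypothetical `find_job_name` helper standing in for the deployer method and plain strings in place of k8s Job objects:

```python
def find_job_name(app_name, job_name, existing_job_names):
    # Mirrors run_job(): build the conventional name, then require that a
    # job with that exact name exists, else fail listing the available names.
    target_name = f"{app_name}-job-{job_name}"
    if target_name not in existing_job_names:
        raise Exception(
            f"Job '{job_name}' not found. Available jobs: {existing_job_names}"
        )
    return target_name
```

Matching on the exact generated name, rather than a substring, keeps a request for `test` from accidentally selecting a `test-cleanup` job.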

View File

@@ -393,10 +393,10 @@ def load_images_into_kind(kind_cluster_name: str, image_set: Set[str]):
         raise DeployerException(f"kind load docker-image failed: {result}")
 
 
-def pods_in_deployment(core_api: client.CoreV1Api, deployment_name: str):
+def pods_in_deployment(core_api: client.CoreV1Api, deployment_name: str, namespace: str = "default"):
     pods = []
     pod_response = core_api.list_namespaced_pod(
-        namespace="default", label_selector=f"app={deployment_name}"
+        namespace=namespace, label_selector=f"app={deployment_name}"
     )
     if opts.o.debug:
         print(f"pod_response: {pod_response}")
@@ -406,10 +406,10 @@ def pods_in_deployment(core_api: client.CoreV1Api, deployment_name: str):
     return pods
 
 
-def containers_in_pod(core_api: client.CoreV1Api, pod_name: str) -> List[str]:
+def containers_in_pod(core_api: client.CoreV1Api, pod_name: str, namespace: str = "default") -> List[str]:
     containers: List[str] = []
     pod_response = cast(
-        client.V1Pod, core_api.read_namespaced_pod(pod_name, namespace="default")
+        client.V1Pod, core_api.read_namespaced_pod(pod_name, namespace=namespace)
     )
     if opts.o.debug:
         print(f"pod_response: {pod_response}")

View File

@@ -115,11 +115,35 @@ class Spec:
     def get_configmaps(self):
         return self.obj.get(constants.configmaps_key, {})
 
+    def get_secrets(self):
+        return self.obj.get(constants.secrets_key, {})
+
     def get_container_resources(self):
         return Resources(
             self.obj.get(constants.resources_key, {}).get("containers", {})
         )
 
+    def get_container_resources_for(
+        self, container_name: str
+    ) -> typing.Optional[Resources]:
+        """Look up per-container resource overrides from spec.yml.
+
+        Checks resources.containers.<container_name> in the spec. Returns None
+        if no per-container override exists (caller falls back to other sources).
+        """
+        containers_block = self.obj.get(constants.resources_key, {}).get(
+            "containers", {}
+        )
+        if container_name in containers_block:
+            entry = containers_block[container_name]
+            # Only treat it as a per-container override if it's a dict with
+            # reservations/limits nested inside (not a top-level global key)
+            if isinstance(entry, dict) and (
+                "reservations" in entry or "limits" in entry
+            ):
+                return Resources(entry)
+        return None
+
     def get_volume_resources(self):
         return Resources(
             self.obj.get(constants.resources_key, {}).get(constants.volumes_key, {})
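The guard in `get_container_resources_for` distinguishes a real per-container override from a global `reservations`/`limits` key that happens to sit in the same block. A minimal sketch of the same lookup on plain dicts (the `Spec` and `Resources` wrappers are omitted here):

```python
def container_override(spec_obj, container_name):
    # Return the override dict for resources.containers.<container_name>,
    # or None so the caller falls back to other sources. An entry only
    # counts as an override if it nests reservations/limits inside.
    containers_block = spec_obj.get("resources", {}).get("containers", {})
    entry = containers_block.get(container_name)
    if isinstance(entry, dict) and ("reservations" in entry or "limits" in entry):
        return entry
    return None
```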

View File

@@ -19,7 +19,7 @@ from pathlib import Path
 from urllib.parse import urlparse
 from tempfile import NamedTemporaryFile
 
-from stack_orchestrator.util import error_exit, global_options2
+from stack_orchestrator.util import error_exit, global_options2, get_yaml
 from stack_orchestrator.deploy.deployment_create import init_operation, create_operation
 from stack_orchestrator.deploy.deploy import create_deploy_context
 from stack_orchestrator.deploy.deploy_types import DeployCommandContext
@@ -41,19 +41,23 @@ def _fixup_container_tag(deployment_dir: str, image: str):
 def _fixup_url_spec(spec_file_name: str, url: str):
     # url is like: https://example.com/path
     parsed_url = urlparse(url)
-    http_proxy_spec = f"""
-  http-proxy:
-    - host-name: {parsed_url.hostname}
-      routes:
-        - path: '{parsed_url.path if parsed_url.path else "/"}'
-          proxy-to: webapp:80
-"""
     spec_file_path = Path(spec_file_name)
+    yaml = get_yaml()
     with open(spec_file_path) as rfile:
-        contents = rfile.read()
+        contents = yaml.load(rfile)
-    contents = contents + http_proxy_spec
+    contents.setdefault("network", {})["http-proxy"] = [
+        {
+            "host-name": parsed_url.hostname,
+            "routes": [
+                {
+                    "path": parsed_url.path if parsed_url.path else "/",
+                    "proxy-to": "webapp:80",
+                }
+            ],
+        }
+    ]
     with open(spec_file_path, "w") as wfile:
-        wfile.write(contents)
+        yaml.dump(contents, wfile)
 
 
 def create_deployment(
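The rewritten `_fixup_url_spec` builds the `http-proxy` entry as structured data and lets the YAML library handle serialization, instead of appending a formatted string to the file. The core transformation, sketched with `urlparse` and a plain dict (file I/O and `get_yaml` omitted):

```python
from urllib.parse import urlparse

def http_proxy_entry(url):
    # Build the network.http-proxy entry the way _fixup_url_spec now does:
    # as data, so the YAML dumper handles quoting and indentation.
    parsed_url = urlparse(url)
    return {
        "host-name": parsed_url.hostname,
        "routes": [
            {
                "path": parsed_url.path if parsed_url.path else "/",
                "proxy-to": "webapp:80",
            }
        ],
    }

contents = {}  # stands in for the loaded spec.yml mapping
contents.setdefault("network", {})["http-proxy"] = [http_proxy_entry("https://example.com/app")]
```

Working on parsed data also avoids the duplicate-key problem that naive string appending can create when the spec already has a `network` section.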

View File

@@ -75,6 +75,8 @@ def get_parsed_stack_config(stack):
 
 def get_pod_list(parsed_stack):
     # Handle both old and new format
+    if "pods" not in parsed_stack or not parsed_stack["pods"]:
+        return []
     pods = parsed_stack["pods"]
     if type(pods[0]) is str:
         result = pods
@@ -103,7 +105,7 @@ def get_job_list(parsed_stack):
 
 def get_plugin_code_paths(stack) -> List[Path]:
     parsed_stack = get_parsed_stack_config(stack)
-    pods = parsed_stack["pods"]
+    pods = parsed_stack.get("pods") or []
     result: Set[Path] = set()
     for pod in pods:
         if type(pod) is str:
@@ -153,15 +155,16 @@ def resolve_job_compose_file(stack, job_name: str):
         if proposed_file.exists():
             return proposed_file
     # If we don't find it fall through to the internal case
-    # TODO: Add internal compose-jobs directory support if needed
-    # For now, jobs are expected to be in external stacks only
-    compose_jobs_base = Path(stack).parent.parent.joinpath("compose-jobs")
+    data_dir = Path(__file__).absolute().parent.joinpath("data")
+    compose_jobs_base = data_dir.joinpath("compose-jobs")
     return compose_jobs_base.joinpath(f"docker-compose-{job_name}.yml")
 
 
 def get_pod_file_path(stack, parsed_stack, pod_name: str):
-    pods = parsed_stack["pods"]
+    pods = parsed_stack.get("pods") or []
     result = None
+    if not pods:
+        return result
     if type(pods[0]) is str:
         result = resolve_compose_file(stack, pod_name)
     else:
@@ -189,9 +192,9 @@ def get_job_file_path(stack, parsed_stack, job_name: str):
 
 def get_pod_script_paths(parsed_stack, pod_name: str):
-    pods = parsed_stack["pods"]
+    pods = parsed_stack.get("pods") or []
     result = []
-    if not type(pods[0]) is str:
+    if not pods or not type(pods[0]) is str:
         for pod in pods:
             if pod["name"] == pod_name:
                 pod_root_dir = os.path.join(
@@ -207,9 +210,9 @@ def get_pod_script_paths(parsed_stack, pod_name: str):
 
 def pod_has_scripts(parsed_stack, pod_name: str):
-    pods = parsed_stack["pods"]
+    pods = parsed_stack.get("pods") or []
     result = False
-    if type(pods[0]) is str:
+    if not pods or type(pods[0]) is str:
         result = False
     else:
         for pod in pods:
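The recurring change in these helpers is the same guard: treat a missing or empty `pods` key as "no pods" (a job-only stack) instead of raising `KeyError` or `IndexError`. A standalone sketch of `get_pod_list` with that guard; the new-format branch returning `pod["name"]` is an assumption, since the full original body is not shown in this hunk:

```python
def get_pod_list(parsed_stack):
    # Handle both old (list of names) and new (list of dicts) pod formats,
    # and stacks that define no pods at all.
    if "pods" not in parsed_stack or not parsed_stack["pods"]:
        return []
    pods = parsed_stack["pods"]
    if isinstance(pods[0], str):
        return pods
    return [pod["name"] for pod in pods]  # assumed new-format shape
```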

View File

@@ -105,6 +105,15 @@ fi
 # Add a config file to be picked up by the ConfigMap before starting.
 echo "dbfc7a4d-44a7-416d-b5f3-29842cc47650" > $test_deployment_dir/configmaps/test-config/test_config
 
+# Add secrets to the deployment spec (references a pre-existing k8s Secret by name).
+# deploy init already writes an empty 'secrets: {}' key, so we replace it
+# rather than appending (ruamel.yaml rejects duplicate keys).
+deployment_spec_file=${test_deployment_dir}/spec.yml
+sed -i 's/^secrets: {}$/secrets:\n  test-secret:\n    - TEST_SECRET_KEY/' ${deployment_spec_file}
+
+# Get the deployment ID for kubectl queries
+deployment_id=$(cat ${test_deployment_dir}/deployment.yml | cut -d ' ' -f 2)
+
 echo "deploy create output file test: passed"
 
 # Try to start the deployment
 $TEST_TARGET_SO deployment --dir $test_deployment_dir start
@@ -166,12 +175,71 @@ else
     delete_cluster_exit
 fi
 
-# Stop then start again and check the volume was preserved
-$TEST_TARGET_SO deployment --dir $test_deployment_dir stop
-# Sleep a bit just in case
-# sleep for longer to check if that's why the subsequent create cluster fails
-sleep 20
-$TEST_TARGET_SO deployment --dir $test_deployment_dir start
+# --- New feature tests: namespace, labels, jobs, secrets ---
+
+# Check that the pod is in the deployment-specific namespace (not default)
+ns_pod_count=$(kubectl get pods -n laconic-${deployment_id} -l app=${deployment_id} --no-headers 2>/dev/null | wc -l)
+if [ "$ns_pod_count" -gt 0 ]; then
+    echo "namespace isolation test: passed"
+else
+    echo "namespace isolation test: FAILED"
+    echo "Expected pod in namespace laconic-${deployment_id}"
+    delete_cluster_exit
+fi
+
+# Check that the stack label is set on the pod
+stack_label_count=$(kubectl get pods -n laconic-${deployment_id} -l app.kubernetes.io/stack=test --no-headers 2>/dev/null | wc -l)
+if [ "$stack_label_count" -gt 0 ]; then
+    echo "stack label test: passed"
+else
+    echo "stack label test: FAILED"
+    delete_cluster_exit
+fi
+
+# Check that the job completed successfully
+for i in {1..30}; do
+    job_status=$(kubectl get job ${deployment_id}-job-test-job -n laconic-${deployment_id} -o jsonpath='{.status.succeeded}' 2>/dev/null || true)
+    if [ "$job_status" == "1" ]; then
+        break
+    fi
+    sleep 2
+done
+if [ "$job_status" == "1" ]; then
+    echo "job completion test: passed"
+else
+    echo "job completion test: FAILED"
+    echo "Job status.succeeded: ${job_status}"
+    delete_cluster_exit
+fi
+
+# Check that the secrets spec results in an envFrom secretRef on the pod
+secret_ref=$(kubectl get pod -n laconic-${deployment_id} -l app=${deployment_id} \
+    -o jsonpath='{.items[0].spec.containers[0].envFrom[?(@.secretRef.name=="test-secret")].secretRef.name}' 2>/dev/null || true)
+if [ "$secret_ref" == "test-secret" ]; then
+    echo "secrets envFrom test: passed"
+else
+    echo "secrets envFrom test: FAILED"
+    echo "Expected secretRef 'test-secret', got: ${secret_ref}"
+    delete_cluster_exit
+fi
+
+# Stop then start again and check the volume was preserved.
+# Use --skip-cluster-management to reuse the existing kind cluster instead of
+# destroying and recreating it (which fails on CI runners due to stale etcd/certs
+# and cgroup detection issues).
+# Use --delete-volumes to clear PVs so fresh PVCs can bind on restart.
+# Bind-mount data survives on the host filesystem; provisioner volumes are recreated fresh.
+$TEST_TARGET_SO deployment --dir $test_deployment_dir stop --delete-volumes --skip-cluster-management
+
+# Wait for the namespace to be fully terminated before restarting.
+# Without this, 'start' fails with 403 Forbidden because the namespace
+# is still in Terminating state.
+for i in {1..60}; do
+    if ! kubectl get namespace laconic-${deployment_id} 2>/dev/null | grep -q .; then
+        break
+    fi
+    sleep 2
+done
+
+$TEST_TARGET_SO deployment --dir $test_deployment_dir start --skip-cluster-management
+
 wait_for_pods_started
 wait_for_log_output
 sleep 1
@@ -184,8 +252,9 @@ else
     delete_cluster_exit
 fi
 
-# These volumes will be completely destroyed by the kind delete/create, because they lived inside
-# the kind container. So, unlike the bind-mount case, they will appear fresh after the restart.
+# Provisioner volumes are destroyed when PVs are deleted (--delete-volumes on stop).
+# Unlike bind-mount volumes whose data persists on the host, provisioner storage
+# is gone, so the volume appears fresh after restart.
 log_output_11=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs )
 if [[ "$log_output_11" == *"/data2 filesystem is fresh"* ]]; then
     echo "Fresh provisioner volumes test: passed"
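Both the job-completion and namespace-termination checks above are bounded polling loops: try N times, sleep between tries, give up when the budget runs out. The same pattern as a generic Python helper, purely for illustration; the test scripts themselves use plain bash `for` loops:

```python
import time

def poll_until(check, attempts=30, delay=0.0):
    """Call check() up to `attempts` times, pausing `delay` seconds between
    tries; return True as soon as it succeeds, else False after the budget."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False
```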

View File

@@ -206,7 +206,7 @@ fi
 # The deployment's pod should be scheduled onto node: worker3
 # Check that's what happened
 # Get the node onto which the stack pod has been deployed
-deployment_node=$(kubectl get pods -l app=${deployment_id} -o=jsonpath='{.items..spec.nodeName}')
+deployment_node=$(kubectl get pods -n laconic-${deployment_id} -l app=${deployment_id} -o=jsonpath='{.items..spec.nodeName}')
 expected_node=${deployment_id}-worker3
 echo "Stack pod deployed to node: ${deployment_node}"
 if [[ ${deployment_node} == ${expected_node} ]]; then

View File

@@ -1,5 +1,5 @@
 #!/usr/bin/env bash
 # TODO: handle ARM
-curl --silent -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
+curl --silent -Lo ./kind https://kind.sigs.k8s.io/dl/v0.25.0/kind-linux-amd64
 chmod +x ./kind
 mv ./kind /usr/local/bin

View File

@@ -1,5 +1,6 @@
 #!/usr/bin/env bash
 # TODO: handle ARM
-curl --silent -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
+# Pin kubectl to match Kind's default k8s version (v1.31.x)
+curl --silent -LO "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"
 chmod +x ./kubectl
 mv ./kubectl /usr/local/bin

View File

@@ -0,0 +1,53 @@
+#!/bin/bash
+# Run a test suite locally in an isolated venv.
+#
+# Usage:
+#   ./tests/scripts/run-test-local.sh <test-script>
+#
+# Examples:
+#   ./tests/scripts/run-test-local.sh tests/webapp-test/run-webapp-test.sh
+#   ./tests/scripts/run-test-local.sh tests/smoke-test/run-smoke-test.sh
++#   ./tests/scripts/run-test-local.sh tests/k8s-deploy/run-deploy-test.sh
+#
+# The script creates a temporary venv, installs shiv, builds the laconic-so
+# package, runs the requested test, then cleans up.
+
+set -euo pipefail
+
+if [ $# -lt 1 ]; then
+    echo "Usage: $0 <test-script> [args...]"
+    exit 1
+fi
+
+TEST_SCRIPT="$1"
+shift
+
+if [ ! -f "$TEST_SCRIPT" ]; then
+    echo "Error: $TEST_SCRIPT not found"
+    exit 1
+fi
+
+REPO_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
+VENV_DIR=$(mktemp -d /tmp/so-test-XXXXXX)
+
+cleanup() {
+    echo "Cleaning up venv: $VENV_DIR"
+    rm -rf "$VENV_DIR"
+}
+trap cleanup EXIT
+
+cd "$REPO_DIR"
+
+echo "==> Creating venv in $VENV_DIR"
+python3 -m venv "$VENV_DIR"
+source "$VENV_DIR/bin/activate"
+
+echo "==> Installing shiv"
+pip install -q shiv
+
+echo "==> Building laconic-so package"
+./scripts/create_build_tag_file.sh
+./scripts/build_shiv_package.sh
+
+echo "==> Running: $TEST_SCRIPT $*"
+exec "./$TEST_SCRIPT" "$@"