Merge wd-a7b: cluster-id/namespace naming, jobs, multi-cert, secrets

Combines timestamp-based cluster IDs, namespace derived from stack name,
_build_containers refactor, jobs support, multi-ingress certificates,
user-declared secrets, and label-based resource cleanup with the existing
idempotent deploy, mount propagation, and port mapping fixes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
pull/740/head
A. F. Dudley 2026-04-01 18:22:07 +00:00
commit d50bd2b6d2
22 changed files with 830 additions and 175 deletions

BIN .pebbles/pebbles.db 100644 (binary file not shown)

@@ -68,7 +68,7 @@ $ laconic-so build-npms --include <package-name> --force-rebuild
 ## deploy
-The `deploy` command group manages persistent deployments. The general workflow is `deploy init` to generate a spec file, then `deploy create` to create a deployment directory from the spec, then runtime commands like `deploy up` and `deploy down`.
+The `deploy` command group manages persistent deployments. The general workflow is `deploy init` to generate a spec file, then `deploy create` to create a deployment directory from the spec, then runtime commands like `deployment start` and `deployment stop`.
 ### deploy init
@@ -101,35 +101,91 @@ Options:
 - `--spec-file` (required): spec file to use
 - `--deployment-dir`: target directory for deployment files
 - `--update`: update an existing deployment directory, preserving data volumes and env file. Changed files are backed up with a `.bak` suffix. The deployment's `config.env` and `deployment.yml` are also preserved.
+- `--helm-chart`: generate Helm chart instead of deploying (k8s only)
 - `--network-dir`: network configuration supplied in this directory
 - `--initial-peers`: initial set of persistent peers
-### deploy up
-Start a deployment:
+## deployment
+Runtime commands for managing a created deployment. Use `--dir` to specify the deployment directory.
+### deployment start
+Start a deployment (`up` is a legacy alias):
 ```
-$ laconic-so deployment --dir <deployment-dir> up
+$ laconic-so deployment --dir <deployment-dir> start
 ```
-### deploy down
-Stop a deployment:
-```
-$ laconic-so deployment --dir <deployment-dir> down
-```
-Use `--delete-volumes` to also remove data volumes.
-### deploy ps
+Options:
+- `--stay-attached` / `--detatch-terminal`: attach to container stdout (default: detach)
+- `--skip-cluster-management` / `--perform-cluster-management`: skip kind cluster creation/teardown (default: perform management). Only affects k8s-kind deployments. Use this when multiple stacks share a single cluster.
+### deployment stop
+Stop a deployment (`down` is a legacy alias):
+```
+$ laconic-so deployment --dir <deployment-dir> stop
+```
+Options:
+- `--delete-volumes` / `--preserve-volumes`: delete data volumes on stop (default: preserve)
+- `--skip-cluster-management` / `--perform-cluster-management`: skip kind cluster teardown (default: perform management). Use this to stop a single deployment without destroying a shared cluster.
+### deployment restart
+Restart a deployment with GitOps-aware workflow. Pulls latest stack code, syncs the deployment directory from the git-tracked spec, and restarts services:
+```
+$ laconic-so deployment --dir <deployment-dir> restart
+```
+See [deployment_patterns.md](deployment_patterns.md) for the recommended GitOps workflow.
+### deployment ps
 Show running services:
 ```
 $ laconic-so deployment --dir <deployment-dir> ps
 ```
-### deploy logs
+### deployment logs
 View service logs:
 ```
 $ laconic-so deployment --dir <deployment-dir> logs
 ```
 Use `-f` to follow and `-n <count>` to tail.
+### deployment exec
+Execute a command in a running service container:
+```
+$ laconic-so deployment --dir <deployment-dir> exec <service-name> "<command>"
+```
+### deployment status
+Show deployment status:
+```
+$ laconic-so deployment --dir <deployment-dir> status
+```
+### deployment port
+Show mapped ports for a service:
+```
+$ laconic-so deployment --dir <deployment-dir> port <service-name> <port>
+```
+### deployment push-images
+Push deployment images to a registry:
+```
+$ laconic-so deployment --dir <deployment-dir> push-images
+```
+### deployment run-job
+Run a one-time job in the deployment:
+```
+$ laconic-so deployment --dir <deployment-dir> run-job <job-name>
+```
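The `--skip-cluster-management` flags documented above exist so that several stacks can share one kind cluster; a sketch of that workflow (stack directory names here are hypothetical):

```
# First deployment creates (and later tears down) the shared cluster
$ laconic-so deployment --dir stack-a-deployment start
# Subsequent deployments reuse it
$ laconic-so deployment --dir stack-b-deployment start --skip-cluster-management
# Stop one stack without destroying the shared cluster
$ laconic-so deployment --dir stack-b-deployment stop --skip-cluster-management
```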


@@ -30,7 +30,7 @@ git commit -m "Add my-stack deployment configuration"
 git push
 # On deployment server: deploy from git-tracked spec
-laconic-so deploy create \
+laconic-so --stack my-stack deploy create \
   --spec-file /path/to/operator-repo/spec.yml \
   --deployment-dir my-deployment


@@ -29,6 +29,7 @@ network_key = "network"
 http_proxy_key = "http-proxy"
 image_registry_key = "image-registry"
 configmaps_key = "configmaps"
+secrets_key = "secrets"
 resources_key = "resources"
 volumes_key = "volumes"
 security_key = "security"
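The new `secrets` key surfaces in `spec.yml`; per the `cluster_info` changes later in this commit, each named secret is attached to containers as an optional `envFrom` secret reference. A sketch of the spec shape, where the secret name is hypothetical and the exact value schema is an assumption (only the empty-mapping default is visible in this diff):

```
secrets:
  my-app-secret:
```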


@@ -0,0 +1,5 @@
+services:
+  test-job:
+    image: cerc/test-container:local
+    entrypoint: /bin/sh
+    command: ["-c", "echo 'Job completed successfully'"]
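This compose file defines the `test-job` that the test stack declares under `jobs:` elsewhere in this commit; once a deployment is created, it can be invoked by name:

```
$ laconic-so deployment --dir <deployment-dir> run-job test-job
```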


@@ -21,7 +21,7 @@ from stack_orchestrator.deploy.deploy_util import VolumeMapping, run_container_c
 from pathlib import Path
 default_spec_file_content = """config:
-  test-variable-1: test-value-1
+  test_variable_1: test-value-1
 """


@@ -7,3 +7,5 @@ containers:
 - cerc/test-container
 pods:
 - test
+jobs:
+- test-job


@@ -35,6 +35,7 @@ from stack_orchestrator.util import (
     get_dev_root_path,
     stack_is_in_deployment,
     resolve_compose_file,
+    get_job_list,
 )
 from stack_orchestrator.deploy.deployer import DeployerException
 from stack_orchestrator.deploy.deployer_factory import getDeployer
@@ -130,6 +131,7 @@ def create_deploy_context(
         compose_files=cluster_context.compose_files,
         compose_project_name=cluster_context.cluster,
         compose_env_file=cluster_context.env_file,
+        job_compose_files=cluster_context.job_compose_files,
     )
     return DeployCommandContext(stack, cluster_context, deployer)
@@ -409,7 +411,7 @@ def _make_cluster_context(ctx, stack, include, exclude, cluster, env_file):
     stack_config = get_parsed_stack_config(stack)
     if stack_config is not None:
         # TODO: syntax check the input here
-        pods_in_scope = stack_config["pods"]
+        pods_in_scope = stack_config.get("pods") or []
         cluster_config = (
             stack_config["config"] if "config" in stack_config else None
         )
@@ -483,6 +485,22 @@ def _make_cluster_context(ctx, stack, include, exclude, cluster, env_file):
     if ctx.verbose:
         print(f"files: {compose_files}")
+    # Gather job compose files (from compose-jobs/ directory in deployment)
+    job_compose_files = []
+    if deployment and stack:
+        stack_config = get_parsed_stack_config(stack)
+        if stack_config:
+            jobs = get_job_list(stack_config)
+            compose_jobs_dir = stack.joinpath("compose-jobs")
+            for job in jobs:
+                job_file_name = os.path.join(
+                    compose_jobs_dir, f"docker-compose-{job}.yml"
+                )
+                if os.path.exists(job_file_name):
+                    job_compose_files.append(job_file_name)
+    if ctx.verbose:
+        print(f"job files: {job_compose_files}")
     return ClusterContext(
         ctx,
         cluster,
@@ -491,6 +509,7 @@ def _make_cluster_context(ctx, stack, include, exclude, cluster, env_file):
         post_start_commands,
         cluster_config,
         env_file,
+        job_compose_files=job_compose_files if job_compose_files else None,
     )


@@ -29,6 +29,7 @@ class ClusterContext:
     post_start_commands: List[str]
     config: Optional[str]
     env_file: Optional[str]
+    job_compose_files: Optional[List[str]] = None
 @dataclass


@@ -34,7 +34,12 @@ def getDeployerConfigGenerator(type: str, deployment_context):
 def getDeployer(
-    type: str, deployment_context, compose_files, compose_project_name, compose_env_file
+    type: str,
+    deployment_context,
+    compose_files,
+    compose_project_name,
+    compose_env_file,
+    job_compose_files=None,
 ):
     if type == "compose" or type is None:
         return DockerDeployer(
@@ -54,6 +59,7 @@ def getDeployer(
             compose_files,
             compose_project_name,
             compose_env_file,
+            job_compose_files=job_compose_files,
         )
     else:
         print(f"ERROR: deploy-to {type} is not valid")


@@ -24,11 +24,13 @@ from typing import List, Optional
 import random
 from shutil import copy, copyfile, copytree, rmtree
 from secrets import token_hex
+import subprocess
 import sys
 import filecmp
 import tempfile
 from stack_orchestrator import constants
+from stack_orchestrator.ids import generate_id
 from stack_orchestrator.opts import opts
 from stack_orchestrator.util import (
     get_stack_path,
@@ -265,6 +267,25 @@ def call_stack_deploy_create(deployment_context, extra_args):
     imported_stack.create(deployment_context, extra_args)
+def call_stack_deploy_start(deployment_context):
+    """Call start() hooks after k8s deployments and jobs are created.
+    The start() hook receives the DeploymentContext, allowing stacks to
+    create additional k8s resources (Services, etc.) in the deployment namespace.
+    The namespace can be derived as f"laconic-{deployment_context.id}".
+    """
+    python_file_paths = _commands_plugin_paths(deployment_context.stack.name)
+    for python_file_path in python_file_paths:
+        if python_file_path.exists():
+            spec = util.spec_from_file_location("commands", python_file_path)
+            if spec is None or spec.loader is None:
+                continue
+            imported_stack = util.module_from_spec(spec)
+            spec.loader.exec_module(imported_stack)
+            if _has_method(imported_stack, "start"):
+                imported_stack.start(deployment_context)
 # Inspect the pod yaml to find config files referenced in subdirectories
 # other than the one associated with the pod
 def _find_extra_config_dirs(parsed_pod_file, pod):
@@ -477,6 +498,9 @@ def init_operation(
     spec_file_content["volumes"] = {**volume_descriptors, **orig_volumes}
     if configmap_descriptors:
         spec_file_content["configmaps"] = configmap_descriptors
+    if "k8s" in deployer_type:
+        if "secrets" not in spec_file_content:
+            spec_file_content["secrets"] = {}
     if opts.o.debug:
         print(
@@ -491,7 +515,9 @@ def init_operation(
 GENERATE_TOKEN_PATTERN = re.compile(r"\$generate:(\w+):(\d+)\$")
-def _generate_and_store_secrets(config_vars: dict, deployment_name: str):
+def _generate_and_store_secrets(
+    config_vars: dict, deployment_name: str, namespace: str = "default"
+):
     """Generate secrets for $generate:...$ tokens and store in K8s Secret.
     Called by `deploy create` - generates fresh secrets and stores them.
@@ -533,7 +559,6 @@ def _generate_and_store_secrets(config_vars: dict, deployment_name: str):
     v1 = client.CoreV1Api()
     secret_name = f"{deployment_name}-generated-secrets"
-    namespace = "default"
     secret_data = {k: base64.b64encode(v.encode()).decode() for k, v in secrets.items()}
     k8s_secret = client.V1Secret(
@@ -637,7 +662,10 @@ def create_registry_secret(spec: Spec, deployment_name: str) -> Optional[str]:
 def _write_config_file(
-    spec_file: Path, config_env_file: Path, deployment_name: Optional[str] = None
+    spec_file: Path,
+    config_env_file: Path,
+    deployment_name: Optional[str] = None,
+    namespace: str = "default",
 ):
     """Write spec.yml config: entries to config.env.
@@ -661,7 +689,7 @@ def _write_config_file(
         for v in config_vars.values()
     )
     if has_generate_tokens:
-        _generate_and_store_secrets(config_vars, deployment_name)
+        _generate_and_store_secrets(config_vars, deployment_name, namespace)
     # Write non-secret config to config.env (exclude $generate:...$ tokens)
     with open(config_env_file, "w") as output_file:
@@ -687,9 +715,31 @@ def _copy_files_to_directory(file_paths: List[Path], directory: Path):
         copy(path, os.path.join(directory, os.path.basename(path)))
+def _get_existing_kind_cluster() -> Optional[str]:
+    """Return the name of an existing Kind cluster, or None."""
+    try:
+        result = subprocess.run(
+            ["kind", "get", "clusters"],
+            capture_output=True,
+            text=True,
+            timeout=10,
+        )
+        if result.returncode == 0:
+            clusters = [
+                c.strip() for c in result.stdout.strip().splitlines() if c.strip()
+            ]
+            if clusters:
+                return clusters[0]
+    except (FileNotFoundError, subprocess.TimeoutExpired):
+        pass
+    return None
 def _create_deployment_file(deployment_dir: Path, stack_source: Optional[Path] = None):
     deployment_file_path = deployment_dir.joinpath(constants.deployment_file_name)
-    cluster = f"{constants.cluster_name_prefix}{token_hex(8)}"
+    # Reuse existing Kind cluster if one exists, otherwise generate a timestamp-based ID
+    existing = _get_existing_kind_cluster()
+    cluster = existing if existing else generate_id("laconic")
     deployment_content = {constants.cluster_id_key: cluster}
     if stack_source:
         deployment_content["stack-source"] = str(stack_source)
@@ -943,8 +993,13 @@ def _write_deployment_files(
     # Use stack_name as deployment_name for K8s secret naming
     # Extract just the name part if stack_name is a path ("path/to/stack" -> "stack")
     deployment_name = Path(stack_name).name.replace("_", "-")
+    # Derive namespace from spec or stack name, matching deploy_k8s logic
+    namespace = parsed_spec.get_namespace() or f"laconic-{deployment_name}"
     _write_config_file(
-        spec_file, target_dir.joinpath(constants.config_file_name), deployment_name
+        spec_file,
+        target_dir.joinpath(constants.config_file_name),
+        deployment_name,
+        namespace=namespace,
     )
     # Copy any k8s config file into the target dir
@@ -994,17 +1049,7 @@ def _write_deployment_files(
             script_paths = get_pod_script_paths(parsed_stack, pod)
             _copy_files_to_directory(script_paths, destination_script_dir)
-        if parsed_spec.is_kubernetes_deployment():
-            for configmap in parsed_spec.get_configmaps():
-                source_config_dir = resolve_config_dir(stack_name, configmap)
-                if os.path.exists(source_config_dir):
-                    destination_config_dir = target_dir.joinpath(
-                        "configmaps", configmap
-                    )
-                    copytree(
-                        source_config_dir, destination_config_dir, dirs_exist_ok=True
-                    )
-        else:
+        if not parsed_spec.is_kubernetes_deployment():
             # TODO:
             # This is odd - looks up config dir that matches a volume name,
             # then copies as a mount dir?
@@ -1026,9 +1071,18 @@ def _write_deployment_files(
                     dirs_exist_ok=True,
                 )
-    # Copy the job files into the target dir (for Docker deployments)
+    # Copy configmap directories for k8s deployments (outside the pod loop
+    # so this works for jobs-only stacks too)
+    if parsed_spec.is_kubernetes_deployment():
+        for configmap in parsed_spec.get_configmaps():
+            source_config_dir = resolve_config_dir(stack_name, configmap)
+            if os.path.exists(source_config_dir):
+                destination_config_dir = target_dir.joinpath("configmaps", configmap)
+                copytree(source_config_dir, destination_config_dir, dirs_exist_ok=True)
+    # Copy the job files into the target dir
     jobs = get_job_list(parsed_stack)
-    if jobs and not parsed_spec.is_kubernetes_deployment():
+    if jobs:
         destination_compose_jobs_dir = target_dir.joinpath("compose-jobs")
         os.makedirs(destination_compose_jobs_dir, exist_ok=True)
         for job in jobs:

@ -72,15 +72,24 @@ def to_k8s_resource_requirements(resources: Resources) -> client.V1ResourceRequi
class ClusterInfo: class ClusterInfo:
parsed_pod_yaml_map: Any parsed_pod_yaml_map: Any
parsed_job_yaml_map: Any
image_set: Set[str] = set() image_set: Set[str] = set()
app_name: str app_name: str
stack_name: str
environment_variables: DeployEnvVars environment_variables: DeployEnvVars
spec: Spec spec: Spec
def __init__(self) -> None: def __init__(self) -> None:
pass self.parsed_job_yaml_map = {}
def int(self, pod_files: List[str], compose_env_file, deployment_name, spec: Spec): def int(
self,
pod_files: List[str],
compose_env_file,
deployment_name,
spec: Spec,
stack_name="",
):
self.parsed_pod_yaml_map = parsed_pod_files_map_from_file_names(pod_files) self.parsed_pod_yaml_map = parsed_pod_files_map_from_file_names(pod_files)
# Find the set of images in the pods # Find the set of images in the pods
self.image_set = images_for_deployment(pod_files) self.image_set = images_for_deployment(pod_files)
@ -90,10 +99,23 @@ class ClusterInfo:
} }
self.environment_variables = DeployEnvVars(env_vars) self.environment_variables = DeployEnvVars(env_vars)
self.app_name = deployment_name self.app_name = deployment_name
self.stack_name = stack_name
self.spec = spec self.spec = spec
if opts.o.debug: if opts.o.debug:
print(f"Env vars: {self.environment_variables.map}") print(f"Env vars: {self.environment_variables.map}")
def init_jobs(self, job_files: List[str]):
"""Initialize parsed job YAML map from job compose files."""
self.parsed_job_yaml_map = parsed_pod_files_map_from_file_names(job_files)
if opts.o.debug:
print(f"Parsed job yaml map: {self.parsed_job_yaml_map}")
def _all_named_volumes(self) -> list:
"""Return named volumes from both pod and job compose files."""
volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map)
volumes.extend(named_volumes_from_pod_files(self.parsed_job_yaml_map))
return volumes
def get_nodeports(self): def get_nodeports(self):
nodeports = [] nodeports = []
for pod_name in self.parsed_pod_yaml_map: for pod_name in self.parsed_pod_yaml_map:
@ -146,33 +168,33 @@ class ClusterInfo:
return nodeports return nodeports
def get_ingress( def get_ingress(
self, use_tls=False, certificate=None, cluster_issuer="letsencrypt-prod" self, use_tls=False, certificates=None, cluster_issuer="letsencrypt-prod"
): ):
# No ingress for a deployment that has no http-proxy defined, for now # No ingress for a deployment that has no http-proxy defined, for now
http_proxy_info_list = self.spec.get_http_proxy() http_proxy_info_list = self.spec.get_http_proxy()
ingress = None ingress = None
if http_proxy_info_list: if http_proxy_info_list:
# TODO: handle multiple definitions rules = []
http_proxy_info = http_proxy_info_list[0] tls = [] if use_tls else None
for http_proxy_info in http_proxy_info_list:
if opts.o.debug: if opts.o.debug:
print(f"http-proxy: {http_proxy_info}") print(f"http-proxy: {http_proxy_info}")
# TODO: good enough parsing for webapp deployment for now
host_name = http_proxy_info["host-name"] host_name = http_proxy_info["host-name"]
rules = [] certificate = (certificates or {}).get(host_name)
tls = (
[ if use_tls:
tls.append(
client.V1IngressTLS( client.V1IngressTLS(
hosts=certificate["spec"]["dnsNames"] hosts=certificate["spec"]["dnsNames"]
if certificate if certificate
else [host_name], else [host_name],
secret_name=certificate["spec"]["secretName"] secret_name=certificate["spec"]["secretName"]
if certificate if certificate
else f"{self.app_name}-tls", else f"{self.app_name}-{host_name}-tls",
) )
]
if use_tls
else None
) )
paths = [] paths = []
for route in http_proxy_info["routes"]: for route in http_proxy_info["routes"]:
path = route["path"] path = route["path"]
@ -190,22 +212,26 @@ class ClusterInfo:
# TODO: this looks wrong # TODO: this looks wrong
name=f"{self.app_name}-service", name=f"{self.app_name}-service",
# TODO: pull port number from the service # TODO: pull port number from the service
port=client.V1ServiceBackendPort(number=proxy_to_port), port=client.V1ServiceBackendPort(
number=proxy_to_port
),
) )
), ),
) )
) )
rules.append( rules.append(
client.V1IngressRule( client.V1IngressRule(
host=host_name, http=client.V1HTTPIngressRuleValue(paths=paths) host=host_name,
http=client.V1HTTPIngressRuleValue(paths=paths),
) )
) )
spec = client.V1IngressSpec(tls=tls, rules=rules) spec = client.V1IngressSpec(tls=tls, rules=rules)
ingress_annotations = { ingress_annotations = {
"kubernetes.io/ingress.class": "caddy", "kubernetes.io/ingress.class": "caddy",
} }
if not certificate: if not certificates:
ingress_annotations["cert-manager.io/cluster-issuer"] = cluster_issuer ingress_annotations["cert-manager.io/cluster-issuer"] = cluster_issuer
ingress = client.V1Ingress( ingress = client.V1Ingress(
@ -257,20 +283,25 @@ class ClusterInfo:
def get_pvcs(self): def get_pvcs(self):
result = [] result = []
spec_volumes = self.spec.get_volumes() spec_volumes = self.spec.get_volumes()
named_volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map) named_volumes = self._all_named_volumes()
resources = self.spec.get_volume_resources() global_resources = self.spec.get_volume_resources()
if not resources: if not global_resources:
resources = DEFAULT_VOLUME_RESOURCES global_resources = DEFAULT_VOLUME_RESOURCES
if opts.o.debug: if opts.o.debug:
print(f"Spec Volumes: {spec_volumes}") print(f"Spec Volumes: {spec_volumes}")
print(f"Named Volumes: {named_volumes}") print(f"Named Volumes: {named_volumes}")
print(f"Resources: {resources}") print(f"Resources: {global_resources}")
for volume_name, volume_path in spec_volumes.items(): for volume_name, volume_path in spec_volumes.items():
if volume_name not in named_volumes: if volume_name not in named_volumes:
if opts.o.debug: if opts.o.debug:
print(f"{volume_name} not in pod files") print(f"{volume_name} not in pod files")
continue continue
# Per-volume resources override global, which overrides default.
vol_resources = (
self.spec.get_volume_resources_for(volume_name) or global_resources
)
labels = { labels = {
"app": self.app_name, "app": self.app_name,
"volume-label": f"{self.app_name}-{volume_name}", "volume-label": f"{self.app_name}-{volume_name}",
@ -286,7 +317,7 @@ class ClusterInfo:
spec = client.V1PersistentVolumeClaimSpec( spec = client.V1PersistentVolumeClaimSpec(
access_modes=["ReadWriteOnce"], access_modes=["ReadWriteOnce"],
storage_class_name=storage_class_name, storage_class_name=storage_class_name,
resources=to_k8s_resource_requirements(resources), resources=to_k8s_resource_requirements(vol_resources),
volume_name=k8s_volume_name, volume_name=k8s_volume_name,
) )
pvc = client.V1PersistentVolumeClaim( pvc = client.V1PersistentVolumeClaim(
@ -301,7 +332,7 @@ class ClusterInfo:
def get_configmaps(self): def get_configmaps(self):
result = [] result = []
spec_configmaps = self.spec.get_configmaps() spec_configmaps = self.spec.get_configmaps()
named_volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map) named_volumes = self._all_named_volumes()
for cfg_map_name, cfg_map_path in spec_configmaps.items(): for cfg_map_name, cfg_map_path in spec_configmaps.items():
if cfg_map_name not in named_volumes: if cfg_map_name not in named_volumes:
if opts.o.debug: if opts.o.debug:
@ -337,10 +368,10 @@ class ClusterInfo:
def get_pvs(self): def get_pvs(self):
result = [] result = []
spec_volumes = self.spec.get_volumes() spec_volumes = self.spec.get_volumes()
named_volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map) named_volumes = self._all_named_volumes()
resources = self.spec.get_volume_resources() global_resources = self.spec.get_volume_resources()
if not resources: if not global_resources:
resources = DEFAULT_VOLUME_RESOURCES global_resources = DEFAULT_VOLUME_RESOURCES
for volume_name, volume_path in spec_volumes.items(): for volume_name, volume_path in spec_volumes.items():
# We only need to create a volume if it is fully qualified HostPath. # We only need to create a volume if it is fully qualified HostPath.
# Otherwise, we create the PVC and expect the node to allocate the volume # Otherwise, we create the PVC and expect the node to allocate the volume
@ -369,6 +400,9 @@ class ClusterInfo:
) )
continue continue
vol_resources = (
self.spec.get_volume_resources_for(volume_name) or global_resources
)
if self.spec.is_kind_deployment(): if self.spec.is_kind_deployment():
host_path = client.V1HostPathVolumeSource( host_path = client.V1HostPathVolumeSource(
path=get_kind_pv_bind_mount_path( path=get_kind_pv_bind_mount_path(
@ -382,7 +416,7 @@ class ClusterInfo:
spec = client.V1PersistentVolumeSpec( spec = client.V1PersistentVolumeSpec(
storage_class_name="manual", storage_class_name="manual",
access_modes=["ReadWriteOnce"], access_modes=["ReadWriteOnce"],
capacity=to_k8s_resource_requirements(resources).requests, capacity=to_k8s_resource_requirements(vol_resources).requests,
host_path=host_path, host_path=host_path,
) )
pv = client.V1PersistentVolume( pv = client.V1PersistentVolume(
@ -428,15 +462,29 @@ class ClusterInfo:
# 3. Fall back to spec.yml global (already resolved with DEFAULT fallback) # 3. Fall back to spec.yml global (already resolved with DEFAULT fallback)
return global_resources return global_resources
# TODO: put things like image pull policy into an object-scope struct def _build_containers(
def get_deployment(self, image_pull_policy: Optional[str] = None): self,
parsed_yaml_map: Any,
image_pull_policy: Optional[str] = None,
) -> tuple:
"""Build k8s container specs from parsed compose YAML.
Returns a tuple of (containers, init_containers, services, volumes)
where:
- containers: list of V1Container objects
- init_containers: list of V1Container objects for init containers
(compose services with label ``laconic.init-container: "true"``)
- services: the last services dict processed (used for annotations/labels)
- volumes: list of V1Volume objects
"""
containers = [] containers = []
init_containers = []
services = {} services = {}
global_resources = self.spec.get_container_resources() global_resources = self.spec.get_container_resources()
if not global_resources: if not global_resources:
global_resources = DEFAULT_CONTAINER_RESOURCES global_resources = DEFAULT_CONTAINER_RESOURCES
for pod_name in self.parsed_pod_yaml_map: for pod_name in parsed_yaml_map:
pod = self.parsed_pod_yaml_map[pod_name] pod = parsed_yaml_map[pod_name]
services = pod["services"] services = pod["services"]
for service_name in services: for service_name in services:
container_name = service_name container_name = service_name
@ -492,9 +540,7 @@ class ClusterInfo:
if self.spec.get_image_registry() is not None if self.spec.get_image_registry() is not None
else image else image
) )
volume_mounts = volume_mounts_for_service( volume_mounts = volume_mounts_for_service(parsed_yaml_map, service_name)
self.parsed_pod_yaml_map, service_name
)
# Handle command/entrypoint from compose file # Handle command/entrypoint from compose file
# In docker-compose: entrypoint -> k8s command, command -> k8s args # In docker-compose: entrypoint -> k8s command, command -> k8s args
container_command = None container_command = None
@ -517,6 +563,16 @@ class ClusterInfo:
) )
) )
] ]
# Mount user-declared secrets from spec.yml
for user_secret_name in self.spec.get_secrets():
env_from.append(
client.V1EnvFromSource(
secret_ref=client.V1SecretEnvSource(
name=user_secret_name,
optional=True,
)
)
)
container_resources = self._resolve_container_resources( container_resources = self._resolve_container_resources(
container_name, service_info, global_resources container_name, service_info, global_resources
) )
@ -532,6 +588,9 @@ class ClusterInfo:
volume_mounts=volume_mounts, volume_mounts=volume_mounts,
security_context=client.V1SecurityContext( security_context=client.V1SecurityContext(
privileged=self.spec.get_privileged(), privileged=self.spec.get_privileged(),
run_as_user=int(service_info["user"])
if "user" in service_info
else None,
capabilities=client.V1Capabilities( capabilities=client.V1Capabilities(
add=self.spec.get_capabilities() add=self.spec.get_capabilities()
) )
@ -540,9 +599,28 @@ class ClusterInfo:
), ),
resources=to_k8s_resource_requirements(container_resources), resources=to_k8s_resource_requirements(container_resources),
) )
# Services with laconic.init-container label become
# k8s init containers instead of regular containers.
svc_labels = service_info.get("labels", {})
if isinstance(svc_labels, list):
# docker-compose labels can be a list of "key=value"
svc_labels = dict(item.split("=", 1) for item in svc_labels)
is_init = str(svc_labels.get("laconic.init-container", "")).lower() in (
"true",
"1",
"yes",
)
if is_init:
init_containers.append(container)
else:
containers.append(container) containers.append(container)
-volumes = volumes_for_pod_files(
-self.parsed_pod_yaml_map, self.spec, self.app_name
-)
+volumes = volumes_for_pod_files(parsed_yaml_map, self.spec, self.app_name)
+return containers, init_containers, services, volumes

# TODO: put things like image pull policy into an object-scope struct
def get_deployment(self, image_pull_policy: Optional[str] = None):
containers, init_containers, services, volumes = self._build_containers(
self.parsed_pod_yaml_map, image_pull_policy
)
registry_config = self.spec.get_image_registry_config()
if registry_config:
@@ -553,6 +631,8 @@ class ClusterInfo:
annotations = None
labels = {"app": self.app_name}
if self.stack_name:
labels["app.kubernetes.io/stack"] = self.stack_name
affinity = None
tolerations = None
@@ -610,6 +690,7 @@ class ClusterInfo:
metadata=client.V1ObjectMeta(annotations=annotations, labels=labels),
spec=client.V1PodSpec(
containers=containers,
init_containers=init_containers or None,
image_pull_secrets=image_pull_secrets,
volumes=volumes,
affinity=affinity,
@@ -628,7 +709,99 @@ class ClusterInfo:
deployment = client.V1Deployment(
api_version="apps/v1",
kind="Deployment",
-metadata=client.V1ObjectMeta(name=f"{self.app_name}-deployment"),
+metadata=client.V1ObjectMeta(
+name=f"{self.app_name}-deployment",
+labels={
+"app": self.app_name,
+**(
+{"app.kubernetes.io/stack": self.stack_name}
+if self.stack_name
+else {}
+),
+},
+),
spec=spec,
)
return deployment
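The init-container detection in `_build_containers` above hinges on normalizing compose labels, which docker-compose allows as either a mapping or a list of `key=value` strings. A standalone sketch of that normalization (function names here are illustrative, not part of the codebase):

```python
def normalize_labels(labels):
    """Normalize docker-compose service labels to a dict.

    Compose accepts either a mapping ({"k": "v"}) or a list (["k=v"]).
    """
    if isinstance(labels, list):
        return dict(item.split("=", 1) for item in labels)
    return dict(labels or {})


def is_init_container(service_info: dict) -> bool:
    # Mirrors the laconic.init-container check above.
    labels = normalize_labels(service_info.get("labels", {}))
    flag = str(labels.get("laconic.init-container", "")).lower()
    return flag in ("true", "1", "yes")


assert is_init_container({"labels": ["laconic.init-container=true"]})
assert not is_init_container({"labels": {"laconic.init-container": "no"}})
```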
def get_jobs(self, image_pull_policy: Optional[str] = None) -> List[client.V1Job]:
"""Build k8s Job objects from parsed job compose files.
Each job compose file produces a V1Job with:
- restartPolicy: Never
- backoffLimit: 0
- Name: {app_name}-job-{job_name}
"""
if not self.parsed_job_yaml_map:
return []
jobs = []
registry_config = self.spec.get_image_registry_config()
if registry_config:
secret_name = f"{self.app_name}-registry"
image_pull_secrets = [client.V1LocalObjectReference(name=secret_name)]
else:
image_pull_secrets = []
for job_file in self.parsed_job_yaml_map:
# Build containers for this single job file
single_job_map = {job_file: self.parsed_job_yaml_map[job_file]}
containers, init_containers, _services, volumes = self._build_containers(
single_job_map, image_pull_policy
)
# Derive job name from file path: docker-compose-<name>.yml -> <name>
base = os.path.basename(job_file)
# Strip docker-compose- prefix and .yml suffix
job_name = base
if job_name.startswith("docker-compose-"):
job_name = job_name[len("docker-compose-") :]
if job_name.endswith(".yml"):
job_name = job_name[: -len(".yml")]
elif job_name.endswith(".yaml"):
job_name = job_name[: -len(".yaml")]
# Use a distinct app label for job pods so they don't get
# picked up by pods_in_deployment() which queries app={app_name}.
pod_labels = {
"app": f"{self.app_name}-job",
**(
{"app.kubernetes.io/stack": self.stack_name}
if self.stack_name
else {}
),
}
template = client.V1PodTemplateSpec(
metadata=client.V1ObjectMeta(labels=pod_labels),
spec=client.V1PodSpec(
containers=containers,
init_containers=init_containers or None,
image_pull_secrets=image_pull_secrets,
volumes=volumes,
restart_policy="Never",
),
)
job_spec = client.V1JobSpec(
template=template,
backoff_limit=0,
)
job_labels = {
"app": self.app_name,
**(
{"app.kubernetes.io/stack": self.stack_name}
if self.stack_name
else {}
),
}
job = client.V1Job(
api_version="batch/v1",
kind="Job",
metadata=client.V1ObjectMeta(
name=f"{self.app_name}-job-{job_name}",
labels=job_labels,
),
spec=job_spec,
)
jobs.append(job)
return jobs
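The job-name derivation in `get_jobs` is a pure string transform and can be sketched (and checked) on its own; the helper name below is illustrative, not part of the codebase:

```python
import os


def job_name_from_compose_file(job_file: str) -> str:
    """docker-compose-<name>.yml -> <name>, mirroring get_jobs above."""
    name = os.path.basename(job_file)
    if name.startswith("docker-compose-"):
        name = name[len("docker-compose-"):]
    for suffix in (".yml", ".yaml"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
            break
    return name


assert job_name_from_compose_file("/stacks/x/docker-compose-test-job.yml") == "test-job"
assert job_name_from_compose_file("docker-compose-migrate.yaml") == "migrate"
```

A Job built from `docker-compose-test-job.yml` in an app named `abc123` is therefore created as `abc123-job-test-job`, which is the name the smoke test queries with kubectl.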

View File

@@ -95,6 +95,7 @@ class K8sDeployer(Deployer):
type: str
core_api: client.CoreV1Api
apps_api: client.AppsV1Api
batch_api: client.BatchV1Api
networking_api: client.NetworkingV1Api
k8s_namespace: str
kind_cluster_name: str
@@ -110,6 +111,7 @@ class K8sDeployer(Deployer):
compose_files,
compose_project_name,
compose_env_file,
job_compose_files=None,
) -> None:
self.type = type
self.skip_cluster_management = False
@@ -120,19 +122,32 @@ class K8sDeployer(Deployer):
return
self.deployment_dir = deployment_context.deployment_dir
self.deployment_context = deployment_context
-self.kind_cluster_name = compose_project_name
-# Use deployment-specific namespace for resource isolation and easy cleanup
-self.k8s_namespace = f"laconic-{compose_project_name}"
+self.kind_cluster_name = (
+deployment_context.spec.get_kind_cluster_name() or compose_project_name
+)
+# stack.name may be an absolute path (from spec "stack:" key after
+# path resolution). Extract just the directory basename for labels.
+raw_name = deployment_context.stack.name if deployment_context else ""
+stack_name = Path(raw_name).name if raw_name else ""
+# Use spec namespace if provided, otherwise derive from stack name
+self.k8s_namespace = deployment_context.spec.get_namespace() or (
+f"laconic-{stack_name}" if stack_name else f"laconic-{compose_project_name}"
+)
self.cluster_info = ClusterInfo()
self.cluster_info.int(
compose_files,
compose_env_file,
compose_project_name,
deployment_context.spec,
stack_name=stack_name,
)
# Initialize job compose files if provided
if job_compose_files:
self.cluster_info.init_jobs(job_compose_files)
if opts.o.debug:
print(f"Deployment dir: {deployment_context.deployment_dir}")
print(f"Compose files: {compose_files}")
print(f"Job compose files: {job_compose_files}")
print(f"Project name: {compose_project_name}")
print(f"Env file: {compose_env_file}")
print(f"Type: {type}")
@@ -150,6 +165,7 @@ class K8sDeployer(Deployer):
self.core_api = client.CoreV1Api()
self.networking_api = client.NetworkingV1Api()
self.apps_api = client.AppsV1Api()
self.batch_api = client.BatchV1Api()
self.custom_obj_api = client.CustomObjectsApi()

def _ensure_namespace(self):
@@ -310,6 +326,94 @@ class K8sDeployer(Deployer):
else:
raise
def _delete_resources_by_label(self, label_selector: str, delete_volumes: bool):
"""Delete only this stack's resources from a shared namespace."""
ns = self.k8s_namespace
if opts.o.dry_run:
print(f"Dry run: would delete resources with {label_selector} in {ns}")
return
# Deployments
try:
deps = self.apps_api.list_namespaced_deployment(
namespace=ns, label_selector=label_selector
)
for dep in deps.items:
print(f"Deleting Deployment {dep.metadata.name}")
self.apps_api.delete_namespaced_deployment(
name=dep.metadata.name, namespace=ns
)
except ApiException as e:
_check_delete_exception(e)
# Jobs
try:
jobs = self.batch_api.list_namespaced_job(
namespace=ns, label_selector=label_selector
)
for job in jobs.items:
print(f"Deleting Job {job.metadata.name}")
self.batch_api.delete_namespaced_job(
name=job.metadata.name,
namespace=ns,
body=client.V1DeleteOptions(propagation_policy="Background"),
)
except ApiException as e:
_check_delete_exception(e)
# Services (NodePorts created by SO)
try:
svcs = self.core_api.list_namespaced_service(
namespace=ns, label_selector=label_selector
)
for svc in svcs.items:
print(f"Deleting Service {svc.metadata.name}")
self.core_api.delete_namespaced_service(
name=svc.metadata.name, namespace=ns
)
except ApiException as e:
_check_delete_exception(e)
# Ingresses
try:
ings = self.networking_api.list_namespaced_ingress(
namespace=ns, label_selector=label_selector
)
for ing in ings.items:
print(f"Deleting Ingress {ing.metadata.name}")
self.networking_api.delete_namespaced_ingress(
name=ing.metadata.name, namespace=ns
)
except ApiException as e:
_check_delete_exception(e)
# ConfigMaps
try:
cms = self.core_api.list_namespaced_config_map(
namespace=ns, label_selector=label_selector
)
for cm in cms.items:
print(f"Deleting ConfigMap {cm.metadata.name}")
self.core_api.delete_namespaced_config_map(
name=cm.metadata.name, namespace=ns
)
except ApiException as e:
_check_delete_exception(e)
# PVCs (only if --delete-volumes)
if delete_volumes:
try:
pvcs = self.core_api.list_namespaced_persistent_volume_claim(
namespace=ns, label_selector=label_selector
)
for pvc in pvcs.items:
print(f"Deleting PVC {pvc.metadata.name}")
self.core_api.delete_namespaced_persistent_volume_claim(
name=pvc.metadata.name, namespace=ns
)
except ApiException as e:
_check_delete_exception(e)
def _create_volume_data(self):
# Create the host-path-mounted PVs for this deployment
pvs = self.cluster_info.get_pvs()
@@ -372,6 +476,11 @@ class K8sDeployer(Deployer):
def _create_deployment(self):
"""Create the k8s Deployment resource (which starts pods)."""
# Skip if there are no pods to deploy (e.g. jobs-only stacks)
if not self.cluster_info.parsed_pod_yaml_map:
if opts.o.debug:
print("No pods defined, skipping Deployment creation")
return
deployment = self.cluster_info.get_deployment(
image_pull_policy=None if self.is_kind() else "Always"
)
@@ -380,6 +489,26 @@ class K8sDeployer(Deployer):
if not opts.o.dry_run:
self._ensure_deployment(deployment)
def _create_jobs(self):
# Process job compose files into k8s Jobs
jobs = self.cluster_info.get_jobs(
image_pull_policy=None if self.is_kind() else "Always"
)
for job in jobs:
if opts.o.debug:
print(f"Sending this job: {job}")
if not opts.o.dry_run:
job_resp = self.batch_api.create_namespaced_job(
body=job, namespace=self.k8s_namespace
)
if opts.o.debug:
print("Job created:")
if job_resp.metadata:
print(
f" {job_resp.metadata.namespace} "
f"{job_resp.metadata.name}"
)
def _find_certificate_for_host_name(self, host_name):
all_certificates = self.custom_obj_api.list_namespaced_custom_object(
group="cert-manager.io",
@@ -478,16 +607,19 @@ class K8sDeployer(Deployer):
http_proxy_info = self.cluster_info.spec.get_http_proxy()
use_tls = http_proxy_info and not self.is_kind()
-certificate = (
-self._find_certificate_for_host_name(http_proxy_info[0]["host-name"])
-if use_tls
-else None
-)
-if opts.o.debug and certificate:
-print(f"Using existing certificate: {certificate}")
+certificates = None
+if use_tls:
+certificates = {}
+for proxy in http_proxy_info:
+host_name = proxy["host-name"]
+cert = self._find_certificate_for_host_name(host_name)
+if cert:
+certificates[host_name] = cert
+if opts.o.debug:
+print(f"Using existing certificate for {host_name}: {cert}")
ingress = self.cluster_info.get_ingress(
-use_tls=use_tls, certificate=certificate
+use_tls=use_tls, certificates=certificates
)
if ingress:
if opts.o.debug:
@@ -515,16 +647,24 @@ class K8sDeployer(Deployer):
self._create_infrastructure()
print("Cluster infrastructure prepared (no pods started).")
# Call start() hooks — stacks can create additional k8s resources
if self.deployment_context:
from stack_orchestrator.deploy.deployment_create import (
call_stack_deploy_start,
)
call_stack_deploy_start(self.deployment_context)

def down(self, timeout, volumes, skip_cluster_management):
self.skip_cluster_management = skip_cluster_management
self.connect_api()
app_label = f"app={self.cluster_info.app_name}"
# PersistentVolumes are cluster-scoped (not namespaced), so delete by label
if volumes:
try:
-pvs = self.core_api.list_persistent_volume(
-label_selector=f"app={self.cluster_info.app_name}"
-)
+pvs = self.core_api.list_persistent_volume(label_selector=app_label)
for pv in pvs.items:
if opts.o.debug:
print(f"Deleting PV: {pv.metadata.name}")
@@ -536,8 +676,13 @@ class K8sDeployer(Deployer):
if opts.o.debug:
print(f"Error listing PVs: {e}")
-# Delete the deployment namespace - this cascades to all namespaced resources
-# (PVCs, ConfigMaps, Deployments, Services, Ingresses, etc.)
+# When namespace is explicitly set in the spec, it may be shared with
+# other stacks — delete only this stack's resources by label.
+# Otherwise the namespace is owned by this deployment, delete it entirely.
+shared_namespace = self.deployment_context.spec.get_namespace() is not None
+if shared_namespace:
+self._delete_resources_by_label(app_label, volumes)
+else:
self._delete_namespace()
if self.is_kind() and not self.skip_cluster_management:
@@ -663,14 +808,18 @@ class K8sDeployer(Deployer):
def logs(self, services, tail, follow, stream):
self.connect_api()
-pods = pods_in_deployment(self.core_api, self.cluster_info.app_name, namespace=self.k8s_namespace)
+pods = pods_in_deployment(
+self.core_api, self.cluster_info.app_name, namespace=self.k8s_namespace
+)
if len(pods) > 1:
print("Warning: more than one pod in the deployment")
if len(pods) == 0:
log_data = "******* Pods not running ********\n"
else:
k8s_pod_name = pods[0]
-containers = containers_in_pod(self.core_api, k8s_pod_name, namespace=self.k8s_namespace)
+containers = containers_in_pod(
+self.core_api, k8s_pod_name, namespace=self.k8s_namespace
+)
# If pod not started, logs request below will throw an exception
try:
log_data = ""
@@ -688,6 +837,10 @@ class K8sDeployer(Deployer):
return log_stream_from_string(log_data)

def update_envs(self):
if not self.cluster_info.parsed_pod_yaml_map:
if opts.o.debug:
print("No pods defined, skipping update")
return
self.connect_api()
ref_deployment = self.cluster_info.get_deployment()
if not ref_deployment or not ref_deployment.metadata:
@@ -748,16 +901,10 @@ class K8sDeployer(Deployer):
def run_job(self, job_name: str, helm_release: Optional[str] = None):
if not opts.o.dry_run:
-from stack_orchestrator.deploy.k8s.helm.job_runner import run_helm_job
# Check if this is a helm-based deployment
chart_dir = self.deployment_dir / "chart"
-if not chart_dir.exists():
-# TODO: Implement job support for compose-based K8s deployments
-raise Exception(
-f"Job support is only available for helm-based "
-f"deployments. Chart directory not found: {chart_dir}"
-)
+if chart_dir.exists():
+from stack_orchestrator.deploy.k8s.helm.job_runner import run_helm_job
# Run the job using the helm job runner
run_helm_job(
@@ -768,6 +915,29 @@ class K8sDeployer(Deployer):
timeout=600,
verbose=opts.o.verbose,
)
else:
# Non-Helm path: create job from ClusterInfo
self.connect_api()
jobs = self.cluster_info.get_jobs(
image_pull_policy=None if self.is_kind() else "Always"
)
# Find the matching job by name
target_name = f"{self.cluster_info.app_name}-job-{job_name}"
matched_job = None
for job in jobs:
if job.metadata and job.metadata.name == target_name:
matched_job = job
break
if matched_job is None:
raise Exception(
f"Job '{job_name}' not found. Available jobs: "
f"{[j.metadata.name for j in jobs if j.metadata]}"
)
if opts.o.debug:
print(f"Creating job: {target_name}")
self.batch_api.create_namespaced_job(
body=matched_job, namespace=self.k8s_namespace
)
def is_kind(self):
return self.type == "k8s-kind"

View File

@@ -409,7 +409,9 @@ def load_images_into_kind(kind_cluster_name: str, image_set: Set[str]):
raise DeployerException(f"kind load docker-image failed: {result}")

-def pods_in_deployment(core_api: client.CoreV1Api, deployment_name: str, namespace: str = "default"):
+def pods_in_deployment(
+core_api: client.CoreV1Api, deployment_name: str, namespace: str = "default"
+):
pods = []
pod_response = core_api.list_namespaced_pod(
namespace=namespace, label_selector=f"app={deployment_name}"
@@ -422,7 +424,9 @@ def pods_in_deployment(core_api: client.CoreV1Api, deployment_name: str, namespace: str = "default"):
return pods

-def containers_in_pod(core_api: client.CoreV1Api, pod_name: str, namespace: str = "default") -> List[str]:
+def containers_in_pod(
+core_api: client.CoreV1Api, pod_name: str, namespace: str = "default"
+) -> List[str]:
containers: List[str] = []
pod_response = cast(
client.V1Pod, core_api.read_namespaced_pod(pod_name, namespace=namespace)

View File

@@ -144,6 +144,9 @@ class Spec:
def get_configmaps(self):
return self.obj.get(constants.configmaps_key, {})

def get_secrets(self):
return self.obj.get(constants.secrets_key, {})

def get_container_resources(self):
return Resources(
self.obj.get(constants.resources_key, {}).get("containers", {})
@@ -175,9 +178,46 @@ class Spec:
self.obj.get(constants.resources_key, {}).get(constants.volumes_key, {})
)
def get_volume_resources_for(self, volume_name: str) -> typing.Optional[Resources]:
"""Look up per-volume resource overrides from spec.yml.
Supports two formats under resources.volumes:
Global (original):
resources:
volumes:
reservations:
storage: 5Gi
Per-volume (new):
resources:
volumes:
my-volume:
reservations:
storage: 10Gi
Returns the per-volume Resources if found, otherwise None.
The caller should fall back to get_volume_resources() then the default.
"""
vol_section = self.obj.get(constants.resources_key, {}).get(
constants.volumes_key, {}
)
if volume_name not in vol_section:
return None
entry = vol_section[volume_name]
if isinstance(entry, dict) and ("reservations" in entry or "limits" in entry):
return Resources(entry)
return None
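The two accepted layouts can be distinguished with a small standalone check; this sketch mirrors the lookup above on plain dicts (without the `Resources` wrapper or `constants` keys):

```python
def volume_resources_for(spec: dict, volume_name: str):
    """Return the per-volume resources entry if present, else None."""
    vol_section = spec.get("resources", {}).get("volumes", {})
    entry = vol_section.get(volume_name)
    # Per-volume format: the entry is itself a dict with reservations/limits.
    if isinstance(entry, dict) and ("reservations" in entry or "limits" in entry):
        return entry
    return None


global_fmt = {"resources": {"volumes": {"reservations": {"storage": "5Gi"}}}}
per_volume = {"resources": {"volumes": {"my-volume": {"reservations": {"storage": "10Gi"}}}}}

# Global format has no per-volume entry, so the caller falls back to the
# stack-wide get_volume_resources() value.
assert volume_resources_for(global_fmt, "my-volume") is None
assert volume_resources_for(per_volume, "my-volume") == {"reservations": {"storage": "10Gi"}}
```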
def get_http_proxy(self):
return self.obj.get(constants.network_key, {}).get(constants.http_proxy_key, [])

def get_namespace(self):
return self.obj.get("namespace")

def get_kind_cluster_name(self):
return self.obj.get("kind-cluster-name")

def get_annotations(self):
return self.obj.get(constants.annotations_key, {})

View File

@@ -19,7 +19,7 @@ from pathlib import Path
from urllib.parse import urlparse
from tempfile import NamedTemporaryFile
-from stack_orchestrator.util import error_exit, global_options2
+from stack_orchestrator.util import error_exit, global_options2, get_yaml
from stack_orchestrator.deploy.deployment_create import init_operation, create_operation
from stack_orchestrator.deploy.deploy import create_deploy_context
from stack_orchestrator.deploy.deploy_types import DeployCommandContext
@@ -41,19 +41,23 @@ def _fixup_container_tag(deployment_dir: str, image: str):
def _fixup_url_spec(spec_file_name: str, url: str):
# url is like: https://example.com/path
parsed_url = urlparse(url)
-http_proxy_spec = f"""
-http-proxy:
-- host-name: {parsed_url.hostname}
-routes:
-- path: '{parsed_url.path if parsed_url.path else "/"}'
-proxy-to: webapp:80
-"""
spec_file_path = Path(spec_file_name)
yaml = get_yaml()
with open(spec_file_path) as rfile:
-contents = rfile.read()
-contents = contents + http_proxy_spec
+contents = yaml.load(rfile)
+contents.setdefault("network", {})["http-proxy"] = [
+{
+"host-name": parsed_url.hostname,
+"routes": [
+{
+"path": parsed_url.path if parsed_url.path else "/",
+"proxy-to": "webapp:80",
+}
+],
+}
+]
with open(spec_file_path, "w") as wfile:
-wfile.write(contents)
+yaml.dump(contents, wfile)

def create_deployment(

View File

@ -0,0 +1,47 @@
"""Sortable timestamp-based ID generation for cluster naming.
Uses base62 encoding with 100ms resolution and a 2024-01-01 epoch
to produce compact, sortable IDs like 'laconic-iqE6Za'.
Format: {prefix}-{timestamp}{random}
- timestamp: 5-6 chars (100ms resolution; six base62 chars cover ~180 years from 2024)
- random: 2 chars (3,844 unique per 100ms slot)
"""
# Adapted from exophial/src/exophial/ids.py
import random
import time
# 2024-01-01 00:00:00 UTC in milliseconds
EPOCH_2024 = 1704067200000
# Sortable base62 alphabet (0-9, A-Z, a-z)
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
def _base62(n: int) -> str:
"""Encode integer as base62 string."""
if n == 0:
return ALPHABET[0]
s = ""
while n:
n, r = divmod(n, 62)
s = ALPHABET[r] + s
return s
def _random_suffix(length: int = 2) -> str:
"""Generate random base62 suffix."""
return "".join(random.choice(ALPHABET) for _ in range(length))
def _timestamp_id() -> str:
"""Generate a sortable timestamp ID (100ms resolution, 2024 epoch) with random suffix."""
now_ms = int(time.time() * 1000)
offset = (now_ms - EPOCH_2024) // 100 # 100ms resolution
return f"{_base62(offset)}{_random_suffix()}"
def generate_id(prefix: str) -> str:
"""Generate a sortable ID with an arbitrary prefix like 'laconic-iqE6Za'."""
return f"{prefix}-{_timestamp_id()}"
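The sortability of these IDs comes from the alphabet being in ASCII order, so equal-length encodings compare lexicographically the same way their values compare numerically. A quick check of the encoding (reproducing `_base62` inline so the snippet is self-contained):

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"


def base62(n: int) -> str:
    # Same digit loop as _base62 above.
    if n == 0:
        return ALPHABET[0]
    s = ""
    while n:
        n, r = divmod(n, 62)
        s = ALPHABET[r] + s
    return s


assert base62(0) == "0"
assert base62(61) == "z"
assert base62(62) == "10"
# Equal-length encodings sort like the numbers they encode.
assert base62(100) < base62(200)
```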

View File

@@ -75,6 +75,8 @@ def get_parsed_stack_config(stack):
def get_pod_list(parsed_stack):
# Handle both old and new format
if "pods" not in parsed_stack or not parsed_stack["pods"]:
return []
pods = parsed_stack["pods"]
if type(pods[0]) is str:
result = pods
@@ -103,7 +105,7 @@ def get_job_list(parsed_stack):
def get_plugin_code_paths(stack) -> List[Path]:
parsed_stack = get_parsed_stack_config(stack)
-pods = parsed_stack["pods"]
+pods = parsed_stack.get("pods") or []
result: Set[Path] = set()
for pod in pods:
if type(pod) is str:
@@ -153,15 +155,16 @@ def resolve_job_compose_file(stack, job_name: str):
if proposed_file.exists():
return proposed_file
# If we don't find it fall through to the internal case
-# TODO: Add internal compose-jobs directory support if needed
-# For now, jobs are expected to be in external stacks only
-compose_jobs_base = Path(stack).parent.parent.joinpath("compose-jobs")
+data_dir = Path(__file__).absolute().parent.joinpath("data")
+compose_jobs_base = data_dir.joinpath("compose-jobs")
return compose_jobs_base.joinpath(f"docker-compose-{job_name}.yml")

def get_pod_file_path(stack, parsed_stack, pod_name: str):
-pods = parsed_stack["pods"]
+pods = parsed_stack.get("pods") or []
result = None
if not pods:
return result
if type(pods[0]) is str:
result = resolve_compose_file(stack, pod_name)
else:
@@ -189,9 +192,9 @@ def get_job_file_path(stack, parsed_stack, job_name: str):
def get_pod_script_paths(parsed_stack, pod_name: str):
-pods = parsed_stack["pods"]
+pods = parsed_stack.get("pods") or []
result = []
-if not type(pods[0]) is str:
+if not pods or not type(pods[0]) is str:
for pod in pods:
if pod["name"] == pod_name:
pod_root_dir = os.path.join(
@@ -207,9 +210,9 @@ def get_pod_script_paths(parsed_stack, pod_name: str):
def pod_has_scripts(parsed_stack, pod_name: str):
-pods = parsed_stack["pods"]
+pods = parsed_stack.get("pods") or []
result = False
-if type(pods[0]) is str:
+if not pods or type(pods[0]) is str:
result = False
else:
for pod in pods:

View File

@@ -105,6 +105,15 @@ fi
# Add a config file to be picked up by the ConfigMap before starting.
echo "dbfc7a4d-44a7-416d-b5f3-29842cc47650" > $test_deployment_dir/configmaps/test-config/test_config

# Add secrets to the deployment spec (references a pre-existing k8s Secret by name).
# deploy init already writes an empty 'secrets: {}' key, so we replace it
# rather than appending (ruamel.yaml rejects duplicate keys).
deployment_spec_file=${test_deployment_dir}/spec.yml
sed -i 's/^secrets: {}$/secrets:\n test-secret:\n - TEST_SECRET_KEY/' ${deployment_spec_file}

# Get the deployment ID for kubectl queries
deployment_id=$(cat ${test_deployment_dir}/deployment.yml | cut -d ' ' -f 2)

echo "deploy create output file test: passed"
# Try to start the deployment
$TEST_TARGET_SO deployment --dir $test_deployment_dir start
@@ -166,12 +175,71 @@ else
delete_cluster_exit
fi
-# Stop then start again and check the volume was preserved
-$TEST_TARGET_SO deployment --dir $test_deployment_dir stop
-# Sleep a bit just in case
-# sleep for longer to check if that's why the subsequent create cluster fails
-sleep 20
-$TEST_TARGET_SO deployment --dir $test_deployment_dir start
# --- New feature tests: namespace, labels, jobs, secrets ---

# Check that the pod is in the deployment-specific namespace (not default)
ns_pod_count=$(kubectl get pods -n laconic-${deployment_id} -l app=${deployment_id} --no-headers 2>/dev/null | wc -l)
if [ "$ns_pod_count" -gt 0 ]; then
echo "namespace isolation test: passed"
else
echo "namespace isolation test: FAILED"
echo "Expected pod in namespace laconic-${deployment_id}"
delete_cluster_exit
fi
# Check that the stack label is set on the pod
stack_label_count=$(kubectl get pods -n laconic-${deployment_id} -l app.kubernetes.io/stack=test --no-headers 2>/dev/null | wc -l)
if [ "$stack_label_count" -gt 0 ]; then
echo "stack label test: passed"
else
echo "stack label test: FAILED"
delete_cluster_exit
fi
# Check that the job completed successfully
for i in {1..30}; do
job_status=$(kubectl get job ${deployment_id}-job-test-job -n laconic-${deployment_id} -o jsonpath='{.status.succeeded}' 2>/dev/null || true)
if [ "$job_status" == "1" ]; then
break
fi
sleep 2
done
if [ "$job_status" == "1" ]; then
echo "job completion test: passed"
else
echo "job completion test: FAILED"
echo "Job status.succeeded: ${job_status}"
delete_cluster_exit
fi
# Check that the secrets spec results in an envFrom secretRef on the pod
secret_ref=$(kubectl get pod -n laconic-${deployment_id} -l app=${deployment_id} \
-o jsonpath='{.items[0].spec.containers[0].envFrom[?(@.secretRef.name=="test-secret")].secretRef.name}' 2>/dev/null || true)
if [ "$secret_ref" == "test-secret" ]; then
echo "secrets envFrom test: passed"
else
echo "secrets envFrom test: FAILED"
echo "Expected secretRef 'test-secret', got: ${secret_ref}"
delete_cluster_exit
fi
# Stop then start again and check the volume was preserved.
# Use --skip-cluster-management to reuse the existing kind cluster instead of
# destroying and recreating it (which fails on CI runners due to stale etcd/certs
# and cgroup detection issues).
# Use --delete-volumes to clear PVs so fresh PVCs can bind on restart.
# Bind-mount data survives on the host filesystem; provisioner volumes are recreated fresh.
$TEST_TARGET_SO deployment --dir $test_deployment_dir stop --delete-volumes --skip-cluster-management
# Wait for the namespace to be fully terminated before restarting.
# Without this, 'start' fails with 403 Forbidden because the namespace
# is still in Terminating state.
for i in {1..60}; do
if ! kubectl get namespace laconic-${deployment_id} 2>/dev/null | grep -q .; then
break
fi
sleep 2
done
$TEST_TARGET_SO deployment --dir $test_deployment_dir start --skip-cluster-management
wait_for_pods_started
wait_for_log_output
sleep 1
@@ -184,8 +252,9 @@ else
delete_cluster_exit
fi
-# These volumes will be completely destroyed by the kind delete/create, because they lived inside
-# the kind container. So, unlike the bind-mount case, they will appear fresh after the restart.
+# Provisioner volumes are destroyed when PVs are deleted (--delete-volumes on stop).
+# Unlike bind-mount volumes whose data persists on the host, provisioner storage
+# is gone, so the volume appears fresh after restart.
log_output_11=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs )
if [[ "$log_output_11" == *"/data2 filesystem is fresh"* ]]; then
echo "Fresh provisioner volumes test: passed"

View File

@@ -206,7 +206,7 @@ fi
# The deployment's pod should be scheduled onto node: worker3
# Check that's what happened
# Get the node onto which the stack pod has been deployed
-deployment_node=$(kubectl get pods -l app=${deployment_id} -o=jsonpath='{.items..spec.nodeName}')
+deployment_node=$(kubectl get pods -n laconic-${deployment_id} -l app=${deployment_id} -o=jsonpath='{.items..spec.nodeName}')
expected_node=${deployment_id}-worker3
echo "Stack pod deployed to node: ${deployment_node}"
if [[ ${deployment_node} == ${expected_node} ]]; then

View File

@@ -1,5 +1,5 @@
#!/usr/bin/env bash
# TODO: handle ARM
-curl --silent -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
+curl --silent -Lo ./kind https://kind.sigs.k8s.io/dl/v0.25.0/kind-linux-amd64
chmod +x ./kind
mv ./kind /usr/local/bin

View File

@@ -1,5 +1,6 @@
#!/usr/bin/env bash
# TODO: handle ARM
-curl --silent -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
+# Pin kubectl to match Kind's default k8s version (v1.31.x)
+curl --silent -LO "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"
chmod +x ./kubectl
mv ./kubectl /usr/local/bin