# Stack Orchestrator
Stack Orchestrator allows building and deployment of a Laconic Stack on a single machine with minimal prerequisites. It is a Python3 CLI tool that runs on any OS with Python3 and Docker. The following diagram summarizes the relevant repositories in the Laconic Stack and their relationship to Stack Orchestrator.
## Install
To get started quickly on a fresh Ubuntu instance (e.g., Digital Ocean), try this script. WARNING: always review scripts prior to running them so that you know what is happening on your machine.
For any other installation, follow along below and adapt these instructions based on the specifics of your system.
Ensure that the following are already installed:
- Python3: `python3 --version` >= `3.8.10` (the Python3 shipped in Ubuntu 20+ is good to go)
- Docker: `docker --version` >= `20.10.21`
- jq: `jq --version` >= `1.5`
- git: `git --version` >= `2.10.3`
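As a quick sanity check (not part of the upstream instructions, just a convenience), a small shell loop can confirm each prerequisite is installed and print its version:

```bash
# Print the version of each required tool, or a warning if it is missing.
for cmd in python3 docker jq git; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: $($cmd --version | head -n 1)"
  else
    echo "WARNING: $cmd not found on PATH"
  fi
done
```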
Note: if installing docker-compose via a package manager on Linux (as opposed to Docker Desktop), you must also install the compose plugin, e.g.:
```bash
mkdir -p ~/.docker/cli-plugins
curl -SL https://github.com/docker/compose/releases/download/v2.11.2/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
```
Next, decide on a directory where you would like to put the stack-orchestrator program. Typically this would be a "user" binary directory such as `~/bin`, or perhaps `/usr/local/laconic`, or possibly just the current working directory.
Now, having selected that directory, download the latest release from this page into it (we're using `~/bin` below for concreteness, but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
```bash
curl -L -o ~/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
```
Give it execute permissions:
```bash
chmod +x ~/bin/laconic-so
```
Ensure `laconic-so` is on your `PATH`.
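If the directory you chose is not already on your `PATH`, a snippet like the following adds it (assuming `~/bin` and a bash shell; adjust for your shell and directory):

```bash
# Make ~/bin visible in this session and in future bash sessions.
export PATH="$HOME/bin:$PATH"
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc
```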
Verify operation (your version will probably be different; just check that you see some version output and not an error):
```bash
laconic-so version
Version: 1.1.0-7a607c2-202304260513
```
Save the distribution URL to `~/.laconic-so/config.yml`:

```bash
mkdir ~/.laconic-so
echo "distribution-url: https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so" > ~/.laconic-so/config.yml
```
## Update
If Stack Orchestrator was installed using the process described above, it can subsequently self-update to the latest version by running:
```bash
laconic-so update
```
## Usage
Each stack contains instructions for running it, depending on your use case. The sections below cover the supported deployment types, external stacks, and the core deployment commands.
### Deployment Types
- `compose`: Docker Compose on local machine
- `k8s`: External Kubernetes cluster (requires kubeconfig)
- `k8s-kind`: Local Kubernetes via Kind; one cluster per host, shared by all deployments
### External Stacks
Stacks can live in external git repositories. Required structure:
```
<repo>/
  stack_orchestrator/data/
    stacks/<stack-name>/stack.yml
    compose/docker-compose-<pod-name>.yml
    deployment/spec.yml
```
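As a sketch of how this fits together (the repository URL and stack name below are hypothetical; the `--stack`, `--spec-file`, and `--deployment-dir` options are the ones documented under Deployment Commands below), an external stack is cloned and then referenced by path:

```bash
# Clone a hypothetical external stack repository...
git clone https://github.com/example-org/my-external-stack.git ~/stacks/my-external-stack

# ...then reference the stack directory by path when creating a deployment
# (assumes a spec.yml has already been written; see the spec.yml Reference below).
laconic-so --stack ~/stacks/my-external-stack/stack_orchestrator/data/stacks/my-stack \
  deploy create --spec-file spec.yml --deployment-dir my-deployment
```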
### Deployment Commands
```bash
# Create deployment from spec
laconic-so --stack <path> deploy create --spec-file <spec.yml> --deployment-dir <dir>

# Start (creates cluster on first run)
laconic-so deployment --dir <dir> start

# GitOps restart (git pull + redeploy, preserves data)
laconic-so deployment --dir <dir> restart

# Stop
laconic-so deployment --dir <dir> stop
```
### spec.yml Reference
```yaml
stack: stack-name-or-path
deploy-to: k8s-kind
network:
  http-proxy:
    - host-name: app.example.com
      routes:
        - path: /
          proxy-to: service-name:port
  acme-email: admin@example.com
config:
  ENV_VAR: value
  SECRET_VAR: $generate:hex:32$  # Auto-generated, stored in K8s Secret
volumes:
  volume-name:
```
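To tie the reference to the commands above, here is a minimal end-to-end sketch; the stack path and deployment directory are hypothetical, and only options documented in this README are used:

```bash
# Write a minimal spec for a hypothetical stack...
cat > spec.yml <<'EOF'
stack: my-stack
deploy-to: compose
config:
  ENV_VAR: value
EOF

# ...create a deployment from it, then start and later stop it.
laconic-so --stack ~/stacks/my-stack deploy create --spec-file spec.yml --deployment-dir my-deployment
laconic-so deployment --dir my-deployment start
laconic-so deployment --dir my-deployment stop
```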
## Contributing
See CONTRIBUTING.md for the developer mode install.
## Platform Support
Native aarch64 is not currently supported. x64 emulation on ARM64 macOS should work (not yet tested).