The previous approach of mounting cri-base.json into kind nodes failed
because we didn't tell containerd to use it via containerdConfigPatches.
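For reference, a minimal sketch of what that patch looks like in kind-config.yml, assuming the renamed spec file is mounted into the node at /etc/containerd/high-memlock-spec.json (handler name and mount path are illustrative):
```
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  # Register a runtime handler whose base OCI spec carries the raised rlimits
  - |-
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.high-memlock]
      runtime_type = "io.containerd.runc.v2"
      base_runtime_spec = "/etc/containerd/high-memlock-spec.json"
```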
RuntimeClass lets different stacks use different rlimit profiles, which is
essential because kind supports only one cluster per host, so multiple
stacks share the same cluster.
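After cluster creation, each containerd handler is surfaced to k8s as a RuntimeClass; a sketch (the high-memlock name is illustrative):
```
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: high-memlock
# Must match the runtime handler name registered with containerd
handler: high-memlock
```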
Changes:
- Add containerdConfigPatches to kind-config.yml to define runtime handlers
- Create RuntimeClass resources after cluster creation
- Add runtimeClassName to pod specs based on stack's security settings (see the sketch after this list)
- Rename cri-base.json to high-memlock-spec.json for clarity
- Add get_runtime_class() method to Spec that auto-derives from
unlimited-memlock setting
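Net effect: a pod in a stack with `security.unlimited-memlock: true` gets the derived runtime class in its spec, e.g. (handler name illustrative):
```
spec:
  runtimeClassName: high-memlock
```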
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add pyrightconfig.json for pyright 1.1.408 TOML parsing workaround
- Add NoReturn annotations to fatal() functions for proper type narrowing
- Add None checks and assertions after require=True get_record() calls
- Fix AttrDict class with __getattr__ for dynamic attribute access
- Add type annotations and casts for Kubernetes client objects
- Store compose config as DockerDeployer instance attributes
- Filter None values from dotenv and environment mappings
- Use hasattr/getattr patterns for optional container attributes
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add spec.yml option `security.unlimited-memlock` that configures
RLIMIT_MEMLOCK to unlimited for Kind cluster pods. This is needed
for workloads like Solana validators that require large amounts of
locked memory for memory-mapped files during snapshot decompression.
When enabled, generates a cri-base.json file with rlimits and mounts
it into the Kind node to override the default containerd runtime spec.
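In spec.yml the option reads:
```
security:
  unlimited-memlock: true
```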
Also includes flake8 line-length fixes for affected files.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Reviewed-on: https://git.vdb.to/cerc-io/stack-orchestrator/pulls/917
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
NodePort example:
```
network:
  ports:
    caddy:
      - 1234
      - 32020:2020
```
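Reading the example, `32020:2020` pins node port 32020 to container port 2020, while a bare `1234` presumably leaves the node port for k8s to assign. A sketch of the resulting Service under that interpretation (abbreviated; no selector shown):
```
apiVersion: v1
kind: Service
metadata:
  name: caddy
spec:
  type: NodePort
  ports:
    # 32020:2020 -> fixed node port forwarding to container port 2020
    - port: 2020
      targetPort: 2020
      nodePort: 32020
```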
Replicas example:
```
replicas: 2
```
This also adds an optimization for k8s: if a directory matching the name of a configmap exists beneath config/ in the stack, its contents will be copied into the corresponding configmap.
For example:
```
# Config files in the stack
❯ ls stack-orchestrator/config/caddyconfig
Caddyfile Caddyfile.one-req-per-upstream-example
# ConfigMap in the spec
❯ cat foo.yml | grep config
...
configmaps:
  caddyconfig: ./configmaps/caddyconfig
# Create the deployment
❯ laconic-so --stack ~/cerc/caddy-ethcache/stack-orchestrator/stacks/caddy-ethcache deploy create --spec-file foo.yml
# The files from beneath config/<config_map_name> have been copied to the ConfigMap directory from the spec.
❯ ls deployment-001/configmaps/caddyconfig
Caddyfile Caddyfile.one-req-per-upstream-example
```
Reviewed-on: https://git.vdb.to/cerc-io/stack-orchestrator/pulls/913
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Rather than always requesting a new certificate, attempt to re-use a certificate that already exists in the k8s cluster, including matching against a wildcard certificate.
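For example, a deployment requesting TLS for svc.example.com could match and re-use an existing wildcard certificate like the following (a sketch assuming cert-manager-style Certificate resources; names are illustrative):
```
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com-wildcard
spec:
  secretName: example-com-wildcard-tls
  dnsNames:
    # wildcard covering svc.example.com
    - "*.example.com"
```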
Reviewed-on: https://git.vdb.to/cerc-io/stack-orchestrator/pulls/779
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
In kind, when we bind-mount a host directory it is first mounted into the kind container at /mnt, then into the pod at the desired location.
We accidentally picked this up for full-blown k8s, and were creating volumes at /mnt. This changes the behavior for both kind and regular k8s so that bind mounts are only allowed if a fully-qualified path is specified. If no path is specified at all, a default storageClass is assumed to be present and the volume is managed by a provisioner.
E.g., for kind, the default provisioner is: https://github.com/rancher/local-path-provisioner
```
stack: test
deploy-to: k8s-kind
config:
  test-variable-1: test-value-1
network:
  ports:
    test:
      - '80'
volumes:
  # this will be bind-mounted to a host-path
  test-data-bind: /srv/data
  # this will be managed by the k8s node
  test-data-auto:
configmaps:
  test-config: ./configmap/test-config
```
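With that spec, test-data-bind is bind-mounted from /srv/data, while test-data-auto ends up behind a claim against the default storageClass, roughly as follows (a sketch; claim name and size are assumptions):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-data-auto
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```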
Reviewed-on: https://git.vdb.to/cerc-io/stack-orchestrator/pulls/741
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>