Apply pre-commit linting fixes

Fix trailing whitespace and end-of-file issues across codebase.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
helm-charts-with-caddy
A. F. Dudley 2026-01-20 23:16:44 -05:00
parent 89db6e1e92
commit 5a1399f2b2
72 changed files with 84 additions and 101 deletions

View File

@ -1 +1 @@
Change this file to trigger running the test-container-registry CI job

View File

@ -1,2 +1,2 @@
Change this file to trigger running the test-database CI job
Trigger test run

View File

@ -1,2 +1 @@
Change this file to trigger running the fixturenet-eth-test CI job

View File

@ -658,4 +658,4 @@
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<http://www.gnu.org/licenses/>.

View File

@ -26,7 +26,7 @@ curl -SL https://github.com/docker/compose/releases/download/v2.11.2/docker-comp
chmod +x ~/.docker/cli-plugins/docker-compose
```
Next, decide on a directory where you would like to put the stack-orchestrator program. Typically this would be
a "user" binary directory such as `~/bin` or perhaps `/usr/local/laconic` or possibly just the current working directory.
Now, having selected that directory, download the latest release from [this page](https://git.vdb.to/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
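The steps above can be sketched as a short script. The release URL and asset name below are assumptions (take the actual link from the tags page; `v1.0.17` is only an example tag):

```shell
# Sketch: download a stack-orchestrator release into ~/bin
INSTALL_DIR="$HOME/bin"
# Placeholder URL -- substitute the real asset link from the tags page
RELEASE_URL="https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/v1.0.17/laconic-so"
mkdir -p "$INSTALL_DIR"    # ensure the destination exists and is writable
curl -fsSL -o "$INSTALL_DIR/laconic-so" "$RELEASE_URL" || echo "download failed; check the URL"
chmod +x "$INSTALL_DIR/laconic-so" 2>/dev/null || true
export PATH="$INSTALL_DIR:$PATH"    # make the binary reachable from the shell
```

Adding the export line to your shell profile makes the installation persistent across sessions.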
@ -78,5 +78,3 @@ See the [CONTRIBUTING.md](/docs/CONTRIBUTING.md) for developer mode install.
## Platform Support
Native arm64 is _not_ currently supported. x64 emulation on ARM64 macOS should work (not yet tested).

View File

@ -1,9 +1,9 @@
# Fetching pre-built container images
When Stack Orchestrator deploys a stack containing a suite of one or more containers, it expects images for those containers to be on the local machine with a tag of the form `<image-name>:local`. Images for these containers can be built from source (and optionally base container images from public registries) with the `build-containers` subcommand.
However, building a large number of containers from source may consume considerable time and machine resources. This is where the `fetch-containers` subcommand steps in. It is designed to work exactly like `build-containers`, but instead the pre-built images are fetched from an image registry and then re-tagged for deployment. It can be used in place of `build-containers` for any stack, provided the necessary containers, built for the local machine architecture (e.g. arm64 or x86-64), have already been published in an image registry.
## Usage
To use `fetch-containers`, provide an image registry path, a username and token/password with read access to the registry, and optionally specify `--force-local-overwrite`. If this argument is not specified and there is already a locally built or previously fetched image for a stack container on the machine, it will not be overwritten and a warning will be issued.
```
$ laconic-so --stack mobymask-v3-demo fetch-containers --image-registry git.vdb.to/cerc-io --registry-username <registry-user> --registry-token <registry-token> --force-local-overwrite
```

View File

@ -7,7 +7,7 @@ Deploy a local Gitea server, publish NPM packages to it, then use those packages
```bash
laconic-so --stack build-support build-containers
laconic-so --stack package-registry setup-repositories
laconic-so --stack package-registry build-containers
laconic-so --stack package-registry deploy up
```

View File

@ -24,4 +24,3 @@ node-tolerations:
    value: typeb
```
This example denotes that the stack's pods will tolerate a taint: `nodetype=typeb`
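For reference, the stanza above corresponds to a standard Kubernetes pod toleration along these lines (a sketch; the exact rendering is internal to laconic-so, and the `operator`/`effect` values shown are assumptions):

```yaml
# Hypothetical pod-spec toleration matching a nodetype=typeb taint
tolerations:
  - key: nodetype
    operator: Equal
    value: typeb
    effect: NoSchedule
```

The matching taint would be applied to a node out-of-band, e.g. with `kubectl taint nodes <node> nodetype=typeb:NoSchedule`.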

View File

@ -26,4 +26,3 @@ $ ./scripts/tag_new_release.sh 1 0 17
$ ./scripts/build_shiv_package.sh
$ ./scripts/publish_shiv_package_github.sh 1 0 17
```

View File

@ -4,9 +4,9 @@ Note: this page is out of date (but still useful) - it will no longer be useful
## Implementation
The orchestrator's operation is driven by the files shown below.
- `repository-list.txt` contains the list of git repositories
- `container-image-list.txt` contains the list of container image names
- `pod-list.txt` specifies the set of compose components (corresponding to individual docker-compose-xxx.yml files which may in turn specify more than one container)
- `container-build/` contains the files required to build each container image

View File

@ -7,7 +7,7 @@ compilation and static page generation are separated in the `build-webapp` and `
This offers much more flexibility than standard Next.js build methods, since any environment variables accessed
via `process.env`, whether for pages or for API, will have values drawn from their runtime deployment environment,
not their build environment.
## Building

View File

@ -4,7 +4,7 @@
# https://github.com/cerc-io/github-release-api
# User must define: CERC_GH_RELEASE_SCRIPTS_DIR
# pointing to the location of that cloned repository
# e.g.
# cd ~/projects
# git clone https://github.com/cerc-io/github-release-api
# cd ./stack-orchestrator

View File

@ -94,7 +94,7 @@ sudo apt -y install jq
# laconic-so depends on git
sudo apt -y install git
# curl used below
sudo apt -y install curl
# docker repo add depends on gnupg and updated ca-certificates
sudo apt -y install ca-certificates gnupg

View File

@ -3,7 +3,7 @@
# Uses this script package to tag a new release:
# User must define: CERC_GH_RELEASE_SCRIPTS_DIR
# pointing to the location of that cloned repository
# e.g.
# cd ~/projects
# git clone https://github.com/cerc-io/github-release-api
# cd ./stack-orchestrator

View File

@ -26,4 +26,3 @@ class BuildContext:
    container_build_dir: Path
    container_build_env: Mapping[str,str]
    dev_root_path: str

View File

@ -79,7 +79,7 @@ def _find_latest(candidate_tags: List[str]):
    return sorted_candidates[-1]
def _filter_for_platform(container: str,
                         registry_info: RegistryInfo,
                         tag_list: List[str]) -> List[str]:
    filtered_tags = []

View File

@ -20,7 +20,7 @@ services:
    depends_on:
      generate-jwt:
        condition: service_completed_successfully
    env_file:
      - ../config/fixturenet-blast/${NETWORK:-fixturenet}.config
  blast-geth:
    image: blastio/blast-geth:${NETWORK:-testnet-sepolia}
@ -51,7 +51,7 @@ services:
      --nodiscover
      --maxpeers=0
      --rollup.disabletxpoolgossip=true
    env_file:
      - ../config/fixturenet-blast/${NETWORK:-fixturenet}.config
    depends_on:
      geth-init:
@ -73,7 +73,7 @@ services:
      --rollup.config="/blast/rollup.json"
    depends_on:
      - blast-geth
    env_file:
      - ../config/fixturenet-blast/${NETWORK:-fixturenet}.config
    volumes:

View File

@ -14,4 +14,3 @@ services:
- "9090" - "9090"
- "9091" - "9091"
- "1317" - "1317"

View File

@ -19,7 +19,7 @@ services:
    depends_on:
      generate-jwt:
        condition: service_completed_successfully
    env_file:
      - ../config/mainnet-blast/${NETWORK:-mainnet}.config
  blast-geth:
    image: blastio/blast-geth:${NETWORK:-mainnet}
@ -53,7 +53,7 @@ services:
      --nodiscover
      --maxpeers=0
      --rollup.disabletxpoolgossip=true
    env_file:
      - ../config/mainnet-blast/${NETWORK:-mainnet}.config
    depends_on:
      geth-init:
@ -76,7 +76,7 @@ services:
      --rollup.config="/blast/rollup.json"
    depends_on:
      - blast-geth
    env_file:
      - ../config/mainnet-blast/${NETWORK:-mainnet}.config
    volumes:

View File

@ -17,4 +17,3 @@ services:
      - URL_NEUTRON_TEST_REST=https://rest-palvus.pion-1.ntrn.tech
      - URL_NEUTRON_TEST_RPC=https://rpc-palvus.pion-1.ntrn.tech
      - WALLET_CONNECT_ID=0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x

View File

@ -32,4 +32,4 @@ services:
volumes:
  reth_data:
  lighthouse_data:
  shared_data:

View File

@ -12,7 +12,7 @@ services:
      POSTGRES_INITDB_ARGS: "-E UTF8 --locale=C"
    ports:
      - "5432"
  test-client:
    image: cerc/test-database-client:local

View File

@ -1,2 +1,2 @@
GETH_ROLLUP_SEQUENCERHTTP=https://sequencer.s2.testblast.io
OP_NODE_P2P_BOOTNODES=enr:-J-4QM3GLUFfKMSJQuP1UvuKQe8DyovE7Eaiit0l6By4zjTodkR4V8NWXJxNmlg8t8rP-Q-wp3jVmeAOml8cjMj__ROGAYznzb_HgmlkgnY0gmlwhA-cZ_eHb3BzdGFja4X947FQAIlzZWNwMjU2azGhAiuDqvB-AsVSRmnnWr6OHfjgY8YfNclFy9p02flKzXnOg3RjcIJ2YYN1ZHCCdmE,enr:-J-4QDCVpByqQ8nFqCS9aHicqwUfXgzFDslvpEyYz19lvkHLIdtcIGp2d4q5dxHdjRNTO6HXCsnIKxUeuZSPcEbyVQCGAYznzz0RgmlkgnY0gmlwhANiQfuHb3BzdGFja4X947FQAIlzZWNwMjU2azGhAy3AtF2Jh_aPdOohg506Hjmtx-fQ1AKmu71C7PfkWAw9g3RjcIJ2YYN1ZHCCdmE

View File

@ -1411,4 +1411,4 @@
"uid": "nT9VeZoVk", "uid": "nT9VeZoVk",
"version": 2, "version": 2,
"weekStart": "" "weekStart": ""
} }

View File

@ -65,7 +65,7 @@ if [ -n "$CERC_L1_ADDRESS" ] && [ -n "$CERC_L1_PRIV_KEY" ]; then
  # Sequencer
  SEQ=$(echo "$wallet3" | awk '/Address:/{print $2}')
  SEQ_KEY=$(echo "$wallet3" | awk '/Private key:/{print $3}')
  echo "Funding accounts."
  wait_for_block 1 300
  cast send --from $ADMIN --rpc-url $CERC_L1_RPC --value 5ether $PROPOSER --private-key $ADMIN_KEY

View File

@ -56,7 +56,7 @@
"value": "!validator-pubkey" "value": "!validator-pubkey"
} }
} }
} }
], ],
"supply": [] "supply": []
}, },
@ -269,4 +269,4 @@
"claims": null "claims": null
} }
} }
} }

View File

@ -2084,4 +2084,4 @@
"clientPolicies": { "clientPolicies": {
"policies": [] "policies": []
} }
} }

View File

@ -2388,4 +2388,4 @@
"clientPolicies": { "clientPolicies": {
"policies": [] "policies": []
} }
} }

View File

@ -29,4 +29,3 @@
"l1_system_config_address": "0x5531dcff39ec1ec727c4c5d2fc49835368f805a9", "l1_system_config_address": "0x5531dcff39ec1ec727c4c5d2fc49835368f805a9",
"protocol_versions_address": "0x0000000000000000000000000000000000000000" "protocol_versions_address": "0x0000000000000000000000000000000000000000"
} }

View File

@ -2388,4 +2388,4 @@
"clientPolicies": { "clientPolicies": {
"policies": [] "policies": []
} }
} }

View File

@ -1901,4 +1901,4 @@
"uid": "b54352dd-35f6-4151-97dc-265bab0c67e9", "uid": "b54352dd-35f6-4151-97dc-265bab0c67e9",
"version": 18, "version": 18,
"weekStart": "" "weekStart": ""
} }

View File

@ -849,7 +849,7 @@ groups:
    annotations:
      summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
    isPaused: false
  # Secured Finance
  - uid: secured_finance_diff_external
    title: secured_finance_watcher_head_tracking

View File

@ -14,7 +14,7 @@ echo ACCOUNT_PRIVATE_KEY=${CERC_PRIVATE_KEY_DEPLOYER} >> .env
if [ -f ${erc20_address_file} ]; then
  echo "${erc20_address_file} already exists, skipping ERC20 contract deployment"
  cat ${erc20_address_file}
  # Keep the container running
  tail -f
fi

View File

@ -940,4 +940,3 @@ ALTER TABLE ONLY public.state
--
-- PostgreSQL database dump complete
--

View File

@ -18,4 +18,3 @@ root@7c4124bb09e3:/src#
```
Now gerbil commands can be run.

View File

@ -23,7 +23,7 @@ local_npm_registry_url=$2
versioned_target_package=$(yarn list --pattern ${target_package} --depth=0 --json --non-interactive --no-progress | jq -r '.data.trees[].name')
# Use yarn info to get URL checksums etc from the new registry
yarn_info_output=$(yarn info --json $versioned_target_package 2>/dev/null)
# First check if the target version actually exists.
# If it doesn't exist there will be no .data.dist.tarball element,
# and jq will output the string "null"
package_tarball=$(echo $yarn_info_output | jq -r .data.dist.tarball)
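The null-check pattern above can be exercised standalone: when the requested path is absent, `jq -r` emits the literal string "null". A minimal sketch using canned JSON in place of real `yarn info` output (the tarball URL is a placeholder):

```shell
# Simulate yarn info output with and without the .data.dist.tarball element
present='{"data":{"dist":{"tarball":"https://registry.example/pkg-1.0.0.tgz"}}}'
missing='{"data":{}}'

tarball=$(echo "$present" | jq -r .data.dist.tarball)
echo "present -> $tarball"

tarball=$(echo "$missing" | jq -r .data.dist.tarball)
if [ "$tarball" = "null" ]; then
  echo "missing -> version does not exist in registry"
fi
```

Comparing against the string "null" is what lets the surrounding script distinguish "version not published" from a real tarball URL.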

View File

@ -4,4 +4,4 @@ out = 'out'
libs = ['lib']
remappings = ['ds-test/=lib/ds-test/src/']
# See more config options https://github.com/gakonst/foundry/tree/master/config

View File

@ -20,4 +20,4 @@ contract Stateful {
    function inc() public {
        x = x + 1;
    }
}

View File

@ -11,4 +11,4 @@ record:
  foo: bar
  tags:
    - a
    - b

View File

@ -9,4 +9,4 @@ record:
  foo: bar
  tags:
    - a
    - b

View File

@ -1,4 +1,4 @@
#!/usr/bin/env bash
# Build cerc/laconicd
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
docker build -t cerc/laconicd:local ${build_command_args} ${CERC_REPO_BASE_DIR}/laconicd

View File

@ -36,7 +36,7 @@ if [ -f "./run-webapp.sh" ]; then
  ./run-webapp.sh &
  tpid=$!
  wait $tpid
else
  "$SCRIPT_DIR/apply-runtime-env.sh" "`pwd`" .next .next-r
  mv .next .next.old
  mv .next-r/.next .

View File

@ -5,4 +5,3 @@ WORKDIR /app
COPY . .
RUN yarn

View File

@ -22,7 +22,7 @@ fi
# infers the directory from which to load chain configuration files
# by the presence or absence of the substring "testnet" in the host name
# (browser side -- the host name of the host in the address bar of the browser)
# Accordingly we configure our network in both directories in order to
# subvert this lunacy.
explorer_mainnet_config_dir=/app/chains/mainnet
explorer_testnet_config_dir=/app/chains/testnet

View File

@ -2,4 +2,4 @@
# Build cerc/test-container
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/test-database-client:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} $SCRIPT_DIR

View File

@ -8,7 +8,7 @@ CERC_WEBAPP_FILES_DIR="${CERC_WEBAPP_FILES_DIR:-/data}"
CERC_ENABLE_CORS="${CERC_ENABLE_CORS:-false}"
CERC_SINGLE_PAGE_APP="${CERC_SINGLE_PAGE_APP}"
if [ -z "${CERC_SINGLE_PAGE_APP}" ]; then
  # If there is only one HTML file, assume an SPA.
  if [ 1 -eq $(find "${CERC_WEBAPP_FILES_DIR}" -name '*.html' | wc -l) ]; then
    CERC_SINGLE_PAGE_APP=true
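The single-page-app heuristic above can be tried in isolation: exactly one HTML file under the web root means "assume SPA". A self-contained sketch (the temporary directory layout is fabricated for illustration):

```shell
# One index.html under the web root => treat the site as an SPA
WEBROOT=$(mktemp -d)
touch "$WEBROOT/index.html"

if [ 1 -eq "$(find "$WEBROOT" -name '*.html' | wc -l)" ]; then
  CERC_SINGLE_PAGE_APP=true
else
  CERC_SINGLE_PAGE_APP=false
fi
echo "CERC_SINGLE_PAGE_APP=$CERC_SINGLE_PAGE_APP"

rm -rf "$WEBROOT"   # clean up the scratch directory
```

Adding a second `.html` file to the scratch directory flips the result to `false`, mirroring how a multi-page static site is detected.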

View File

@ -6,7 +6,7 @@ JS/TS/NPM builds need an npm registry to store intermediate package artifacts.
This can be supplied by the user (e.g. using a hosted registry or even npmjs.com), or a local registry using gitea can be deployed by stack orchestrator.
To use a user-supplied registry set these environment variables:
`CERC_NPM_REGISTRY_URL` and
`CERC_NPM_AUTH_TOKEN`
Leave `CERC_NPM_REGISTRY_URL` unset to use the local gitea registry.
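A minimal sketch of the two configurations (URL and token values are placeholders):

```shell
# User-supplied registry: set both variables before running laconic-so
export CERC_NPM_REGISTRY_URL="https://npm.example.com"   # placeholder URL
export CERC_NPM_AUTH_TOKEN="changeme"                    # placeholder token
echo "registry: ${CERC_NPM_REGISTRY_URL:-local gitea}"

# Local gitea registry: leave the URL unset and the fallback applies
unset CERC_NPM_REGISTRY_URL
echo "registry: ${CERC_NPM_REGISTRY_URL:-local gitea}"
```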
@ -22,7 +22,7 @@ $ laconic-so --stack build-support build-containers
```
$ laconic-so --stack package-registry setup-repositories
$ laconic-so --stack package-registry build-containers
$ laconic-so --stack package-registry deploy up
[+] Running 3/3
 ⠿ Network laconic-aecc4a21d3a502b14522db97d427e850_gitea  Created  0.0s

View File

@ -14,4 +14,3 @@ containers:
pods:
  - fixturenet-blast
  - foundry

View File

@ -3,4 +3,3 @@
A "loaded" version of fixturenet-eth, with all the bells and whistles enabled. A "loaded" version of fixturenet-eth, with all the bells and whistles enabled.
TODO: write me TODO: write me

View File

@ -12,7 +12,7 @@ $ chmod +x ./laconic-so
$ export PATH=$PATH:$(pwd) # Or move laconic-so to ~/bin or your favorite on-path directory
```
## 2. Prepare the local build environment
Note that this step only needs to be done once on a new machine.
Detailed instructions can be found [here](../build-support/README.md). For the impatient, run these commands:
```
$ laconic-so --stack build-support build-containers --exclude cerc/builder-gerbil

View File

@ -52,7 +52,7 @@ laconic-so --stack fixturenet-optimism deploy init --map-ports-to-host any-fixed
It is usually necessary to expose certain container ports on one or more of the host's addresses to allow incoming connections.
Any ports defined in the Docker compose file are exposed by default with random port assignments, bound to "any" interface (IP address 0.0.0.0), but the port mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.
In addition, a stack-wide port mapping "recipe" can be applied at the time the
`laconic-so deploy init` command is run, by supplying the desired recipe with the `--map-ports-to-host` option. The following recipes are supported:
| Recipe | Host Port Mapping |
|--------|-------------------|
@ -62,11 +62,11 @@ In addition, a stack-wide port mapping "recipe" can be applied at the time the
| localhost-fixed-random | Bind to 127.0.0.1 using a random port number selected at the time the command is run (not checked for already in use) |
| any-fixed-random | Bind to 0.0.0.0 using a random port number selected at the time the command is run (not checked for already in use) |
For example, you may wish to use `any-fixed-random` to generate the initial mappings and then edit the spec file to set the `fixturenet-eth-geth-1` RPC to port 8545 and the `op-geth` RPC to port 9545 on the host.
Or, you may wish to use `any-same` for the initial mappings -- in which case you'll have to edit the spec file to ensure the various geth instances aren't all trying to publish to host ports 8545/8546 at once.
### Data volumes
Container data volumes are bind-mounted to specified paths in the host filesystem.
The default setup (generated by `laconic-so deploy init`) places the volumes in the `./data` subdirectory of the deployment directory. The default mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.
@ -101,7 +101,7 @@ docker logs -f <CONTAINER_ID>
## Example: bridge some ETH from L1 to L2
Send some ETH from the desired account to the `L1StandardBridgeProxy` contract on L1 to test bridging to L2.
We can use the testing account `0xe6CE22afe802CAf5fF7d3845cec8c736ecc8d61F`, which is pre-funded and unlocked, and the `cerc/foundry:local` container to make use of the `cast` CLI.

View File

@ -38,7 +38,7 @@ laconic-so --stack fixturenet-optimism deploy init --map-ports-to-host any-fixed
It is usually necessary to expose certain container ports on one or more of the host's addresses to allow incoming connections.
Any ports defined in the Docker compose file are exposed by default with random port assignments, bound to "any" interface (IP address 0.0.0.0), but the port mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.
In addition, a stack-wide port mapping "recipe" can be applied at the time the
`laconic-so deploy init` command is run, by supplying the desired recipe with the `--map-ports-to-host` option. The following recipes are supported:
| Recipe | Host Port Mapping |
|--------|-------------------|
@ -48,9 +48,9 @@ In addition, a stack-wide port mapping "recipe" can be applied at the time the
| localhost-fixed-random | Bind to 127.0.0.1 using a random port number selected at the time the command is run (not checked for already in use)| | localhost-fixed-random | Bind to 127.0.0.1 using a random port number selected at the time the command is run (not checked for already in use)|
| any-fixed-random | Bind to 0.0.0.0 using a random port number selected at the time the command is run (not checked for already in use) | | any-fixed-random | Bind to 0.0.0.0 using a random port number selected at the time the command is run (not checked for already in use) |
For example, you may wish to use `any-fixed-random` to generate the initial mappings and then edit the spec file to set the `op-geth` RPC to an easy to remember port like 8545 or 9545 on the host. For example, you may wish to use `any-fixed-random` to generate the initial mappings and then edit the spec file to set the `op-geth` RPC to an easy to remember port like 8545 or 9545 on the host.
### Data volumes ### Data volumes
Container data volumes are bind-mounted to specified paths in the host filesystem. Container data volumes are bind-mounted to specified paths in the host filesystem.
The default setup (generated by `laconic-so deploy init`) places the volumes in the `./data` subdirectory of the deployment directory. The default mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`. The default setup (generated by `laconic-so deploy init`) places the volumes in the `./data` subdirectory of the deployment directory. The default mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.
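For illustration, editing the spec file to pin a port and a volume path might look like the fragment below. This is a sketch only: the `network`/`ports` and `volumes` layout follows the volume examples shown elsewhere in these docs, but the service name, port numbers, and volume name here are hypothetical and will differ per stack.

```yaml
# Hypothetical spec-file fragment (service and volume names are examples only).
network:
  ports:
    op-geth:
      - "9545:8545"   # bind host port 9545 to the container's 8545 RPC port
volumes:
  op_geth_data: ./data/op_geth_data   # bind-mount under the deployment's ./data dir
```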
@@ -128,7 +128,7 @@ Stack components:
removed
topics
transactionHash
transactionIndex
}
getEthBlock(
@@ -211,14 +211,14 @@ Stack components:
hash
}
log {
id
}
block {
number
}
}
metadata {
pageEndsAtTimestamp
isLastPage
}
}
@@ -227,7 +227,7 @@ Stack components:
* Open watcher Ponder app endpoint http://localhost:42069
* Try GQL query to see transfer events
```graphql
{
transferEvents (orderBy: "timestamp", orderDirection: "desc") {
@@ -251,9 +251,9 @@ Stack components:
```bash
export TOKEN_ADDRESS=$(docker exec payments-ponder-er20-contracts-1 jq -r '.address' ./deployment/erc20-address.json)
```
* Transfer token
```bash
docker exec -it payments-ponder-er20-contracts-1 bash -c "yarn token:transfer:docker --token ${TOKEN_ADDRESS} --to 0xe22AD83A0dE117bA0d03d5E94Eb4E0d80a69C62a --amount 5000"
```
@@ -48,7 +48,7 @@ or see the full logs:
$ laconic-so --stack fixturenet-pocket deploy logs pocket
```
## 5. Send a relay request to Pocket node
The Pocket node serves relay requests at `http://localhost:8081/v1/client/sim`
Example request:
```
@@ -154,12 +154,12 @@ http://127.0.0.1:<HOST_PORT>/subgraphs/name/sushiswap/v3-lotus/graphql
deployment
hasIndexingErrors
}
factories {
poolCount
id
}
pools {
id
token0 {
@@ -7,7 +7,7 @@ We will use the [ethereum-gravatar](https://github.com/graphprotocol/graph-tooli
- Clone the repo
```bash
git clone git@github.com:graphprotocol/graph-tooling.git
cd graph-tooling
```
@@ -54,11 +54,11 @@ The following steps should be similar for every subgraph
- Create and deploy the subgraph
```bash
pnpm graph create example --node <GRAPH_NODE_DEPLOY_ENDPOINT>
pnpm graph deploy example --ipfs <GRAPH_NODE_IPFS_ENDPOINT> --node <GRAPH_NODE_DEPLOY_ENDPOINT>
```
- `GRAPH_NODE_DEPLOY_ENDPOINT` and `GRAPH_NODE_IPFS_ENDPOINT` will be available after graph-node has been deployed
- More details can be seen in the [Create a deployment](./README.md#create-a-deployment) section
- The subgraph GQL endpoint will be shown after the deploy command runs successfully
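Once the deploy command succeeds, the printed GQL endpoint can be queried directly. Assuming the stock ethereum-gravatar schema (a `Gravatar` entity with `owner`, `displayName`, and `imageUrl` fields — check the subgraph's `schema.graphql` to confirm), a minimal smoke-test query might be:

```graphql
{
  gravatars(first: 5) {
    id
    owner
    displayName
    imageUrl
  }
}
```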
@@ -1,7 +1,7 @@
version: "1.0"
name: kubo
description: "Run kubo (IPFS)"
repos:
containers:
pods:
- kubo
@@ -2,7 +2,7 @@
```
laconic-so --stack laconic-dot-com setup-repositories
laconic-so --stack laconic-dot-com build-containers
laconic-so --stack laconic-dot-com deploy init --output laconic-website-spec.yml --map-ports-to-host localhost-same
laconic-so --stack laconic-dot-com deploy create --spec-file laconic-website-spec.yml --deployment-dir lx-website
laconic-so deployment --dir lx-website start
@@ -2,6 +2,6 @@
```
laconic-so --stack lasso setup-repositories
laconic-so --stack lasso build-containers
laconic-so --stack lasso deploy up
```
@@ -92,7 +92,7 @@ volumes:
  mainnet_eth_plugeth_geth_1_data: ./data/mainnet_eth_plugeth_geth_1_data
  mainnet_eth_plugeth_lighthouse_1_data: ./data/mainnet_eth_plugeth_lighthouse_1_data
```
In addition, a stack-wide port mapping "recipe" can be applied at the time the
`laconic-so deploy init` command is run, by supplying the desired recipe with the `--map-ports-to-host` option. The following recipes are supported:
| Recipe | Host Port Mapping |
|--------|-------------------|
@@ -92,7 +92,7 @@ volumes:
  mainnet_eth_geth_1_data: ./data/mainnet_eth_geth_1_data
  mainnet_eth_lighthouse_1_data: ./data/mainnet_eth_lighthouse_1_data
```
In addition, a stack-wide port mapping "recipe" can be applied at the time the
`laconic-so deploy init` command is run, by supplying the desired recipe with the `--map-ports-to-host` option. The following recipes are supported:
| Recipe | Host Port Mapping |
|--------|-------------------|
@@ -36,9 +36,9 @@ laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | 'mainnet-109331-no-histor
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.034] Maximum peer count total=50
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.034] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.034] Genesis file is a known preset name="Mainnet-109331 without history"
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.052] Applying genesis state
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.052] - Reading epochs unit 0
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.054] - Reading blocks unit 0
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.530] Applied genesis state name=main id=250 genesis=0x4a53c5445584b3bfc20dbfb2ec18ae20037c716f3ba2d9e1da768a9deca17cb4
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.531] Regenerated local transaction journal transactions=0 accounts=0
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.532] Starting peer-to-peer node instance=go-opera/v1.1.2-rc.5-50cd051d-1677276206/linux-amd64/go1.19.10
@@ -47,7 +47,7 @@ laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.537]
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.537] IPC endpoint opened url=/root/.opera/opera.ipc
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] HTTP server started endpoint=[::]:18545 prefix= cors=* vhosts=localhost
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] WebSocket enabled url=ws://[::]:18546
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] Rebuilding state snapshot
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] EVM snapshot module=gossip-store at=000000..000000 generating=true
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] Resuming state snapshot generation accounts=0 slots=0 storage=0.00B elapsed="189.74µs"
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] Generated state snapshot accounts=0 slots=0 storage=0.00B elapsed="265.061µs"
@@ -1,2 +1 @@
# Laconic Mainnet Deployment (experimental)
@@ -5,4 +5,4 @@ repos:
containers:
- cerc/watcher-mobymask
pods:
- watcher-mobymask
@@ -180,7 +180,7 @@ Set the following env variables in the deployment env config file (`monitoring-d
# (Optional, default: http://localhost:3000)
GF_SERVER_ROOT_URL=
# RPC endpoint used by graph-node for upstream head metric
# (Optional, default: https://mainnet.infura.io/v3)
GRAPH_NODE_RPC_ENDPOINT=
@@ -1,3 +1,3 @@
# Test Database Stack
A stack with a database for test/demo purposes.
@@ -1,3 +1,3 @@
# Test Stack
A stack for test/demo purposes.
@@ -116,7 +116,7 @@ echo "deploy create output file test: passed"
# Note we also turn up the log level on the scheduler in order to diagnose placement errors
# See logs like: kubectl -n kube-system logs kube-scheduler-laconic-f185cd245d8dba98-control-plane
kind_config_file=${test_deployment_dir}/kind-config.yml
cat << EOF > ${kind_config_file}
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
@@ -14,7 +14,7 @@ chain_id="laconic_81337-6"
node_moniker_prefix="node"
echo "Deleting any existing network directories..."
for (( i=1 ; i<=$node_count ; i++ ));
do
node_network_dir=${node_dir_prefix}${i}
if [[ -d $node_network_dir ]]; then
@@ -38,7 +38,7 @@ do
done
echo "Initializing ${node_count} nodes networks..."
for (( i=1 ; i<=$node_count ; i++ ));
do
node_network_dir=${node_dir_prefix}${i}
node_moniker=${node_moniker_prefix}${i}
@@ -47,7 +47,7 @@ do
done
echo "Joining ${node_count} nodes to the network..."
for (( i=1 ; i<=$node_count ; i++ ));
do
node_network_dir=${node_dir_prefix}${i}
node_moniker=${node_moniker_prefix}${i}
@@ -15,7 +15,7 @@ echo "Test version command"
reported_version_string=$( $TEST_TARGET_SO version )
echo "Version reported is: ${reported_version_string}"
echo "Cloning repositories into: $CERC_REPO_BASE_DIR"
$TEST_TARGET_SO --stack mainnet-eth setup-repositories
$TEST_TARGET_SO --stack mainnet-eth build-containers
$TEST_TARGET_SO --stack mainnet-eth deploy init --output mainnet-eth-spec.yml
$TEST_TARGET_SO deploy create --spec-file mainnet-eth-spec.yml --deployment-dir $DEPLOYMENT_DIR
@@ -3,4 +3,3 @@ extend-ignore = E203
exclude = .git,__pycache__,docs/source/conf.py,old,build,dist,venv
max-complexity = 25
max-line-length = 132