Compare commits

...

30 Commits

Author SHA1 Message Date
gitea_admin ce966f1baa Update README.md
Publish / Build and publish (push) Successful in 30s
Deploy Test / Run deploy test suite (push) Successful in 2m50s
Smoke Test / Run basic test suite (push) Successful in 3m12s
2023-06-20 15:23:18 +00:00
gitea_admin db4728a9e3 Update README.md
Publish / Build and publish (push) Successful in 31s
Deploy Test / Run deploy test suite (push) Successful in 2m54s
Smoke Test / Run basic test suite (push) Successful in 3m16s
2023-06-20 15:16:46 +00:00
Zach 7ca7bcc952
Cloud init scripts for user/dev mode (#430)
* cloud init install

* add dev mode script + description

* instructions
2023-06-20 10:09:30 -04:00
Nabarun Gogoi 32f8d65bb8
Update mobymask-v2 stack with lighthouse-cli and branch checkout feature (#425)
* Update optimism stack yml for lighthouse-cli

* Use branch checkout feature in mobymask stack
2023-06-07 18:48:59 +05:30
David Boreham d19b9a65b9 Fix typo 2023-06-05 21:59:42 -06:00
David Boreham 98e1d120cc
Add missing lighthouse-cli container to pocket stack (#424)
Co-authored-by: David Boreham <david@bozemanpas.com>
2023-06-05 21:08:05 -06:00
Thomas E Lackey 26ff7a969c
Fix plugeth build. (#423) 2023-06-05 21:10:17 -05:00
Thomas E Lackey a8e198ad55
Allow configuring the number of statediff workers. (#422)
* Allow configuring the number of statediff workers.

* Leave logging alone
2023-06-05 18:16:42 -05:00
David Boreham f1a626ddf5
build local lighthouse cli (#420)
* Build lcli locally

* Pull lighthouse repo

* Enable portable lcli build

* Update ldcli options

* Add lcli container to fixturenet-eth stack

* Include --eth1-block-hash

---------

Co-authored-by: David Boreham <david@bozemanpas.com>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
2023-06-05 16:54:22 -05:00
Roy Crihfield ff616db4ad
Updates for running IPLD-ETH CI tests (#414)
* random nits

* geth - visibility of migration status

* forward CERC_RUN_STATEDIFF to geth container

* fix ipld-eth-server vars

* fix fixturenet-eth-loaded stack

* fixturenet geth genesis - include mergeNetsplitBlock

* forward CERC_STATEDIFF_DB_GOOSE_MIN_VER to env file

* add TAG_SUFFIX arg to lighthouse build

  intended to avoid sporadic failures when running lcli on github CI runners, likely related to non-portable builds
2023-05-31 03:10:58 -05:00
David Boreham 9880b48b78
Add foundry to fixturenet-plugeth-tx (#418) 2023-05-30 23:51:01 -06:00
Thomas E Lackey 23a336020c
Make a separate lighthouse container for the plugeth fixturenet. (#412)
* Make a separate lighthouse container for the plugeth fixturenet.
2023-05-26 16:57:15 -05:00
Zach 605db8a4d2
Update pokt README (#413)
* Update pokt README

* split cmds from responses
2023-05-26 10:37:59 -04:00
Thomas E Lackey 6ec55ba460
Add a plugeth-based version of the fixturenet (#411)
* plugeth version of the fixturenet

* Use pre-built plugeth.
2023-05-25 11:21:08 -05:00
David Boreham 938f51ef8c
Specify chunker stack branches (#410)
* Specify v5 branches

* Fix logic for branch switch
2023-05-24 20:00:42 -06:00
David Boreham 6d620ba9c2
git branch in stack and on command line (#409)
* Support @branch notation in stack.yml

* Refactor and support branches directive
2023-05-24 19:49:26 -06:00
erikdies 0c4c128465
cleanup Options boilerplate (#402)
Co-authored-by: David Boreham <david@bozemanpass.com>
2023-05-24 18:02:25 -06:00
David Boreham 97c1ae1c43
Use upstream act_runner project (#408) 2023-05-24 18:01:49 -06:00
David Boreham ec6b5439f4
Support for git hosts other than github (#407)
* Update repository list file

* Add host part to repo name

* Allow git hosts other than github
2023-05-24 17:19:21 -06:00
David Boreham 1d8f252a51
Detect bad response from yarn info (#406) 2023-05-22 13:42:55 -06:00
David Boreham 161665ef72
Fix deploy commands (#404)
* Fix bugs

* Add test for deploy port command
2023-05-22 12:43:59 -06:00
David Boreham 9c5f6469ff
Allow docker buildkit to be enabled via env var (#403) 2023-05-22 11:38:34 -06:00
David Boreham 85225c72d7 Fix another typo 2023-05-21 15:43:15 -06:00
David Boreham 223d1171e8 Change test display name 2023-05-21 07:42:09 -06:00
David Boreham 1e38e16550 Fix typo 2023-05-21 07:40:22 -06:00
David Boreham dddae8cc7a
Dboreham/deploy volume control (#401)
* Implement volume control

* Deploy test

* Add test for volumes

* Enable CI for deploy test
2023-05-21 07:39:00 -06:00
Thomas E Lackey aa702737ef Fix 397 by pegging alpine version. 2023-05-19 11:26:09 -05:00
prathamesh0 c9155eafd2
Add restart policies to fixturenet-eth and fixturenet-optimism stacks (#396)
* Add restart policies for fixturenet-optimism stack containers


Former-commit-id: e749699188c733614423ccc7ef43525b9805e23d

* Add restart policies for fixturenet-eth stack containers


Former-commit-id: 716e132300d88dbe6121ed3968a9c78b561196ef

* Remove existing bootnode ENR directory on start
2023-05-19 13:46:39 +05:30
David Boreham 1ffc6b1687 Refactor deploy into click subcommands (#399)
Former-commit-id: cb58fdb58ce1686f4638946745830f391d820f4b
2023-05-18 17:01:46 -06:00
David Boreham 87c25dfb5e Fix up test stack (#398)
Former-commit-id: 088105c7829254fc8ff1f31b71d28fd916def7eb
2023-05-18 13:54:27 -06:00
71 changed files with 1025 additions and 391 deletions

View File

@ -0,0 +1,39 @@
name: Deploy Test
on:
pull_request:
branches: '*'
push:
branches:
- main
- ci-test
# Needed until we can incorporate docker startup into the executor container
env:
DOCKER_HOST: unix:///var/run/dind.sock
jobs:
test:
name: "Run deploy test suite"
runs-on: ubuntu-latest
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
- name: "Install Python"
uses: cerc-io/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: Start dockerd # Also needed until we can incorporate into the executor
run: |
dockerd -H $DOCKER_HOST --userland-proxy=false &
sleep 5
- name: "Run deploy tests"
run: ./tests/deploy/run-deploy-test.sh
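
For reference, the workflow's build-and-test steps can be reproduced locally (assuming a running Docker daemon and Python 3.8+ with pip; the dockerd startup step above is only needed inside the CI executor):

```bash
# Build the shiv package and run the deploy test suite, mirroring the workflow steps
pip install shiv
./scripts/create_build_tag_file.sh
./scripts/build_shiv_package.sh
./tests/deploy/run-deploy-test.sh
```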

View File

@@ -1,4 +1,4 @@
-name: Integration Test
+name: Smoke Test
 on:
   pull_request:

View File

@ -0,0 +1,29 @@
name: Deploy Test
on:
pull_request:
branches: '*'
push:
branches: '*'
jobs:
test:
name: "Run deploy test suite"
runs-on: ubuntu-latest
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
- name: "Install Python"
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: "Run deploy tests"
run: ./tests/deploy/run-deploy-test.sh

View File

@@ -1,4 +1,4 @@
-name: Test
+name: Smoke Test
 on:
   pull_request:

View File

@@ -1,5 +1,7 @@
 # Stack Orchestrator
 Stack Orchestrator allows building and deployment of a Laconic Stack on a single machine with minimial prerequisites. It is a Python3 CLI tool that runs on any OS with Python3 and Docker. The following diagram summarizes the relevant repositories in the Laconic Stack - and the relationship to Stack Orchestrator.
 ![The Stack](/docs/images/laconic-stack.png)

View File

@@ -90,7 +90,7 @@ def command(ctx, include, exclude, force_rebuild, extra_build_args):
         "CERC_CONTAINER_BASE_DIR": container_build_dir,
         "CERC_HOST_UID": f"{os.getuid()}",
         "CERC_HOST_GID": f"{os.getgid()}",
-        "DOCKER_BUILDKIT": "0"
+        "DOCKER_BUILDKIT": config("DOCKER_BUILDKIT", default="0")
     }
     container_build_env.update({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
     container_build_env.update({"CERC_FORCE_REBUILD": "true"} if force_rebuild else {})

View File

@@ -2,6 +2,7 @@ version: '3.7'
 services:
   fixturenet-eth-bootnode-geth:
+    restart: always
     hostname: fixturenet-eth-bootnode-geth
     env_file:
       - ../config/fixturenet-eth/fixturenet-eth.env
@@ -15,12 +16,13 @@ services:
       - "30303"
   fixturenet-eth-geth-1:
+    restart: always
     hostname: fixturenet-eth-geth-1
     cap_add:
       - SYS_PTRACE
     environment:
       CERC_REMOTE_DEBUG: "true"
-      CERC_RUN_STATEDIFF: "detect"
+      CERC_RUN_STATEDIFF: ${CERC_RUN_STATEDIFF:-detect}
       CERC_STATEDIFF_DB_NODE_ID: 1
       CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
     env_file:
@@ -42,6 +44,7 @@ services:
       - "6060"
   fixturenet-eth-geth-2:
+    restart: always
     hostname: fixturenet-eth-geth-2
     healthcheck:
       test: ["CMD", "nc", "-v", "localhost", "8545"]
@@ -60,12 +63,14 @@ services:
       - fixturenet_eth_geth_2_data:/root/ethdata
   fixturenet-eth-bootnode-lighthouse:
+    restart: always
     hostname: fixturenet-eth-bootnode-lighthouse
     environment:
       RUN_BOOTNODE: "true"
     image: cerc/fixturenet-eth-lighthouse:local
   fixturenet-eth-lighthouse-1:
+    restart: always
     hostname: fixturenet-eth-lighthouse-1
     healthcheck:
       test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8001/eth/v2/beacon/blocks/head"]
@@ -91,6 +96,7 @@ services:
       - "8001"
   fixturenet-eth-lighthouse-2:
+    restart: always
     hostname: fixturenet-eth-lighthouse-2
     healthcheck:
       test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8001/eth/v2/beacon/blocks/head"]

View File

@@ -1,10 +1,11 @@
-version: "3.2"
 services:
   laconicd:
     restart: unless-stopped
     image: cerc/laconicd:local
     command: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
     volumes:
+      # The cosmos-sdk node's database directory:
+      - laconicd-data:/root/.laconicd/data
       # TODO: look at folding these scripts into the container
       - ../config/fixturenet-laconicd/create-fixturenet.sh:/docker-entrypoint-scripts.d/create-fixturenet.sh
       - ../config/fixturenet-laconicd/export-mykey.sh:/docker-entrypoint-scripts.d/export-mykey.sh

View File

@@ -5,6 +5,7 @@ services:
   # Creates / updates the configuration for L1 contracts deployment
   # Deploys the L1 smart contracts (outputs to volume l1_deployment)
   fixturenet-optimism-contracts:
+    restart: on-failure
     hostname: fixturenet-optimism-contracts
     image: cerc/optimism-contracts:local
     env_file:
@@ -35,6 +36,7 @@ services:
   # Generates the config files required for L2 (outputs to volume l2_config)
   op-node-l2-config-gen:
+    restart: on-failure
     image: cerc/optimism-op-node:local
     depends_on:
       fixturenet-optimism-contracts:
@@ -54,6 +56,7 @@ services:
   # Initializes and runs the L2 execution client (outputs to volume l2_geth_data)
   op-geth:
+    restart: always
     image: cerc/optimism-l2geth:local
     depends_on:
       op-node-l2-config-gen:
@@ -76,6 +79,7 @@ services:
   # Runs the L2 consensus client (Sequencer node)
   op-node:
+    restart: always
     image: cerc/optimism-op-node:local
     depends_on:
       op-geth:
@@ -103,6 +107,7 @@ services:
   # Runs the batcher (takes transactions from the Sequencer and publishes them to L1)
   op-batcher:
+    restart: always
     image: cerc/optimism-op-batcher:local
     depends_on:
       op-node:
@@ -129,6 +134,7 @@ services:
   # Runs the proposer (periodically submits new state roots to L1)
   op-proposer:
+    restart: always
     image: cerc/optimism-op-proposer:local
     depends_on:
       op-node:

View File

@ -0,0 +1,129 @@
services:
fixturenet-eth-bootnode-geth:
restart: always
hostname: fixturenet-eth-bootnode-geth
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
environment:
RUN_BOOTNODE: "true"
image: cerc/fixturenet-plugeth-plugeth:local
volumes:
- fixturenet_plugeth_bootnode_geth_data:/root/ethdata
- ../config/fixturenet-plugeth/plugins:/root/ethdata/plugins
ports:
- "9898"
- "30303"
fixturenet-eth-geth-1:
restart: always
hostname: fixturenet-eth-geth-1
cap_add:
- SYS_PTRACE
environment:
CERC_REMOTE_DEBUG: "true"
CERC_RUN_STATEDIFF: "detect"
CERC_STATEDIFF_DB_NODE_ID: 1
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
image: cerc/fixturenet-plugeth-plugeth:local
volumes:
- fixturenet_plugeth_geth_1_data:/root/ethdata
- ../config/fixturenet-plugeth/plugins:/root/ethdata/plugins
healthcheck:
test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8545/"]
interval: 30s
timeout: 10s
retries: 10
start_period: 3s
depends_on:
- fixturenet-eth-bootnode-geth
ports:
- "8545"
- "40000"
- "6060"
fixturenet-eth-geth-2:
restart: always
hostname: fixturenet-eth-geth-2
healthcheck:
test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8545/"]
interval: 30s
timeout: 10s
retries: 10
start_period: 3s
environment:
CERC_KEEP_RUNNING_AFTER_GETH_EXIT: "true"
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
image: cerc/fixturenet-plugeth-plugeth:local
depends_on:
- fixturenet-eth-bootnode-geth
volumes:
- fixturenet_plugeth_geth_2_data:/root/ethdata
- ../config/fixturenet-plugeth/plugins:/root/ethdata/plugins
fixturenet-eth-bootnode-lighthouse:
restart: always
hostname: fixturenet-eth-bootnode-lighthouse
environment:
RUN_BOOTNODE: "true"
image: cerc/fixturenet-plugeth-lighthouse:local
fixturenet-eth-lighthouse-1:
restart: always
hostname: fixturenet-eth-lighthouse-1
healthcheck:
test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8001/eth/v2/beacon/blocks/head"]
interval: 30s
timeout: 10s
retries: 10
start_period: 30s
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
environment:
NODE_NUMBER: "1"
ETH1_ENDPOINT: "http://fixturenet-eth-geth-1:8545"
EXECUTION_ENDPOINT: "http://fixturenet-eth-geth-1:8551"
image: cerc/fixturenet-plugeth-lighthouse:local
volumes:
- fixturenet_plugeth_lighthouse_1_data:/opt/testnet/build/cl
depends_on:
fixturenet-eth-bootnode-lighthouse:
condition: service_started
fixturenet-eth-geth-1:
condition: service_healthy
ports:
- "8001"
fixturenet-eth-lighthouse-2:
restart: always
hostname: fixturenet-eth-lighthouse-2
healthcheck:
test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8001/eth/v2/beacon/blocks/head"]
interval: 30s
timeout: 10s
retries: 10
start_period: 30s
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
environment:
NODE_NUMBER: "2"
ETH1_ENDPOINT: "http://fixturenet-eth-geth-2:8545"
EXECUTION_ENDPOINT: "http://fixturenet-eth-geth-2:8551"
LIGHTHOUSE_GENESIS_STATE_URL: "http://fixturenet-eth-lighthouse-1:8001/eth/v2/debug/beacon/states/0"
image: cerc/fixturenet-plugeth-lighthouse:local
volumes:
- fixturenet_plugeth_lighthouse_2_data:/opt/testnet/build/cl
depends_on:
fixturenet-eth-bootnode-lighthouse:
condition: service_started
fixturenet-eth-geth-2:
condition: service_healthy
volumes:
fixturenet_plugeth_bootnode_geth_data:
fixturenet_plugeth_geth_1_data:
fixturenet_plugeth_geth_2_data:
fixturenet_plugeth_lighthouse_1_data:
fixturenet_plugeth_lighthouse_2_data:

View File

@@ -1,6 +1,7 @@
 # Add-on pod to include foundry tooling within a fixturenet
 services:
   foundry:
+    restart: always
     image: cerc/foundry:local
     command: ["while :; do sleep 600; done"]
     volumes:

View File

@@ -7,11 +7,9 @@ services:
         condition: service_healthy
     image: cerc/ipld-eth-server:local
     environment:
-      IPLD_SERVER_GRAPHQL: "true"
-      IPLD_POSTGRAPHILEPATH: http://graphql:5000
-      ETH_SERVER_HTTPPATH: 0.0.0.0:8081
-      ETH_SERVER_GRAPHQL: "true"
-      ETH_SERVER_GRAPHQLPATH: 0.0.0.0:8082
+      SERVER_HTTP_PATH: 0.0.0.0:8081
+      SERVER_GRAPHQL: "true"
+      SERVER_GRAPHQLPATH: 0.0.0.0:8082
       VDB_COMMAND: "serve"
       ETH_CHAIN_CONFIG: "/tmp/chain.json"
       DATABASE_NAME: cerc_testing

View File

@@ -1,8 +1,9 @@
 version: '3.2'
 services:
+  # Builds and serves the peer-test react-app
   peer-test-app:
-    # Builds and serves the peer-test react-app
+    restart: unless-stopped
    image: cerc/react-peer:local
     working_dir: /scripts
     env_file:

View File

@@ -1,7 +1,13 @@
-version: "3.2"
 services:
   test:
     image: cerc/test-container:local
     restart: always
+    environment:
+      CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
+    volumes:
+      - test-data:/var
     ports:
       - "80"
+volumes:
+  test-data:

View File

@@ -17,7 +17,8 @@ CERC_STATEDIFF_DB_PORT=5432
 CERC_STATEDIFF_DB_NAME="cerc_testing"
 CERC_STATEDIFF_DB_USER="vdbm"
 CERC_STATEDIFF_DB_PASSWORD="password"
-CERC_STATEDIFF_DB_GOOSE_MIN_VER=23
+CERC_STATEDIFF_DB_GOOSE_MIN_VER=${CERC_STATEDIFF_DB_GOOSE_MIN_VER:-18}
 CERC_STATEDIFF_DB_LOG_STATEMENTS="false"
+CERC_STATEDIFF_WORKERS=2
 CERC_GETH_VMODULE="statediff/*=5,rpc/*=5"

View File

@ -0,0 +1 @@
See: https://docs.plugeth.org/

View File

@@ -27,8 +27,8 @@ yarn_info_output=$(yarn info --json $versioned_target_package 2>/dev/null)
 # If it doesn't exist there will be no .data.dist.tarball element,
 # and jq will output the string "null"
 package_tarball=$(echo $yarn_info_output | jq -r .data.dist.tarball)
-if [[ $package_tarball == "null" ]]; then
-  echo "FATAL: Target package version ($versioned_target_package) not found" >&2
+if [[ "$yarn_info_output" == "" || $package_tarball == "null" ]]; then
+  echo "FATAL: Target package version ($versioned_target_package) not found (or bad npm auth token)" >&2
   exit 1
 fi
 # Code below parses out the values we need

View File

@@ -6,7 +6,7 @@ RUN go install github.com/go-delve/delve/cmd/dlv@latest
 FROM cerc/go-ethereum:local as geth
-FROM alpine:latest
+FROM alpine:3.17
 RUN apk add --no-cache python3 python3-dev py3-pip curl wget jq build-base gettext libintl openssl bash bind-tools postgresql-client
 COPY --from=delve /go/bin/dlv /usr/local/bin/
@@ -22,6 +22,18 @@ COPY run-el.sh /opt/testnet/run.sh
 RUN cd /opt/testnet && make genesis-el
 COPY --from=geth /usr/local/bin/geth /usr/local/bin/
+# Snag the genesis block info.
 RUN geth --datadir ~/ethdata init /opt/testnet/build/el/geth.json && rm -f ~/ethdata/geth/nodekey
+RUN cp -rp ~/ethdata ~/tmpeth && \
+    geth --datadir ~/tmpeth init /opt/testnet/build/el/geth.json && \
+    geth --datadir ~/tmpeth --http & \
+    sleep 5 && \
+    curl -q --location 'localhost:8545' \
+      --header 'Content-Type: application/json' \
+      --data '{ "jsonrpc": "2.0", "id": 14, "method": "eth_getBlockByNumber", "params": ["0x0", false] }' \
+      -o /opt/testnet/build/el/genesis_block.json && \
+    killall -9 geth && \
+    rm -rf ~/tmpeth
 ENTRYPOINT ["/opt/testnet/run.sh"]

View File

@@ -34,5 +34,7 @@ python3 /apps/el-gen/genesis_geth.py $tmp_dir/genesis-config.yaml | \
   jq ".config.istanbulBlock=$istanbul_block" | \
   jq ".config.berlinBlock=$berlin_block" | \
   jq ".config.londonBlock=$london_block" | \
-  jq ".config.mergeForkBlock=$merge_fork_block" > ../build/el/geth.json
+  jq ".config.mergeForkBlock=$merge_fork_block" | \
+  jq ".config.mergeNetsplitBlock=$merge_fork_block" \
+  > ../build/el/geth.json
 python3 ../accounts/mnemonic_to_csv.py $tmp_dir/genesis-config.yaml > ../build/el/accounts.csv

View File

@@ -64,8 +64,8 @@ else
   STATEDIFF_OPTS=""
   if [ "$CERC_RUN_STATEDIFF" == "true" ]; then
     ready=0
+    echo "Waiting for statediff DB..."
     while [ $ready -eq 0 ]; do
-      echo "Waiting for statediff DB..."
       sleep 1
       export PGPASSWORD="$CERC_STATEDIFF_DB_PASSWORD"
       result=$(psql -h "$CERC_STATEDIFF_DB_HOST" \
@@ -73,9 +73,13 @@ else
         -U "$CERC_STATEDIFF_DB_USER" \
         -d "$CERC_STATEDIFF_DB_NAME" \
         -t -c 'select max(version_id) from goose_db_version;' 2>/dev/null | awk '{ print $1 }')
-      if [ -n "$result" ] && [ $result -ge $CERC_STATEDIFF_DB_GOOSE_MIN_VER ]; then
+      if [ -n "$result" ]; then
         echo "DB ready..."
-        ready=1
+        if [ $result -ge $CERC_STATEDIFF_DB_GOOSE_MIN_VER ]; then
+          ready=1
+        else
+          echo "DB not at required version (want $CERC_STATEDIFF_DB_GOOSE_MIN_VER, have $result)"
+        fi
       fi
     done
     STATEDIFF_OPTS="--statediff=true \
@@ -88,6 +92,7 @@ else
       --statediff.db.logstatements=${CERC_STATEDIFF_DB_LOG_STATEMENTS:-false} \
      --statediff.db.copyfrom=${CERC_STATEDIFF_DB_COPY_FROM:-true} \
       --statediff.waitforsync=true \
+      --statediff.workers=${CERC_STATEDIFF_WORKERS:-1} \
       --statediff.writing=true"
   fi
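
The new --statediff.workers flag is driven by CERC_STATEDIFF_WORKERS, which defaults to 1 in this script and is set to 2 in the fixturenet-eth env file; a minimal sketch of the mechanism, with an illustrative override value:

```bash
# Choose the statediff worker count (defaults: 2 in fixturenet-eth.env, 1 in run-el.sh)
CERC_STATEDIFF_WORKERS=4
# run-el.sh folds the value into geth's startup flags:
STATEDIFF_OPTS="--statediff.workers=${CERC_STATEDIFF_WORKERS:-1}"
echo $STATEDIFF_OPTS   # prints: --statediff.workers=4
```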

View File

@@ -1,4 +1,4 @@
-FROM sigp/lcli:v4.1.0 AS lcli
+FROM cerc/lighthouse-cli:local AS lcli
 FROM skylenet/ethereum-genesis-generator@sha256:210353ce7c898686bc5092f16c61220a76d357f51eff9c451e9ad1b9ad03d4d3 AS ethgen
 FROM cerc/fixturenet-eth-geth:local AS fnetgeth

View File

@@ -13,22 +13,26 @@ DEBUG_LEVEL=${1:-info}
 echo "Starting bootnode"
-if [ ! -f "$DATADIR/bootnode/enr.dat" ]; then
-    echo "Generating bootnode enr"
-    lcli \
-    generate-bootnode-enr \
-    --ip $ENR_IP \
-    --udp-port $BOOTNODE_PORT \
-    --tcp-port $BOOTNODE_PORT \
-    --genesis-fork-version $GENESIS_FORK_VERSION \
-    --output-dir $DATADIR/bootnode
-    bootnode_enr=`cat $DATADIR/bootnode/enr.dat`
-    echo "- $bootnode_enr" > $TESTNET_DIR/boot_enr.yaml
-    echo "Generated bootnode enr and written to $TESTNET_DIR/boot_enr.yaml"
+# Clean up existing ENR dir to avoid node connectivity issues on a restart
+if [ -d "$DATADIR/bootnode" ]; then
+    echo "Removing existing bootnode enr directory"
+    rm -r "$DATADIR/bootnode"
 fi
+echo "Generating bootnode enr"
+lcli \
+  generate-bootnode-enr \
+  --ip $ENR_IP \
+  --udp-port $BOOTNODE_PORT \
+  --tcp-port $BOOTNODE_PORT \
+  --genesis-fork-version $GENESIS_FORK_VERSION \
+  --output-dir $DATADIR/bootnode
+bootnode_enr=`cat $DATADIR/bootnode/enr.dat`
+echo "- $bootnode_enr" > $TESTNET_DIR/boot_enr.yaml
+echo "Generated bootnode enr and written to $TESTNET_DIR/boot_enr.yaml"
 exec lighthouse boot_node \
   --testnet-dir $TESTNET_DIR \
   --port $BOOTNODE_PORT \

View File

@@ -27,12 +27,14 @@ lcli \
   --deposit-contract-address $ETH1_DEPOSIT_CONTRACT_ADDRESS \
   --testnet-dir $TESTNET_DIR \
   --min-genesis-active-validator-count $GENESIS_VALIDATOR_COUNT \
+  --validator-count $VALIDATOR_COUNT \
   --min-genesis-time $GENESIS_TIME \
   --genesis-delay $GENESIS_DELAY \
   --genesis-fork-version $GENESIS_FORK_VERSION \
   --altair-fork-epoch $ALTAIR_FORK_EPOCH \
-  --merge-fork-epoch $MERGE_FORK_EPOCH \
+  --bellatrix-fork-epoch $MERGE_FORK_EPOCH \
   --eth1-id $ETH1_CHAIN_ID \
+  --eth1-block-hash $ETH1_BLOCK_HASH \
   --eth1-follow-distance 1 \
   --seconds-per-slot $SECONDS_PER_SLOT \
   --seconds-per-eth1-block $SECONDS_PER_ETH1_BLOCK \

View File

@@ -15,9 +15,6 @@ GENESIS_VALIDATOR_COUNT=${GENESIS_VALIDATOR_COUNT:-80}
 # Number of beacon_node instances that you intend to run
 BN_COUNT=${BN_COUNT:-2}
-# Number of validator clients
-VC_COUNT=${VC_COUNT:-$BN_COUNT}
 # Number of seconds to delay to start genesis block.
 # If started by a script this can be 0, if starting by hand
 # use something like 180.
@@ -45,7 +42,9 @@ VC_ARGS=${VC_ARGS:-""}
 EXECUTION_ENDPOINT=${EXECUTION_ENDPOINT:-http://localhost:8551}
 ETH1_GENESIS_JSON=${ETH1_GENESIS_JSON:-"../build/el/geth.json"}
+ETH1_GENESIS_BLOCK_JSON=${ETH1_GENESIS_BLOCK_JSON:-"../build/el/genesis_block.json"}
 ETH1_CONFIG_YAML=${ETH1_CONFIG_YAML:-"../el/el-config.yaml"}
+ETH1_BLOCK_HASH=${ETH1_BLOCK_HASH:-`cat $ETH1_GENESIS_BLOCK_JSON | jq -r '.result.hash' | cut -d'x' -f2`}
 ETH1_CHAIN_ID=${ETH1_CHAIN_ID:-`cat $ETH1_GENESIS_JSON | jq -r '.config.chainId'`}
 ETH1_TTD=${ETH1_TTD:-`cat $ETH1_GENESIS_JSON | jq -r '.config.terminalTotalDifficulty'`}

View File

@@ -4,7 +4,14 @@ if [ -n "$CERC_SCRIPT_DEBUG" ]; then
 fi
 MIN_BLOCK_NUM=${1:-${MIN_BLOCK_NUM:-3}}
-STATUSES=("geth to generate DAG" "beacon phase0" "beacon altair" "beacon bellatrix pre-merge" "beacon bellatrix merge" "block number $MIN_BLOCK_NUM")
+STATUSES=(
+  "geth to generate DAG"
+  "beacon phase0"
+  "beacon altair"
+  "beacon bellatrix pre-merge"
+  "beacon bellatrix merge"
+  "block number $MIN_BLOCK_NUM"
+)
 STATUS=0
 LIGHTHOUSE_BASE_URL=${LIGHTHOUSE_BASE_URL}
@@ -36,7 +43,6 @@ MARKER="."
 function inc_status() {
   echo " done"
-  MARKEr="."
   STATUS=$((STATUS + 1))
   if [ $STATUS -lt ${#STATUSES[@]} ]; then
     echo -n "Waiting for ${STATUSES[$STATUS]}..."

View File

@ -0,0 +1,27 @@
FROM skylenet/ethereum-genesis-generator@sha256:210353ce7c898686bc5092f16c61220a76d357f51eff9c451e9ad1b9ad03d4d3 AS ethgen
FROM golang:1.19.4-bullseye AS delve
RUN go install github.com/go-delve/delve/cmd/dlv@latest
FROM ubuntu:22.04
RUN apt-get update && \
apt-get install -y --no-install-recommends \
python3 python3-dev python3-pip curl wget jq gettext gettext-base openssl bash dnsutils postgresql-client make iproute2 netcat && \
rm -rf /var/lib/apt/lists/*
COPY --from=delve /go/bin/dlv /usr/local/bin/
COPY --from=ethgen /usr/local/bin/eth2-testnet-genesis /usr/local/bin/
COPY --from=ethgen /usr/local/bin/eth2-val-tools /usr/local/bin/
COPY --from=ethgen /apps /apps
RUN wget -O /usr/local/bin/geth https://github.com/openrelayxyz/plugeth/releases/download/v1.11.6.1.0/geth-linux-amd64-v1.1.0-v1.11.6.1.0 && chmod a+x /usr/local/bin/geth
RUN cd /apps/el-gen && pip3 install -r requirements.txt
COPY genesis /opt/testnet
COPY run-el.sh /opt/testnet/run.sh
RUN cd /opt/testnet && make genesis-el
RUN geth --datadir ~/ethdata init /opt/testnet/build/el/geth.json && rm -f ~/ethdata/geth/nodekey
ENTRYPOINT ["/opt/testnet/run.sh"]

View File

@ -0,0 +1,17 @@
#!/usr/bin/env bash
# Build cerc/fixturenet-eth-plugeth
set -x
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
if [ ! -d "${SCRIPT_DIR}/genesis" ]; then
cp -frp ${SCRIPT_DIR}/../cerc-fixturenet-eth-geth/genesis ${SCRIPT_DIR}/genesis
fi
if [ ! -d "${SCRIPT_DIR}/run-el.sh" ]; then
cp -fp ${SCRIPT_DIR}/../cerc-fixturenet-eth-geth/run-el.sh ${SCRIPT_DIR}/
fi
docker build -t cerc/fixturenet-eth-plugeth:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} $SCRIPT_DIR

View File

@ -0,0 +1,34 @@
FROM cerc/lighthouse-cli:local AS lcli
FROM skylenet/ethereum-genesis-generator@sha256:210353ce7c898686bc5092f16c61220a76d357f51eff9c451e9ad1b9ad03d4d3 AS ethgen
FROM cerc/fixturenet-plugeth-plugeth:local AS fnetgeth
FROM cerc/lighthouse:local
# cerc/lighthouse is based on Ubuntu
RUN apt-get update && apt-get -y upgrade && apt-get install -y --no-install-recommends \
libssl-dev ca-certificates \
curl socat iproute2 telnet wget jq \
build-essential python3 python3-dev python3-pip gettext-base \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
COPY genesis /opt/testnet
COPY run-cl.sh /opt/testnet/run.sh
COPY --from=lcli /usr/local/bin/lcli /usr/local/bin/lcli
COPY --from=ethgen /usr/local/bin/eth2-testnet-genesis /usr/local/bin/eth2-testnet-genesis
COPY --from=ethgen /usr/local/bin/eth2-val-tools /usr/local/bin/eth2-val-tools
COPY --from=ethgen /apps /apps
COPY --from=fnetgeth /opt/testnet/el /opt/testnet/el
COPY --from=fnetgeth /opt/testnet/build/el /opt/testnet/build/el
RUN cd /opt/testnet && make genesis-cl
# Work around some bugs in lcli where the default path is always used.
RUN mkdir -p /root/.lighthouse && cd /root/.lighthouse && ln -s /opt/testnet/build/cl/testnet
RUN mkdir -p /scripts
COPY scripts/status-internal.sh /scripts
COPY scripts/status.sh /scripts
ENTRYPOINT ["/opt/testnet/run.sh"]

View File

@ -0,0 +1,20 @@
#!/usr/bin/env bash
# Build cerc/fixturenet-plugeth-lighthouse
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
if [ ! -d "${SCRIPT_DIR}/genesis" ]; then
cp -frp ${SCRIPT_DIR}/../cerc-fixturenet-eth-lighthouse/genesis ${SCRIPT_DIR}/genesis
fi
if [ ! -e "${SCRIPT_DIR}/run-cl.sh" ]; then
cp -fp ${SCRIPT_DIR}/../cerc-fixturenet-eth-lighthouse/run-cl.sh ${SCRIPT_DIR}/
fi
if [ ! -d "${SCRIPT_DIR}/scripts" ]; then
cp -frp ${SCRIPT_DIR}/../cerc-fixturenet-eth-lighthouse/scripts ${SCRIPT_DIR}/
fi
docker build -t cerc/fixturenet-plugeth-lighthouse:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} $SCRIPT_DIR

View File

@ -0,0 +1,40 @@
FROM skylenet/ethereum-genesis-generator@sha256:210353ce7c898686bc5092f16c61220a76d357f51eff9c451e9ad1b9ad03d4d3 AS ethgen
FROM golang:1.19.4-bullseye AS delve
RUN go install github.com/go-delve/delve/cmd/dlv@latest
FROM ubuntu:22.04
RUN apt-get update && \
apt-get install -y --no-install-recommends \
python3 python3-dev python3-pip curl wget jq gettext gettext-base openssl bash dnsutils postgresql-client make iproute2 netcat psmisc && \
rm -rf /var/lib/apt/lists/*
COPY --from=delve /go/bin/dlv /usr/local/bin/
COPY --from=ethgen /usr/local/bin/eth2-testnet-genesis /usr/local/bin/
COPY --from=ethgen /usr/local/bin/eth2-val-tools /usr/local/bin/
COPY --from=ethgen /apps /apps
RUN wget -O /usr/local/bin/geth https://github.com/openrelayxyz/plugeth/releases/download/v1.11.6.1.0/geth-linux-amd64-v1.1.0-v1.11.6.1.0 && chmod a+x /usr/local/bin/geth
RUN cd /apps/el-gen && pip3 install -r requirements.txt
COPY genesis /opt/testnet
COPY run-el.sh /opt/testnet/run.sh
RUN cd /opt/testnet && make genesis-el
RUN geth --datadir ~/ethdata init /opt/testnet/build/el/geth.json && rm -f ~/ethdata/geth/nodekey
# Snag the genesis block info.
RUN geth --datadir ~/ethdata init /opt/testnet/build/el/geth.json && rm -f ~/ethdata/geth/nodekey
RUN cp -rp ~/ethdata ~/tmpeth && \
geth --datadir ~/tmpeth init /opt/testnet/build/el/geth.json && \
geth --datadir ~/tmpeth --http & \
sleep 5 && \
curl -q --location 'localhost:8545' \
--header 'Content-Type: application/json' \
--data '{ "jsonrpc": "2.0", "id": 14, "method": "eth_getBlockByNumber", "params": ["0x0", false] }' \
-o /opt/testnet/build/el/genesis_block.json && \
killall -9 geth && \
rm -rf ~/tmpeth
ENTRYPOINT ["/opt/testnet/run.sh"]

View File

@ -0,0 +1,17 @@
#!/usr/bin/env bash
# Build cerc/fixturenet-plugeth-plugeth
set -x
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
if [ ! -d "${SCRIPT_DIR}/genesis" ]; then
cp -frp ${SCRIPT_DIR}/../cerc-fixturenet-eth-geth/genesis ${SCRIPT_DIR}/genesis
fi
if [ ! -e "${SCRIPT_DIR}/run-el.sh" ]; then
cp -fp ${SCRIPT_DIR}/../cerc-fixturenet-eth-geth/run-el.sh ${SCRIPT_DIR}/
fi
docker build -t cerc/fixturenet-plugeth-plugeth:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} $SCRIPT_DIR

View File

@ -16,7 +16,7 @@ db-waitforsync=bool Should the statediff service start once geth has synced to
rpc-port=port change RPC port (default: 8545) rpc-port=port change RPC port (default: 8545)
rpc-addr=address change RPC address (default: 127.0.0.1) rpc-addr=address change RPC address (default: 127.0.0.1)
chain-id=number change chain ID (default: 99) chain-id=number change chain ID (default: 99)
extra-args=name extra args to pass to geth on startup extra-args=name extra args to pass to geth on startup
period=seconds use a block time instead of instamine period=seconds use a block time instead of instamine
accounts=number create multiple accounts (default: 1) accounts=number create multiple accounts (default: 1)
address=address eth address to add to genesis address=address eth address to add to genesis

View File

@ -0,0 +1,7 @@
#!/usr/bin/env bash
# Build cerc/lighthouse-cli
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
project_dir=${CERC_REPO_BASE_DIR}/lighthouse
docker build -t cerc/lighthouse-cli:local --build-arg PORTABLE=true -f ${project_dir}/lcli/Dockerfile ${build_command_args} ${project_dir}

View File

@@ -1,4 +1,5 @@
-FROM sigp/lighthouse:v4.1.0-modern
+ARG TAG_SUFFIX="-modern"
+FROM sigp/lighthouse:v4.1.0${TAG_SUFFIX}
 RUN apt-get update; apt-get install bash netcat curl less jq -y;
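
Since the base image tag suffix is now a build arg, the upstream image variant can be chosen at build time; a hypothetical manual build (the cerc/lighthouse:local tag matches the container name used elsewhere in this changeset):

```bash
# Default ("-modern") keeps the CPU-optimized upstream image; an empty suffix
# selects sigp/lighthouse:v4.1.0 instead, e.g. to avoid failures on older CI runners
docker build -t cerc/lighthouse:local --build-arg TAG_SUFFIX="" .
```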

View File

@@ -1,11 +1,14 @@
-#!/bin/sh
+#!/usr/bin/env bash
+set -e
+if [ -n "$CERC_SCRIPT_DEBUG" ]; then
+  set -x
+fi
 # Test if the container's filesystem is old (run previously) or new
 EXISTSFILENAME=/var/exists
 echo "Test container starting"
 if [[ -f "$EXISTSFILENAME" ]];
 then
-  TIMESTAMP = `cat $EXISTSFILENAME`
+  TIMESTAMP=`cat $EXISTSFILENAME`
   echo "Filesystem is old, created: $TIMESTAMP"
 else
   echo "Filesystem is fresh"

View File

@@ -1,35 +1,36 @@
-cerc-io/ipld-eth-db
-cerc-io/go-ethereum
-cerc-io/ipld-eth-server
-cerc-io/eth-statediff-service
-cerc-io/eth-statediff-fill-service
-cerc-io/ipld-eth-db-validator
-cerc-io/ipld-eth-beacon-indexer
-cerc-io/ipld-eth-beacon-db
-cerc-io/laconicd
-cerc-io/laconic-sdk
-cerc-io/laconic-registry-cli
-cerc-io/laconic-console
-cerc-io/mobymask-watcher
-cerc-io/watcher-ts
-cerc-io/mobymask-v2-watcher-ts
-cerc-io/MobyMask
-vulcanize/uniswap-watcher-ts
-vulcanize/uniswap-v3-info
-vulcanize/assemblyscript
-cerc-io/eth-probe
-cerc-io/tx-spammer
-dboreham/foundry
-lirewine/gem
-lirewine/debug
-lirewine/crypto
-lirewine/sdk
-telackey/act_runner
-ethereum-optimism/op-geth
-ethereum-optimism/optimism
-pokt-network/pocket-core
-pokt-network/pocket-core-deployments
-cerc-io/azimuth-watcher-ts
-cerc-io/ipld-eth-state-snapshot
-cerc-io/gelato-watcher-ts
-filecoin-project/lotus
+github.com/cerc-io/ipld-eth-db
+github.com/cerc-io/go-ethereum
+github.com/cerc-io/ipld-eth-server
+github.com/cerc-io/eth-statediff-service
+github.com/cerc-io/eth-statediff-fill-service
+github.com/cerc-io/ipld-eth-db-validator
+github.com/cerc-io/ipld-eth-beacon-indexer
+github.com/cerc-io/ipld-eth-beacon-db
+github.com/cerc-io/laconicd
+github.com/cerc-io/laconic-sdk
+github.com/cerc-io/laconic-registry-cli
+github.com/cerc-io/laconic-console
+github.com/cerc-io/mobymask-watcher
+github.com/cerc-io/watcher-ts
+github.com/cerc-io/mobymask-v2-watcher-ts
+github.com/cerc-io/MobyMask
+github.com/vulcanize/uniswap-watcher-ts
+github.com/vulcanize/uniswap-v3-info
+github.com/vulcanize/assemblyscript
+github.com/cerc-io/eth-probe
+github.com/cerc-io/tx-spammer
+github.com/dboreham/foundry
+github.com/lirewine/gem
+github.com/lirewine/debug
+github.com/lirewine/crypto
+github.com/lirewine/sdk
+github.com/telackey/act_runner
+github.com/ethereum-optimism/op-geth
+github.com/ethereum-optimism/optimism
+github.com/pokt-network/pocket-core
+github.com/pokt-network/pocket-core-deployments
+github.com/cerc-io/azimuth-watcher-ts
+github.com/cerc-io/ipld-eth-state-snapshot
+github.com/cerc-io/gelato-watcher-ts
+github.com/filecoin-project/lotus
+git.vdb.to/cerc-io/test-project

View File

@@ -1,7 +1,7 @@
 version: "1.0"
 name: azimuth
 repos:
-  - cerc-io/azimuth-watcher-ts
+  - github.com/cerc-io/azimuth-watcher-ts
 containers:
   - cerc/watcher-azimuth
 pods:

View File

@@ -2,10 +2,10 @@ version: "1.0"
 name: chain-chunker
 decription: "Stack to build containers for chain-chunker"
 repos:
-  - cerc-io/ipld-eth-state-snapshot
-  - cerc-io/eth-statediff-service
-  - cerc-io/ipld-eth-db
-  - cerc-io/ipld-eth-server
+  - github.com/cerc-io/ipld-eth-state-snapshot@v5
+  - github.com/cerc-io/eth-statediff-service@v5
+  - github.com/cerc-io/ipld-eth-db@v5
+  - github.com/cerc-io/ipld-eth-server@v5
 containers:
   - cerc/ipld-eth-state-snapshot
   - cerc/eth-statediff-service
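
For illustration, the @-suffix shown above pins a repository to a branch or tag, and per #409 the same notation is accepted on the command line; a hypothetical invocation:

```bash
# Clone one repository explicitly, checking out its v5 branch (fully-qualified name plus @branch)
laconic-so setup-repositories --include github.com/cerc-io/ipld-eth-db@v5
```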

View File

@@ -1,11 +1,11 @@
 version: "1.0"
 name: erc20-watcher
 repos:
-  - cerc-io/go-ethereum
-  - cerc-io/ipld-eth-db
-  - cerc-io/ipld-eth-server
-  - cerc-io/watcher-ts
-  - dboreham/foundry
+  - github.com/cerc-io/go-ethereum
+  - github.com/cerc-io/ipld-eth-db
+  - github.com/cerc-io/ipld-eth-server
+  - github.com/cerc-io/watcher-ts
+  - github.com/dboreham/foundry
 containers:
   - cerc/foundry
   - cerc/go-ethereum

View File

@@ -1,10 +1,10 @@
 version: "1.0"
 name: erc721-watcher
 repos:
-  - cerc-io/go-ethereum
-  - cerc-io/ipld-eth-db
-  - cerc-io/ipld-eth-server
-  - cerc-io/watcher-ts
+  - github.com/cerc-io/go-ethereum
+  - github.com/cerc-io/ipld-eth-db
+  - github.com/cerc-io/ipld-eth-server
+  - github.com/cerc-io/watcher-ts
 containers:
   - cerc/go-ethereum
   - cerc/go-ethereum-foundry

View File

@@ -2,13 +2,15 @@ version: "1.0"
 name: fixturenet-eth-loaded
 decription: "Loaded Ethereum Fixturenet"
 repos:
-  - cerc-io/go-ethereum
-  - cerc-io/tx-spammer
-  - cerc-io/ipld-eth-server
-  - cerc-io/ipld-eth-db
-  - cerc/go-ethereum
+  - github.com/cerc-io/go-ethereum
+  - github.com/cerc-io/tx-spammer
+  - github.com/cerc-io/ipld-eth-server
+  - github.com/cerc-io/ipld-eth-db
+  - github.com/cerc-io/lighthouse
 containers:
+  - cerc/go-ethereum
   - cerc/lighthouse
+  - cerc/lighthouse-cli
   - cerc/fixturenet-eth-geth
   - cerc/fixturenet-eth-lighthouse
   - cerc/ipld-eth-server

View File

@@ -2,12 +2,14 @@ version: "1.2"
 name: fixturenet-eth-tx
 decription: "Ethereum Fixturenet w/ tx-spammer"
 repos:
-  - cerc-io/go-ethereum
-  - cerc-io/tx-spammer
-  - dboreham/foundry
+  - github.com/cerc-io/go-ethereum
+  - github.com/cerc-io/tx-spammer
+  - github.com/dboreham/foundry
+  - github.com/cerc-io/lighthouse
 containers:
   - cerc/go-ethereum
   - cerc/lighthouse
+  - cerc/lighthouse-cli
   - cerc/fixturenet-eth-geth
   - cerc/fixturenet-eth-lighthouse
   - cerc/tx-spammer

View File

@@ -1,6 +1,6 @@
 # fixturenet-eth
-Instructions for deploying a local a geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator#user-mode)):
+Instructions for deploying a local a geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator)):
 ## Clone required repositories
@@ -66,7 +66,7 @@ It is not necessary to use them all at once, but a complete example follows:
 ```
 # Setup
-$ laconic-so setup-repositories --include cerc-io/go-ethereum,cerc-io/ipld-eth-db,cerc-io/ipld-eth-server,cerc-io/ipld-eth-beacon-db,cerc-io/ipld-eth-beacon-indexer,cerc-io/eth-probe,cerc-io/tx-spammer
+$ laconic-so setup-repositories --include github.com/cerc-io/go-ethereum,github.com/cerc-io/ipld-eth-db,github.com/cerc-io/ipld-eth-server,github.com/cerc-io/ipld-eth-beacon-db,github.com/cerc-io/ipld-eth-beacon-indexer,github.com/cerc-io/eth-probe,github.com/cerc-io/tx-spammer
 # Build
 $ laconic-so build-containers --include cerc/go-ethereum,cerc/lighthouse,cerc/fixturenet-eth-geth,cerc/fixturenet-eth-lighthouse,cerc/ipld-eth-db,cerc/ipld-eth-server,cerc/ipld-eth-beacon-db,cerc/ipld-eth-beacon-indexer,cerc/eth-probe,cerc/keycloak,cerc/tx-spammer

View File

@@ -2,11 +2,13 @@ version: "1.1"
 name: fixturenet-eth
 decription: "Ethereum Fixturenet"
 repos:
-  - cerc-io/go-ethereum
-  - dboreham/foundry
+  - github.com/cerc-io/go-ethereum
+  - github.com/cerc-io/lighthouse
+  - github.com/dboreham/foundry
 containers:
   - cerc/go-ethereum
   - cerc/lighthouse
+  - cerc/lighthouse-cli
   - cerc/fixturenet-eth-geth
   - cerc/fixturenet-eth-lighthouse
   - cerc/foundry

View File

@@ -2,14 +2,14 @@ version: "1.1"
 name: fixturenet-laconic-loaded
 description: "A full featured laconic fixturenet"
 repos:
-  - cerc-io/laconicd
-  - lirewine/debug
-  - lirewine/crypto
-  - lirewine/gem
-  - lirewine/sdk
-  - cerc-io/laconic-sdk
-  - cerc-io/laconic-registry-cli
-  - cerc-io/laconic-console
+  - github.com/cerc-io/laconicd
+  - github.com/lirewine/debug
+  - github.com/lirewine/crypto
+  - github.com/lirewine/gem
+  - github.com/lirewine/sdk
+  - github.com/cerc-io/laconic-sdk
+  - github.com/cerc-io/laconic-registry-cli
+  - github.com/cerc-io/laconic-console
 npms:
   - laconic-sdk
   - laconic-registry-cli

View File

@@ -2,9 +2,9 @@ version: "1.0"
 name: fixturenet-laconicd
 description: "A laconicd fixturenet"
 repos:
-  - cerc-io/laconicd
-  - cerc-io/laconic-sdk
-  - cerc-io/laconic-registry-cli
+  - github.com/cerc-io/laconicd
+  - github.com/cerc-io/laconic-sdk
+  - github.com/cerc-io/laconic-registry-cli
 npms:
   - laconic-sdk
   - laconic-registry-cli

View File

@@ -2,7 +2,7 @@ version: "1.0"
 name: fixturenet-lotus
 description: "A lotus fixturenet"
 repos:
-  - filecoin-project/lotus
+  - github.com/filecoin-project/lotus
 containers:
   - cerc/lotus
 pods:

View File

@@ -14,14 +14,6 @@ laconic-so --stack fixturenet-optimism setup-repositories
 # If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories mentioned below and re-run the command
 ```
-Checkout to the required versions and branches in repos:
-```bash
-# Optimism
-cd ~/cerc/optimism
-git checkout v1.0.4
-```
 Build the container images:
 ```bash

View File

@@ -9,19 +9,11 @@ Prerequisite: An L1 Ethereum RPC endpoint
 Clone required repositories:
 ```bash
-laconic-so --stack fixturenet-optimism setup-repositories --exclude cerc-io/go-ethereum
+laconic-so --stack fixturenet-optimism setup-repositories --exclude github.com/cerc-io/go-ethereum
 # If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories mentioned below and re-run the command
 ```
-Checkout to the required versions and branches in repos:
-```bash
-# Optimism
-cd ~/cerc/optimism
-git checkout v1.0.4
-```
 Build the container images:
 ```bash

View File

@@ -2,13 +2,15 @@ version: "1.0"
 name: fixturenet-optimism
 decription: "Optimism Fixturenet"
 repos:
-  - cerc-io/go-ethereum
-  - dboreham/foundry
-  - ethereum-optimism/optimism
-  - ethereum-optimism/op-geth
+  - github.com/cerc-io/go-ethereum
+  - github.com/cerc-io/lighthouse
+  - github.com/dboreham/foundry
+  - github.com/ethereum-optimism/optimism@v1.0.4
+  - github.com/ethereum-optimism/op-geth@v1.101105.2
 containers:
   - cerc/go-ethereum
   - cerc/lighthouse
+  - cerc/lighthouse-cli
   - cerc/fixturenet-eth-geth
   - cerc/fixturenet-eth-lighthouse
   - cerc/foundry

View File

@ -0,0 +1,19 @@
# fixturenet-plugeth-tx
A variation of `fixturenet-eth` that uses `plugeth` instead of `go-ethereum`.
See `stacks/fixturenet-eth/README.md` for more information.
## Containers
* cerc/lighthouse
* cerc/fixturenet-eth-plugeth
* cerc/fixturenet-eth-lighthouse
* cerc/tx-spammer
## Deploy the stack
```
$ laconic-so --stack fixturenet-plugeth-tx setup-repositories
$ laconic-so --stack fixturenet-plugeth-tx build-containers
$ laconic-so --stack fixturenet-plugeth-tx deploy up
```
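
Once the pods are up, progress can be checked the same way as for the geth-based fixturenet, assuming the lighthouse bootnode service name from the plugeth compose file above:

```
$ laconic-so --stack fixturenet-plugeth-tx deploy exec fixturenet-eth-bootnode-lighthouse /scripts/status-internal.sh
```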

View File

@ -0,0 +1,18 @@
version: "1.2"
name: fixturenet-plugeth-tx
decription: "plugeth Ethereum Fixturenet w/ tx-spammer"
repos:
- github.com/cerc-io/tx-spammer
- github.com/dboreham/foundry
- github.com/cerc-io/lighthouse
containers:
- cerc/lighthouse
- cerc/lighthouse-cli
- cerc/fixturenet-plugeth-plugeth
- cerc/fixturenet-plugeth-lighthouse
- cerc/tx-spammer
- cerc/foundry
pods:
- fixturenet-plugeth
- foundry
- tx-spammer

View File

@@ -1,41 +1,41 @@
 # Pocket Fixturenet
-Instructions for deploying a local single-node Pocket chain alongside a geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator.
+Instructions for deploying a local single-node Pocket chain alongside a geth + lighthouse blockchain "fixturenet" for development and testing purposes using Stack Orchestrator.
-## 1. Build Laconic Stack Orchestrator
-Build this fork of Laconic Stack Orchestrator which includes the fixturenet-pocket stack:
-```
-$ scripts/build_shiv_package.sh
-$ cd package
-$ mv laconic-so-{version} /usr/local/bin/laconic-so # Or move laconic-so to ~/bin or your favorite on-path directory
-```
-## 2. Clone required repositories
+## 1. Clone required repositories
 ```
 $ laconic-so --stack fixturenet-pocket setup-repositories
 ```
-## 3. Build the stack's containers
+## 2. Build the stack's containers
 ```
 $ laconic-so --stack fixturenet-pocket build-containers
 ```
-## 4. Deploy the stack
+## 3. Deploy the stack
 ```
 $ laconic-so --stack fixturenet-pocket deploy up
 ```
 It may take up to 10 minutes for the Eth Fixturenet to fully come online and start producing blocks.
-## 5. Check status
+## 4. Check status
 **Eth Fixturenet:**
 ```
 $ laconic-so --stack fixturenet-pocket deploy exec fixturenet-eth-bootnode-lighthouse /scripts/status-internal.sh
+```
+Response:
+```
 Waiting for geth to generate DAG.... done
 Waiting for beacon phase0.... done
 Waiting for beacon altair.... done
 Waiting for beacon bellatrix pre-merge.... done
 Waiting for beacon bellatrix merge.... done
 ```
 **Pocket node:**
 ```
 $ laconic-so --stack fixturenet-pocket deploy exec pocket "pocket query height"
+```
+Response:
+```
 2023/04/20 08:07:46 Initializing Pocket Datadir
 2023/04/20 08:07:46 datadir = /home/app/.pocket
 http://localhost:8081/v1/query/height
@@ -43,17 +43,18 @@ http://localhost:8081/v1/query/height
   "height": 4
 }
 ```
-or
+or see the full logs:
 ```
 $ laconic-so --stack fixturenet-pocket deploy logs pocket
 ```
-## 6. Send a relay request to Pocket node
+## 5. Send a relay request to Pocket node
 The Pocket node serves relay requests at `http://localhost:8081/v1/client/sim`
-**Example request:**
+Example request:
 ```
 $ curl -X POST --data '{"relay_network_id":"0021","payload":{"data":"{\"jsonrpc\": \"2.0\",\"id\": 1,\"method\": \"eth_blockNumber\",\"params\": []}","method":"POST","path":"","headers":{}}}' http://localhost:8081/v1/client/sim
 ```
-**Response:**
+Response:
 ```
 "{\"jsonrpc\":\"2.0\",\"id\":1,\"result\":\"0x6fe\"}\n"
 ```

View File

@@ -2,12 +2,14 @@ version: "1.0"
 name: fixturenet-pocket
 description: "A single node pocket chain that can serve relays from the geth-1 node in eth-fixturenet"
 repos:
-  - cerc-io/go-ethereum
-  - pokt-network/pocket-core
-  - pokt-network/pocket-core-deployments # contains the dockerfile
+  - github.com/cerc-io/go-ethereum
+  - github.com/cerc-io/lighthouse
+  - github.com/pokt-network/pocket-core
+  - github.com/pokt-network/pocket-core-deployments # contains the dockerfile
 containers:
   - cerc/go-ethereum
   - cerc/lighthouse
+  - cerc/lighthouse-cli
   - cerc/fixturenet-eth-geth
   - cerc/fixturenet-eth-lighthouse
   - cerc/pocket

View File

@@ -1,7 +1,7 @@
 version: "1.0"
 name: gelato
 repos:
-  - cerc-io/gelato-watcher-ts
+  - github.com/cerc-io/gelato-watcher-ts
 containers:
   - cerc/watcher-gelato
 pods:

View File

@@ -18,26 +18,6 @@ laconic-so --stack mobymask-v2 setup-repositories
 NOTE: If repositories already exist and are checked out to different versions, `setup-repositories` command will throw an error.
 For getting around this, the repositories mentioned below can be removed and then run the command.
-Checkout to the required versions and branches in repos
-```bash
-# watcher-ts
-cd ~/cerc/watcher-ts
-git checkout v0.2.41
-# mobymask-v2-watcher-ts
-cd ~/cerc/mobymask-v2-watcher-ts
-git checkout v0.1.1
-# MobyMask
-cd ~/cerc/MobyMask
-git checkout v0.1.2
-# Optimism
-cd ~/cerc/optimism
-git checkout v1.0.4
-```
 Build the container images:
 ```bash

View File

@@ -9,27 +9,11 @@ Prerequisite: L2 Optimism Geth and Node RPC endpoints
 Clone required repositories:
 ```bash
-laconic-so --stack mobymask-v2 setup-repositories --include cerc-io/MobyMask,cerc-io/watcher-ts,cerc-io/mobymask-v2-watcher-ts
+laconic-so --stack mobymask-v2 setup-repositories --include github.com/cerc-io/MobyMask,github.com/cerc-io/watcher-ts,github.com/cerc-io/mobymask-v2-watcher-ts
 # If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories mentioned below and re-run the command
 ```
-Checkout to the required versions and branches in repos:
-```bash
-# watcher-ts
-cd ~/cerc/watcher-ts
-git checkout v0.2.41
-# mobymask-v2-watcher-ts
-cd ~/cerc/mobymask-v2-watcher-ts
-git checkout v0.1.1
-# MobyMask
-cd ~/cerc/MobyMask
-git checkout v0.1.2
-```
 Build the container images:
 ```bash


@@ -1,22 +1,25 @@
 version: "1.0"
 name: mobymask-v2
 repos:
-- cerc-io/go-ethereum
-- dboreham/foundry
-- ethereum-optimism/optimism
-- ethereum-optimism/op-geth
-- cerc-io/watcher-ts
-- cerc-io/mobymask-v2-watcher-ts
-- cerc-io/MobyMask
+- github.com/cerc-io/go-ethereum
+- github.com/cerc-io/lighthouse
+- github.com/dboreham/foundry
+- github.com/ethereum-optimism/optimism@v1.0.4
+- github.com/ethereum-optimism/op-geth@v1.101105.2
+- github.com/cerc-io/watcher-ts@v0.2.43
+- github.com/cerc-io/mobymask-v2-watcher-ts@v0.1.1
+- github.com/cerc-io/MobyMask@v0.1.2
 containers:
 - cerc/go-ethereum
 - cerc/lighthouse
+- cerc/lighthouse-cli
 - cerc/fixturenet-eth-geth
 - cerc/fixturenet-eth-lighthouse
 - cerc/foundry
 - cerc/optimism-contracts
 - cerc/optimism-l2geth
 - cerc/optimism-op-batcher
+- cerc/optimism-op-proposer
 - cerc/optimism-op-node
 - cerc/watcher-ts
 - cerc/watcher-mobymask-v2


@@ -14,7 +14,7 @@ This demo has been tested on a `Ubuntu 22.04 LTS` machine with `8GB` of RAM
 Clone required repositories:
 ```bash
-laconic-so --stack mobymask-v2 setup-repositories --include cerc-io/MobyMask,cerc-io/watcher-ts,cerc-io/mobymask-v2-watcher-ts
+laconic-so --stack mobymask-v2 setup-repositories --include github.com/cerc-io/MobyMask,github.com/cerc-io/watcher-ts,github.com/cerc-io/mobymask-v2-watcher-ts
 # This will clone the required repositories at ~/cerc
 # If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories mentioned in the next step and re-run the command
@@ -30,22 +30,6 @@ Clone required repositories:
 # 100%|##############################################################################################################################################| 1.41k/1.41k [00:18<00:00, 76.4B/s]
 ```
-Checkout to the required versions and branches in repos:
-```bash
-# watcher-ts
-cd ~/cerc/watcher-ts
-git checkout v0.2.41
-# mobymask-v2-watcher-ts
-cd ~/cerc/mobymask-v2-watcher-ts
-git checkout v0.1.1
-# MobyMask
-cd ~/cerc/MobyMask
-git checkout v0.1.2
-```
 Build the container images:
 ```bash


@@ -11,7 +11,7 @@ This deployment expects that ipld-eth-server's endpoints are available on the lo
 ## Clone required repositories
 ```
-$ laconic-so setup-repositories --include cerc-io/watcher-ts
+$ laconic-so setup-repositories --include github.com/cerc-io/watcher-ts
 ```
 ## Build the watcher container


@@ -1,7 +1,7 @@
 version: "1.0"
 name: mobymask-watcher
 repos:
-- cerc-io/watcher-ts/v0.2.19
+- github.com/cerc-io/watcher-ts/v0.2.19
 containers:
 - cerc/watcher-mobymask
 pods:


@@ -2,8 +2,8 @@ version: "1.1"
 name: package-registry
 decription: "Local Package Registry"
 repos:
-- cerc-io/hosting
-- telackey/act_runner
+- github.com/cerc-io/hosting
+- gitea.com/gitea/act_runner
 containers:
 - cerc/act-runner
 - cerc/act-runner-task-executor


@@ -2,7 +2,8 @@ version: "1.0"
 name: test
 description: "A test stack"
 repos:
-- cerc-io/laconicd
+- github.com/cerc-io/laconicd
+- git.vdb.to/cerc-io/test-project@test-branch
 containers:
 - cerc/test-container
 pods:


@@ -1,8 +1,8 @@
 version: "1.0"
 name: uniswap-v3
 repos:
-- vulcanize/uniswap-watcher-ts
-- vulcanize/uniswap-v3-info
+- github.com/vulcanize/uniswap-watcher-ts
+- github.com/vulcanize/uniswap-v3-info
 containers:
 - cerc/watcher-uniswap-v3
 - cerc/uniswap-v3-info


@@ -28,108 +28,146 @@ import importlib.resources
 from pathlib import Path
 from .util import include_exclude_check, get_parsed_stack_config

+class DeployCommandContext(object):
+    def __init__(self, cluster_context, docker):
+        self.cluster_context = cluster_context
+        self.docker = docker
+
+
-@click.command()
+@click.group()
 @click.option("--include", help="only start these components")
 @click.option("--exclude", help="don\'t start these components")
 @click.option("--env-file", help="env file to be used")
 @click.option("--cluster", help="specify a non-default cluster name")
-@click.argument('command', required=True) # help: command: up|down|ps
-@click.argument('extra_args', nargs=-1) # help: command: up|down|ps <service1> <service2>
 @click.pass_context
-def command(ctx, include, exclude, env_file, cluster, command, extra_args):
+def command(ctx, include, exclude, env_file, cluster):
     '''deploy a stack'''
-    # TODO: implement option exclusion and command value constraint lost with the move from argparse to click
-    debug = ctx.obj.debug
-    quiet = ctx.obj.quiet
-    verbose = ctx.obj.verbose
-    local_stack = ctx.obj.local_stack
-    dry_run = ctx.obj.dry_run
-    stack = ctx.obj.stack
-    cluster_context = _make_cluster_context(ctx.obj, include, exclude, cluster)
+    cluster_context = _make_cluster_context(ctx.obj, include, exclude, cluster, env_file)
     # See: https://gabrieldemarmiesse.github.io/python-on-whales/sub-commands/compose/
-    docker = DockerClient(compose_files=cluster_context.compose_files, compose_project_name=cluster_context.cluster, compose_env_file=env_file)
-    extra_args_list = list(extra_args) or None
-
-    if not dry_run:
-        if command == "up":
-            container_exec_env = _make_runtime_env(ctx.obj)
-            for attr, value in container_exec_env.items():
-                os.environ[attr] = value
-            if verbose:
-                print(f"Running compose up with container_exec_env: {container_exec_env}, extra_args: {extra_args_list}")
-            for pre_start_command in cluster_context.pre_start_commands:
-                _run_command(ctx.obj, cluster_context.cluster, pre_start_command)
-            docker.compose.up(detach=True, services=extra_args_list)
-            for post_start_command in cluster_context.post_start_commands:
-                _run_command(ctx.obj, cluster_context.cluster, post_start_command)
-            _orchestrate_cluster_config(ctx.obj, cluster_context.config, docker, container_exec_env)
-        elif command == "down":
-            if verbose:
-                print("Running compose down")
-            timeout_arg = None
-            if extra_args_list:
-                timeout_arg=extra_args_list[0]
-            # Specify shutdown timeout (default 10s) to give services enough time to shutdown gracefully
-            docker.compose.down(timeout=timeout_arg)
-        elif command == "exec":
-            if extra_args_list is None or len(extra_args_list) < 2:
-                print("Usage: exec <service> <cmd>")
-                sys.exit(1)
-            service_name = extra_args_list[0]
-            command_to_exec = ["sh", "-c"] + extra_args_list[1:]
-            container_exec_env = _make_runtime_env(ctx.obj)
-            if verbose:
-                print(f"Running compose exec {service_name} {command_to_exec}")
-            try:
-                docker.compose.execute(service_name, command_to_exec, envs=container_exec_env)
-            except DockerException as error:
-                print(f"container command returned error exit status")
-        elif command == "port":
-            if extra_args_list is None or len(extra_args_list) < 2:
-                print("Usage: port <service> <exposed-port>")
-                sys.exit(1)
-            service_name = extra_args_list[0]
-            exposed_port = extra_args_list[1]
-            if verbose:
-                print(f"Running compose port {service_name} {exposed_port}")
-            mapped_port_data = docker.compose.port(service_name, exposed_port)
-            print(f"{mapped_port_data[0]}:{mapped_port_data[1]}")
-        elif command == "ps":
-            if verbose:
-                print("Running compose ps")
-            container_list = docker.compose.ps()
-            if len(container_list) > 0:
-                print("Running containers:")
-                for container in container_list:
-                    print(f"id: {container.id}, name: {container.name}, ports: ", end="")
-                    ports = container.network_settings.ports
-                    comma = ""
-                    for port_mapping in ports.keys():
-                        mapping = ports[port_mapping]
-                        print(comma, end="")
-                        if mapping is None:
-                            print(f"{port_mapping}", end="")
-                        else:
-                            print(f"{mapping[0]['HostIp']}:{mapping[0]['HostPort']}->{port_mapping}", end="")
-                        comma = ", "
-                    print()
-            else:
-                print("No containers running")
-        elif command == "logs":
-            if verbose:
-                print("Running compose logs")
-            logs_output = docker.compose.logs(services=extra_args_list if extra_args_list is not None else [])
-            print(logs_output)
+    docker = DockerClient(compose_files=cluster_context.compose_files, compose_project_name=cluster_context.cluster,
+                          compose_env_file=cluster_context.env_file)
+    ctx.obj = DeployCommandContext(cluster_context, docker)
+    # Subcommand is executed now, by the magic of click
+
+
+@command.command()
+@click.argument('extra_args', nargs=-1) # help: command: up <service1> <service2>
+@click.pass_context
+def up(ctx, extra_args):
+    global_context = ctx.parent.parent.obj
+    extra_args_list = list(extra_args) or None
+    if not global_context.dry_run:
+        cluster_context = ctx.obj.cluster_context
+        container_exec_env = _make_runtime_env(global_context)
+        for attr, value in container_exec_env.items():
+            os.environ[attr] = value
+        if global_context.verbose:
+            print(f"Running compose up with container_exec_env: {container_exec_env}, extra_args: {extra_args_list}")
+        for pre_start_command in cluster_context.pre_start_commands:
+            _run_command(global_context, cluster_context.cluster, pre_start_command)
+        ctx.obj.docker.compose.up(detach=True, services=extra_args_list)
+        for post_start_command in cluster_context.post_start_commands:
+            _run_command(global_context, cluster_context.cluster, post_start_command)
+        _orchestrate_cluster_config(global_context, cluster_context.config, ctx.obj.docker, container_exec_env)
+
+
+@command.command()
+@click.option("--delete-volumes/--preserve-volumes", default=False, help="delete data volumes")
+@click.argument('extra_args', nargs=-1) # help: command: down <service1> <service2>
+@click.pass_context
+def down(ctx, delete_volumes, extra_args):
+    global_context = ctx.parent.parent.obj
+    extra_args_list = list(extra_args) or None
+    if not global_context.dry_run:
+        if global_context.verbose:
+            print("Running compose down")
+        timeout_arg = None
+        if extra_args_list:
+            timeout_arg = extra_args_list[0]
+        # Specify shutdown timeout (default 10s) to give services enough time to shutdown gracefully
+        ctx.obj.docker.compose.down(timeout=timeout_arg, volumes=delete_volumes)
+
+
+@command.command()
+@click.pass_context
+def ps(ctx):
+    global_context = ctx.parent.parent.obj
+    if not global_context.dry_run:
+        if global_context.verbose:
+            print("Running compose ps")
+        container_list = ctx.obj.docker.compose.ps()
+        if len(container_list) > 0:
+            print("Running containers:")
+            for container in container_list:
+                print(f"id: {container.id}, name: {container.name}, ports: ", end="")
+                ports = container.network_settings.ports
+                comma = ""
+                for port_mapping in ports.keys():
+                    mapping = ports[port_mapping]
+                    print(comma, end="")
+                    if mapping is None:
+                        print(f"{port_mapping}", end="")
+                    else:
+                        print(f"{mapping[0]['HostIp']}:{mapping[0]['HostPort']}->{port_mapping}", end="")
+                    comma = ", "
+                print()
+        else:
+            print("No containers running")
+
+
+@command.command()
+@click.argument('extra_args', nargs=-1) # help: command: port <service1> <service2>
+@click.pass_context
+def port(ctx, extra_args):
+    global_context = ctx.parent.parent.obj
+    extra_args_list = list(extra_args) or None
+    if not global_context.dry_run:
+        if extra_args_list is None or len(extra_args_list) < 2:
+            print("Usage: port <service> <exposed-port>")
+            sys.exit(1)
+        service_name = extra_args_list[0]
+        exposed_port = extra_args_list[1]
+        if global_context.verbose:
+            print(f"Running compose port {service_name} {exposed_port}")
+        mapped_port_data = ctx.obj.docker.compose.port(service_name, exposed_port)
+        print(f"{mapped_port_data[0]}:{mapped_port_data[1]}")
+
+
+@command.command()
+@click.argument('extra_args', nargs=-1) # help: command: exec <service> <command>
+@click.pass_context
+def exec(ctx, extra_args):
+    global_context = ctx.parent.parent.obj
+    extra_args_list = list(extra_args) or None
+    if not global_context.dry_run:
+        if extra_args_list is None or len(extra_args_list) < 2:
+            print("Usage: exec <service> <cmd>")
+            sys.exit(1)
+        service_name = extra_args_list[0]
+        command_to_exec = ["sh", "-c"] + extra_args_list[1:]
+        container_exec_env = _make_runtime_env(global_context)
+        if global_context.verbose:
+            print(f"Running compose exec {service_name} {command_to_exec}")
+        try:
+            ctx.obj.docker.compose.execute(service_name, command_to_exec, envs=container_exec_env)
+        except DockerException as error:
+            print(f"container command returned error exit status")
+
+
+@command.command()
+@click.argument('extra_args', nargs=-1) # help: command: logs <service1> <service2>
+@click.pass_context
+def logs(ctx, extra_args):
+    global_context = ctx.parent.parent.obj
+    extra_args_list = list(extra_args) or None
+    if not global_context.dry_run:
+        if global_context.verbose:
+            print("Running compose logs")
+        logs_output = ctx.obj.docker.compose.logs(services=extra_args_list if extra_args_list is not None else [])
+        print(logs_output)

 def get_stack_status(ctx, stack):
@@ -137,7 +175,7 @@ def get_stack_status(ctx, stack):
     ctx_copy = copy.copy(ctx)
     ctx_copy.stack = stack
-    cluster_context = _make_cluster_context(ctx_copy, None, None, None)
+    cluster_context = _make_cluster_context(ctx_copy, None, None, None, None)
     docker = DockerClient(compose_files=cluster_context.compose_files, compose_project_name=cluster_context.cluster)
     # TODO: refactor to avoid duplicating this code above
     if ctx.verbose:
@@ -162,7 +200,7 @@ def _make_runtime_env(ctx):
     return container_exec_env

-def _make_cluster_context(ctx, include, exclude, cluster):
+def _make_cluster_context(ctx, include, exclude, cluster, env_file):
     if ctx.local_stack:
         dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
@@ -235,16 +273,17 @@ def _make_cluster_context(ctx, include, exclude, cluster):
     if ctx.verbose:
         print(f"files: {compose_files}")
-    return cluster_context(cluster, compose_files, pre_start_commands, post_start_commands, cluster_config)
+    return cluster_context(cluster, compose_files, pre_start_commands, post_start_commands, cluster_config, env_file)

 class cluster_context:
-    def __init__(self, cluster, compose_files, pre_start_commands, post_start_commands, config) -> None:
+    def __init__(self, cluster, compose_files, pre_start_commands, post_start_commands, config, env_file) -> None:
         self.cluster = cluster
         self.compose_files = compose_files
         self.pre_start_commands = pre_start_commands
         self.post_start_commands = post_start_commands
         self.config = config
+        self.env_file = env_file

 def _convert_to_new_format(old_pod_array):
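
The refactor above replaces the single `command` dispatcher (which switched on an `up|down|ps|...` argument) with a Click group: the group callback builds a `DeployCommandContext` and stores it in `ctx.obj`, and each subcommand reaches the global options through `ctx.parent.parent.obj`. A minimal, self-contained sketch of that pattern (not the project code; the names are illustrative):
```python
import click


class GlobalOptions:
    def __init__(self, verbose):
        self.verbose = verbose


class DeployContext:
    def __init__(self, cluster):
        self.cluster = cluster


@click.group()
@click.option("--verbose", is_flag=True)
@click.pass_context
def cli(ctx, verbose):
    # Top-level group: global options live in ctx.obj
    ctx.obj = GlobalOptions(verbose)


@cli.group()
@click.option("--cluster", default="default")
@click.pass_context
def deploy(ctx, cluster):
    # Sub-group: replaces ctx.obj for its own subcommands
    ctx.obj = DeployContext(cluster)


@deploy.command()
@click.argument("services", nargs=-1)
@click.pass_context
def up(ctx, services):
    # ctx.obj is the DeployContext; two parents up is the top-level GlobalOptions
    global_options = ctx.parent.parent.obj
    if global_options.verbose:
        click.echo(f"bringing up {list(services)} in cluster {ctx.obj.cluster}")


if __name__ == "__main__":
    cli()
```
Invoked as `python sketch.py --verbose deploy --cluster test up web db`, the `--verbose` flag set on the outer group remains visible inside `up`, which mirrors how the new `up`/`down`/`ps`/`port`/`exec`/`logs` subcommands read `dry_run` and `verbose`.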


@@ -52,15 +52,111 @@ def is_git_repo(path):
 #    )

+def branch_strip(s):
+    return s.split('@')[0]
+
+
+def host_and_path_for_repo(fully_qualified_repo):
+    repo_branch_split = fully_qualified_repo.split("@")
+    repo_branch = repo_branch_split[-1] if len(repo_branch_split) > 1 else None
+    repo_host_split = repo_branch_split[0].split("/")
+    # Legacy unqualified repo means github
+    if len(repo_host_split) == 2:
+        return "github.com", "/".join(repo_host_split), repo_branch
+    else:
+        if len(repo_host_split) == 3:
+            # First part is the host
+            return repo_host_split[0], "/".join(repo_host_split[1:]), repo_branch
+
+
+# TODO: fix the messy arg list here
+def process_repo(verbose, quiet, dry_run, pull, check_only, git_ssh, dev_root_path, branches_array, fully_qualified_repo):
+    repo_host, repo_path, repo_branch = host_and_path_for_repo(fully_qualified_repo)
+    git_ssh_prefix = f"git@{repo_host}:"
+    git_http_prefix = f"https://{repo_host}/"
+    full_github_repo_path = f"{git_ssh_prefix if git_ssh else git_http_prefix}{repo_path}"
+    repoName = repo_path.split("/")[-1]
+    full_filesystem_repo_path = os.path.join(dev_root_path, repoName)
+    is_present = os.path.isdir(full_filesystem_repo_path)
+    current_repo_branch = git.Repo(full_filesystem_repo_path).active_branch.name if is_present else None
+    if not quiet:
+        present_text = f"already exists active branch: {current_repo_branch}" if is_present \
+            else 'Needs to be fetched'
+        print(f"Checking: {full_filesystem_repo_path}: {present_text}")
+    # Quick check that it's actually a repo
+    if is_present:
+        if not is_git_repo(full_filesystem_repo_path):
+            print(f"Error: {full_filesystem_repo_path} does not contain a valid git repository")
+            sys.exit(1)
+        else:
+            if pull:
+                if verbose:
+                    print(f"Running git pull for {full_filesystem_repo_path}")
+                if not check_only:
+                    git_repo = git.Repo(full_filesystem_repo_path)
+                    origin = git_repo.remotes.origin
+                    origin.pull(progress=None if quiet else GitProgress())
+                else:
+                    print("(git pull skipped)")
+    if not is_present:
+        # Clone
+        if verbose:
+            print(f'Running git clone for {full_github_repo_path} into {full_filesystem_repo_path}')
+        if not dry_run:
+            git.Repo.clone_from(full_github_repo_path,
+                                full_filesystem_repo_path,
+                                progress=None if quiet else GitProgress())
+        else:
+            print("(git clone skipped)")
+    # Checkout the requested branch, if one was specified
+    branch_to_checkout = None
+    if branches_array:
+        # Find the current repo in the branches list
+        print("Checking")
+        for repo_branch in branches_array:
+            repo_branch_tuple = repo_branch.split(" ")
+            if repo_branch_tuple[0] == branch_strip(fully_qualified_repo):
+                # checkout specified branch
+                branch_to_checkout = repo_branch_tuple[1]
+    else:
+        branch_to_checkout = repo_branch
+
+    if branch_to_checkout:
+        if current_repo_branch is None or (current_repo_branch and (current_repo_branch != branch_to_checkout)):
+            if not quiet:
+                print(f"switching to branch {branch_to_checkout} in repo {repo_path}")
+            git_repo = git.Repo(full_filesystem_repo_path)
+            git_repo.git.checkout(branch_to_checkout)
+        else:
+            if verbose:
+                print(f"repo {repo_path} is already switched to branch {branch_to_checkout}")
+
+
+def parse_branches(branches_string):
+    if branches_string:
+        result_array = []
+        branches_directives = branches_string.split(",")
+        for branch_directive in branches_directives:
+            split_directive = branch_directive.split("@")
+            if len(split_directive) != 2:
+                print(f"Error: branch specified is not valid: {branch_directive}")
+                sys.exit(1)
+            result_array.append(f"{split_directive[0]} {split_directive[1]}")
+        return result_array
+    else:
+        return None
+
+
 @click.command()
 @click.option("--include", help="only clone these repositories")
 @click.option("--exclude", help="don\'t clone these repositories")
 @click.option('--git-ssh', is_flag=True, default=False)
 @click.option('--check-only', is_flag=True, default=False)
 @click.option('--pull', is_flag=True, default=False)
+@click.option("--branches", help="override branches for repositories")
 @click.option('--branches-file', help="checkout branches specified in this file")
 @click.pass_context
-def command(ctx, include, exclude, git_ssh, check_only, pull, branches_file):
+def command(ctx, include, exclude, git_ssh, check_only, pull, branches, branches_file):
     '''git clone the set of repositories required to build the complete system from source'''
     quiet = ctx.obj.quiet
@@ -68,16 +164,29 @@ def command(ctx, include, exclude, git_ssh, check_only, pull, branches_file):
     dry_run = ctx.obj.dry_run
     stack = ctx.obj.stack

-    branches = []
+    branches_array = []

     # TODO: branches file needs to be re-worked in the context of stacks
     if branches_file:
-        if verbose:
-            print(f"loading branches from: {branches_file}")
-        with open(branches_file) as branches_file_open:
-            branches = branches_file_open.read().splitlines()
-    if verbose:
-        print(f"Branches are: {branches}")
+        if branches:
+            print("Error: can't specify both --branches and --branches-file")
+            sys.exit(1)
+        else:
+            if verbose:
+                print(f"loading branches from: {branches_file}")
+            with open(branches_file) as branches_file_open:
+                branches_array = branches_file_open.read().splitlines()
+            print(f"branches: {branches}")
+
+    if branches:
+        if branches_file:
+            print("Error: can't specify both --branches and --branches-file")
+            sys.exit(1)
+        else:
+            branches_array = parse_branches(branches)
+
+    if branches_array and verbose:
+        print(f"Branches are: {branches_array}")

     local_stack = ctx.obj.local_stack
@@ -119,64 +228,15 @@ def command(ctx, include, exclude, git_ssh, check_only, pull, branches_file):
     repos = []
     for repo in repos_in_scope:
-        if include_exclude_check(repo, include, exclude):
+        if include_exclude_check(branch_strip(repo), include, exclude):
             repos.append(repo)
         else:
             if verbose:
                 print(f"Excluding: {repo}")

-    def process_repo(repo):
-        git_ssh_prefix = "git@github.com:"
-        git_http_prefix = "https://github.com/"
-        full_github_repo_path = f"{git_ssh_prefix if git_ssh else git_http_prefix}{repo}"
-        repoName = repo.split("/")[-1]
-        full_filesystem_repo_path = os.path.join(dev_root_path, repoName)
-        is_present = os.path.isdir(full_filesystem_repo_path)
-        if not quiet:
-            present_text = f"already exists active branch: {git.Repo(full_filesystem_repo_path).active_branch}" if is_present \
-                else 'Needs to be fetched'
-            print(f"Checking: {full_filesystem_repo_path}: {present_text}")
-        # Quick check that it's actually a repo
-        if is_present:
-            if not is_git_repo(full_filesystem_repo_path):
-                print(f"Error: {full_filesystem_repo_path} does not contain a valid git repository")
-                sys.exit(1)
-            else:
-                if pull:
-                    if verbose:
-                        print(f"Running git pull for {full_filesystem_repo_path}")
-                    if not check_only:
-                        git_repo = git.Repo(full_filesystem_repo_path)
-                        origin = git_repo.remotes.origin
-                        origin.pull(progress=None if quiet else GitProgress())
-                    else:
-                        print("(git pull skipped)")
-        if not is_present:
-            # Clone
-            if verbose:
-                print(f'Running git clone for {full_github_repo_path} into {full_filesystem_repo_path}')
-            if not dry_run:
-                git.Repo.clone_from(full_github_repo_path,
-                                    full_filesystem_repo_path,
-                                    progress=None if quiet else GitProgress())
-            else:
-                print("(git clone skipped)")
-        # Checkout the requested branch, if one was specified
-        if branches:
-            # Find the current repo in the branches list
-            for repo_branch in branches:
-                repo_branch_tuple = repo_branch.split(" ")
-                if repo_branch_tuple[0] == repo:
-                    # checkout specified branch
-                    branch_to_checkout = repo_branch_tuple[1]
-                    if verbose:
-                        print(f"checking out branch {branch_to_checkout} in repo {repo}")
-                    git_repo = git.Repo(full_filesystem_repo_path)
-                    git_repo.git.checkout(branch_to_checkout)
-
     for repo in repos:
         try:
-            process_repo(repo)
+            process_repo(verbose, quiet, dry_run, pull, check_only, git_ssh, dev_root_path, branches_array, repo)
         except git.exc.GitCommandError as error:
             print(f"\n******* git command returned error exit status:\n{error}")
             sys.exit(1)
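
The new `host_and_path_for_repo` accepts three spellings of a repository: the legacy `org/repo` shorthand (assumed to live on github.com), a fully qualified `host/org/repo`, and either form with an `@branch-or-tag` suffix. Below is a standalone restatement of that parsing rule with expected results for specs that appear in the stack files above; the expected tuples are inferred from the code shown in this diff, not taken from project tests:
```python
def host_and_path_for_repo(fully_qualified_repo):
    repo_branch_split = fully_qualified_repo.split("@")
    repo_branch = repo_branch_split[-1] if len(repo_branch_split) > 1 else None
    repo_host_split = repo_branch_split[0].split("/")
    if len(repo_host_split) == 2:
        # Legacy unqualified repo means github
        return "github.com", "/".join(repo_host_split), repo_branch
    if len(repo_host_split) == 3:
        # First part is the host
        return repo_host_split[0], "/".join(repo_host_split[1:]), repo_branch


# Legacy short form: defaults to github.com, no branch pin
assert host_and_path_for_repo("cerc-io/laconicd") == ("github.com", "cerc-io/laconicd", None)
# Fully qualified form with a version pin, as used in the mobymask-v2 stack
assert host_and_path_for_repo("github.com/cerc-io/watcher-ts@v0.2.43") == \
    ("github.com", "cerc-io/watcher-ts", "v0.2.43")
# Non-github host with a branch, as used in the test stack
assert host_and_path_for_repo("git.vdb.to/cerc-io/test-project@test-branch") == \
    ("git.vdb.to", "cerc-io/test-project", "test-branch")
```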

cli.py

@@ -14,6 +14,7 @@
 # along with this program. If not, see <http:#www.gnu.org/licenses/>.

 import click
+from dataclasses import dataclass

 from app import setup_repositories
 from app import build_containers
@@ -24,17 +25,15 @@ from app import version

 CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])

-# TODO: this seems kind of weird and heavy on boilerplate -- check it is
-# the best Python can do for us.
-class Options(object):
-    def __init__(self, stack, quiet, verbose, dry_run, local_stack, debug, continue_on_error):
-        self.stack = stack
-        self.quiet = quiet
-        self.verbose = verbose
-        self.dry_run = dry_run
-        self.local_stack = local_stack
-        self.debug = debug
-        self.continue_on_error = continue_on_error
+@dataclass
+class Options:
+    stack: str
+    quiet: bool = False
+    verbose: bool = False
+    dry_run: bool = False
+    local_stack: bool = False
+    debug: bool = False
+    continue_on_error: bool = False

 @click.group(context_settings=CONTEXT_SETTINGS)
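
The `@dataclass` form of `Options` is behaviourally equivalent to the removed hand-written class: the decorator generates the constructor (plus `__repr__` and `__eq__`) from the field list, with every flag defaulting to `False`. A small sketch of how it is constructed and used:
```python
from dataclasses import dataclass


@dataclass
class Options:
    stack: str
    quiet: bool = False
    verbose: bool = False
    dry_run: bool = False
    local_stack: bool = False
    debug: bool = False
    continue_on_error: bool = False


# The positional 'stack' field is required; flags keep their defaults unless overridden
opts = Options("fixturenet-eth", verbose=True)
assert opts.verbose and not opts.dry_run
print(opts)  # the generated __repr__ lists every field and its value
```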


@@ -0,0 +1,44 @@
#cloud-config
# Used for easily testing stacks-in-development on cloud platforms
# Assumes Ubuntu, edit the last line if targeting a different OS
# Once SSH'd into the server, run:
# `$ cd stack-orchestrator`
# `$ git checkout <branch>`
# `$ ./scripts/developer-mode-setup.sh`
# `$ source ./venv/bin/activate`
# Followed by the stack instructions.
package_update: true
package_upgrade: true
groups:
- docker
system_info:
  default_user:
    groups: [ docker ]
packages:
- apt-transport-https
- ca-certificates
- curl
- jq
- git
- gnupg
- lsb-release
- unattended-upgrades
- python3.10-venv
- pip
runcmd:
- mkdir -p /etc/apt/keyrings
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
- echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
- apt-get update
- apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- systemctl enable docker
- systemctl start docker
- git clone https://github.com/cerc-io/stack-orchestrator.git /home/ubuntu/stack-orchestrator


@@ -0,0 +1,35 @@
#cloud-config
# Used for installing Stack Orchestrator on platforms that support `cloud-init`
# Tested on Ubuntu
package_update: true
package_upgrade: true
groups:
- docker
system_info:
  default_user:
    groups: [ docker ]
packages:
- apt-transport-https
- ca-certificates
- curl
- jq
- git
- gnupg
- lsb-release
- unattended-upgrades
runcmd:
- mkdir -p /etc/apt/keyrings
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
- echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
- apt-get update
- apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- systemctl enable docker
- systemctl start docker
- curl -L -o /usr/local/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
- chmod +x /usr/local/bin/laconic-so
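
Any platform that accepts cloud-init user data can consume this file at instance creation time. As one hedged example (not part of this change), launching an EC2 instance with boto3 and passing the config as `UserData`; the AMI id, instance type, key name, and local file name below are placeholders:
```python
import boto3

# Hypothetical local copy of the cloud-config shown above
with open("user-mode-install.yaml") as f:
    user_data = f.read()

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # an Ubuntu AMI for your region (placeholder)
    InstanceType="t3.large",           # placeholder size
    MinCount=1,
    MaxCount=1,
    KeyName="my-ssh-key",              # placeholder key pair
    UserData=user_data,                # cloud-init applies this on first boot
)
```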


@@ -0,0 +1,56 @@
#!/usr/bin/env bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
# Dump environment variables for debugging
echo "Environment variables:"
env
# Test basic stack-orchestrator deploy
echo "Running stack-orchestrator deploy test"
# Bit of a hack, test the most recent package
TEST_TARGET_SO=$( ls -t1 ./package/laconic-so* | head -1 )
# Set a non-default repo dir
export CERC_REPO_BASE_DIR=~/stack-orchestrator-test/repo-base-dir
echo "Testing this package: $TEST_TARGET_SO"
echo "Test version command"
reported_version_string=$( $TEST_TARGET_SO version )
echo "Version reported is: ${reported_version_string}"
echo "Cloning repositories into: $CERC_REPO_BASE_DIR"
rm -rf $CERC_REPO_BASE_DIR
mkdir -p $CERC_REPO_BASE_DIR
# Test bringing the test container up and down
# with and without volume removal
$TEST_TARGET_SO --stack test setup-repositories
$TEST_TARGET_SO --stack test build-containers
$TEST_TARGET_SO --stack test deploy up
# Test deploy port command
deploy_port_output=$( $TEST_TARGET_SO --stack test deploy port test 80 )
if [[ "$deploy_port_output" =~ ^0.0.0.0:[1-9][0-9]* ]]; then
echo "Deploy port test: passed"
else
echo "Deploy port test: FAILED"
exit 1
fi
$TEST_TARGET_SO --stack test deploy down
# The next time we bring the container up the volume will be old (from the previous run above)
$TEST_TARGET_SO --stack test deploy up
log_output_1=$( $TEST_TARGET_SO --stack test deploy logs )
if [[ "$log_output_1" == *"Filesystem is old"* ]]; then
echo "Retain volumes test: passed"
else
echo "Retain volumes test: FAILED"
exit 1
fi
$TEST_TARGET_SO --stack test deploy down --delete-volumes
# Now when we bring the container up the volume will be new again
$TEST_TARGET_SO --stack test deploy up
log_output_2=$( $TEST_TARGET_SO --stack test deploy logs )
if [[ "$log_output_2" == *"Filesystem is fresh"* ]]; then
echo "Delete volumes test: passed"
else
echo "Delete volumes test: FAILED"
exit 1
fi
$TEST_TARGET_SO --stack test deploy down --delete-volumes
echo "Test passed"


@@ -3,7 +3,7 @@ set -e
 if [ -n "$CERC_SCRIPT_DEBUG" ]; then
 set -x
 fi
-set -e
 echo "Running stack-orchestrator Ethereum fixturenet test"
 # Bit of a hack, test the most recent package
 TEST_TARGET_SO=$( ls -t1 ./package/laconic-so* | head -1 )
@@ -15,7 +15,7 @@ reported_version_string=$( $TEST_TARGET_SO version )
 echo "Version reported is: ${reported_version_string}"
 echo "Cloning repositories into: $CERC_REPO_BASE_DIR"
 $TEST_TARGET_SO --stack fixturenet-eth setup-repositories
 $TEST_TARGET_SO --stack fixturenet-eth build-containers
 $TEST_TARGET_SO --stack fixturenet-eth deploy up
 # Verify that the fixturenet is up and running
 $TEST_TARGET_SO --stack fixturenet-eth deploy ps