Compare commits

...

323 Commits

Author SHA1 Message Date
gitea_admin ce966f1baa Update README.md
Publish / Build and publish (push) Successful in 30s
Deploy Test / Run deploy test suite (push) Successful in 2m50s
Smoke Test / Run basic test suite (push) Successful in 3m12s
2023-06-20 15:23:18 +00:00
gitea_admin db4728a9e3 Update README.md
Publish / Build and publish (push) Successful in 31s
Deploy Test / Run deploy test suite (push) Successful in 2m54s
Smoke Test / Run basic test suite (push) Successful in 3m16s
2023-06-20 15:16:46 +00:00
Zach 7ca7bcc952
Cloud init scripts for user/dev mode (#430)
* cloud init install

* add dev mode script + description

* instructions
2023-06-20 10:09:30 -04:00
Nabarun Gogoi 32f8d65bb8
Update mobymask-v2 stack with lighthouse-cli and branch checkout feature (#425)
* Update optimism stack yml for lighthouse-cli

* Use branch checkout feature in mobymask stack
2023-06-07 18:48:59 +05:30
David Boreham d19b9a65b9 Fix typo 2023-06-05 21:59:42 -06:00
David Boreham 98e1d120cc
Add missing lighthouse-cli container to pocket stack (#424)
Co-authored-by: David Boreham <david@bozemanpas.com>
2023-06-05 21:08:05 -06:00
Thomas E Lackey 26ff7a969c
Fix plugeth build. (#423) 2023-06-05 21:10:17 -05:00
Thomas E Lackey a8e198ad55
Allow configuring the number of statediff workers. (#422)
* Allow configuring the number of statediff workers.

* Leave logging alone
2023-06-05 18:16:42 -05:00
David Boreham f1a626ddf5
build local lighthouse cli (#420)
* Build lcli locally

* Pull lighthouse repo

* Enable portable lcli build

* Update lcli options

* Add lcli container to fixturenet-eth stack

* Include --eth1-block-hash

---------

Co-authored-by: David Boreham <david@bozemanpas.com>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
2023-06-05 16:54:22 -05:00
Roy Crihfield ff616db4ad
Updates for running IPLD-ETH CI tests (#414)
* random nits

* geth - visibility of migration status

* forward CERC_RUN_STATEDIFF to geth container

* fix ipld-eth-server vars

* fix fixturenet-eth-loaded stack

* fixturenet geth genesis - include mergeNetsplitBlock

* forward CERC_STATEDIFF_DB_GOOSE_MIN_VER to env file

* add TAG_SUFFIX arg to lighthouse build

  intended to avoid sporadic failures when running lcli on github CI runners, likely related to non-portable builds
2023-05-31 03:10:58 -05:00
David Boreham 9880b48b78
Add foundry to fixturenet-plugeth-tx (#418) 2023-05-30 23:51:01 -06:00
Thomas E Lackey 23a336020c
Make a separate lighthouse container for the plugeth fixturenet. (#412)
* Make a separate lighthouse container for the plugeth fixturenet.
2023-05-26 16:57:15 -05:00
Zach 605db8a4d2
Update pokt README (#413)
* Update pokt README

* split cmds from responses
2023-05-26 10:37:59 -04:00
Thomas E Lackey 6ec55ba460
Add a plugeth-based version of the fixturenet (#411)
* plugeth version of the fixturenet

* Use pre-built plugeth.
2023-05-25 11:21:08 -05:00
David Boreham 938f51ef8c
Specify chunker stack branches (#410)
* Specify v5 branches

* Fix logic for branch switch
2023-05-24 20:00:42 -06:00
David Boreham 6d620ba9c2
git branch in stack and on command line (#409)
* Support @branch notation in stack.yml

* Refactor and support branches directive
2023-05-24 19:49:26 -06:00
erikdies 0c4c128465
cleanup Options boilerplate (#402)
Co-authored-by: David Boreham <david@bozemanpass.com>
2023-05-24 18:02:25 -06:00
David Boreham 97c1ae1c43
Use upstream act_runner project (#408) 2023-05-24 18:01:49 -06:00
David Boreham ec6b5439f4
Support for git hosts other than github (#407)
* Update repository list file

* Add host part to repo name

* Allow git hosts other than github
2023-05-24 17:19:21 -06:00
David Boreham 1d8f252a51 Detect bad response from yarn info (#406) 2023-05-22 13:42:55 -06:00
Detect bad reponse from yarn info (#406) 2023-05-22 13:42:55 -06:00
David Boreham 161665ef72
Fix deploy commands (#404)
* Fix bugs

* Add test for deploy port command
2023-05-22 12:43:59 -06:00
David Boreham 9c5f6469ff
Allow docker buildkit to be enabled via env var (#403) 2023-05-22 11:38:34 -06:00
David Boreham 85225c72d7 Fix another typo 2023-05-21 15:43:15 -06:00
David Boreham 223d1171e8 Change test display name 2023-05-21 07:42:09 -06:00
David Boreham 1e38e16550 Fix typo 2023-05-21 07:40:22 -06:00
David Boreham dddae8cc7a
Dboreham/deploy volume control (#401)
* Implement volume control

* Deploy test

* Add test for volumes

* Enable CI for deploy test
2023-05-21 07:39:00 -06:00
Thomas E Lackey aa702737ef Fix 397 by pegging alpine version. 2023-05-19 11:26:09 -05:00
prathamesh0 c9155eafd2
Add restart policies to fixturenet-eth and fixturenet-optimism stacks (#396)
* Add restart policies for fixturenet-optimism stack containers


Former-commit-id: e749699188c733614423ccc7ef43525b9805e23d

* Add restart policies for fixturenet-eth stack containers


Former-commit-id: 716e132300d88dbe6121ed3968a9c78b561196ef

* Remove existing bootnode ENR directory on start
2023-05-19 13:46:39 +05:30
David Boreham 1ffc6b1687 Refactor deploy into click subcommands (#399)
Former-commit-id: cb58fdb58ce1686f4638946745830f391d820f4b
2023-05-18 17:01:46 -06:00
David Boreham 87c25dfb5e Fix up test stack (#398)
Former-commit-id: 088105c7829254fc8ff1f31b71d28fd916def7eb
2023-05-18 13:54:27 -06:00
Ian 0691c22db4 Lotus (#392)
* first commit

* manual peer connect

* add build to gitignore

* add shared volume

* connect to bootnode

* fix volume init bug

* todo generate genesis

* remove build dir

---------

Co-authored-by: iskay <ian@knowable.vc>
Former-commit-id: 5ecfcae5cc
2023-05-17 17:11:56 -04:00
prathamesh0 5c7d445500 Add a stack for Gelato watcher (#394)
* Add a stack for Gelato watcher

* Add option to create and use a state snapshot

* Add commands to create and import a state checkpoint

* Rename ipld-eth-server endpoint env variables

* Fix default env variable

Former-commit-id: 8b4b5deba8
2023-05-16 09:09:08 +05:30
David Boreham a93fa93d26 Small doc fix
Former-commit-id: d26dd4b531
2023-05-09 17:08:25 -06:00
David Boreham 1852d7d4c1 Chain chunker stack (#389)
* Fix bug in default container build flow

* Add convenience stack for chain-chunker

Former-commit-id: 3e78c321b0
2023-05-09 14:00:58 -06:00
Zach fce41994a3 match tokens (#388)
Former-commit-id: e5faeb9d3b
2023-05-09 15:23:56 -04:00
Zach a5d3d6bae7 Update laconicd-fixturenet.md (#386)
Former-commit-id: b6a0af4e95
2023-05-09 14:57:49 -04:00
Nabarun Gogoi 8add4671c0 Add environment variables for multiaddrs blacklist (#381)
* Add env variable for web apps config denyMultiaddrs

* Add watcher config option for blacklisted multiaddrs

* Update package versions

* Use provided domain for relay multiaddr in peer config

* Change delimiter while replacing deny multiaddrs list

---------

Co-authored-by: prathamesh0 <prathamesh.musale0@gmail.com>
Former-commit-id: b678a3ecb4
2023-05-05 13:32:19 +05:30
Marten O'Grady b1b1464205 Update CONTRIBUTING.md (#383)
Wrong output for Step 3 in Build A ZipApp.  Fixed it to what I just experienced while smoke testing.

Former-commit-id: bce604e4bb
2023-05-04 09:47:59 -04:00
Zach fbe901a0fb Merge pull request #382 from Escape613/patch-2
Update CONTRIBUTING.md

Former-commit-id: ff0a67f45f
2023-05-04 09:20:46 -04:00
Marten O'Grady a9558aa874 Update CONTRIBUTING.md
Missing ")" in Step #1 of INSTALL

Former-commit-id: f1bc8aa4e1
2023-05-04 09:16:14 -04:00
Nabarun Gogoi 960a24c96b Add stack for azimuth watchers with gateway-server (#379)
* Setup gateway-server with watchers

* Add js script to merge toml config files

* Remove individual watcher configs

* Add all azimuth watchers in stack

* Fix toml-js install

* Use env variables for ipld-eth-server endpoints

* Checkout to version tag in azimuth-watcher-ts repo

Former-commit-id: 5a94aed7f7
2023-05-04 15:35:04 +05:30
David Boreham c1e3f5674d Fixturenet pocket (#350)
* add fixturenet-gaia stack

* add fixturenet-pocket

* integrate with eth fixturenet

* separate out fixturenet-gaia

* use pocket-deployments Dockerfile

---------

Co-authored-by: iskay <ian@knowable.vc>
Co-authored-by: Ian <ikay@lakeheadu.ca>
Former-commit-id: b23b5ae3bf
2023-05-02 15:13:48 -06:00
prathamesh0 55e7d22e57 Upgrade to use latest lighthouse release (#378)
Former-commit-id: ed4f40118f
2023-05-02 13:18:29 +05:30
prathamesh0 3634a35479 Avoid persisting lighthouse bootnode ENR between restarts (#377)
Former-commit-id: cba2345af3
2023-05-02 12:14:48 +05:30
Zach 255a71fa4c Merge pull request #376 from cerc-io/fix-readme
Former-commit-id: 2c57fd2122
2023-04-29 14:57:13 -04:00
Zach 3751db8046 rm gerbil from doc
Former-commit-id: 97433a7bb5
2023-04-29 14:55:10 -04:00
zramsay 6bb1acc04f better direction to stacks
Former-commit-id: 993118deb4
2023-04-27 13:16:53 -04:00
Zach 9da47a2e45 Merge pull request #344 from cerc-io/console-docs
document the laconicd / sdk/ registry CLI / web console Stack

Former-commit-id: c712c181fc
2023-04-27 13:00:54 -04:00
David Boreham 8cdb9cee35 Remove gerbil builder container from build-support stack (#375)
Former-commit-id: 19e38d2a94
2023-04-27 10:54:11 -06:00
David Boreham f8306e6685 Add foundry to the fixturenet-eth-tx stack (#374)
Former-commit-id: af93743974
2023-04-27 10:52:37 -06:00
Zach feb5fe7bff Merge pull request #371 from cerc-io/zramsay-patch-1
optimism: on error, wait, then re-run 'deploy up'
Former-commit-id: b74e89fd3f
2023-04-27 12:34:04 -04:00
David Boreham b0770d7379 Remove >
Former-commit-id: 73419c341a
2023-04-27 10:22:21 -06:00
prathamesh0 4aecfcd780 Map op-batcher and op-proposer ports to host (#373)
Former-commit-id: f04b266a24
2023-04-27 18:26:58 +05:30
Thomas E Lackey 03f6d027f9 Minor script cleanup. (#372)
Former-commit-id: 323ca3b238
2023-04-26 23:26:50 -05:00
Zach 617228d0dc optimism: on error, wait, then re-run 'deploy up'
Former-commit-id: a4ff8f3dcb
2023-04-26 14:43:32 -04:00
Thomas E Lackey d8522211f4 Add script for exporting ethdb from fixturenet. (#370)
* Add script for exporting ethdb from fixturenet.

* Update README

* Script

Former-commit-id: 7a607c2994
2023-04-26 00:13:35 -05:00
prathamesh0 4ca185c753 Fix sample env in MobyMask app instructions (#369)
Former-commit-id: 6a11046ea5
2023-04-25 19:38:13 +05:30
prathamesh0 e004d93891 Add instructions to run MobyMask app with a watcher on network (#368)
* Remove unnecessary check on watcher endpoint

* Add instructions to run MobyMask app with a watcher on network

* Move watcher on network docs to a separate folder

* Add nginx config for watcher endpoint

* Add expected output logs

* Add sample nginx config for hosting the app

* Update instructions

Former-commit-id: 018950858b
2023-04-25 18:32:38 +05:30
prathamesh0 8f6703940a Upgrade Optimism (#367)
Former-commit-id: 8a054a979c
2023-04-25 15:52:38 +05:30
prathamesh0 cf0e0c5d94 Add an arg for shutdown timeout in deploy down command (#366)
Former-commit-id: 44cf57df9b
2023-04-25 11:51:49 +05:30
prathamesh0 7f3a33564a Upgrade Optimism and add op-proposer (#364)
* Use the latest stable optimism release

* Remove unnecessary repos from repo-list

* Add op-proposer service to fixturenet-optimism stack

* Add jq and bash to op-proposer image

* Update instructions

* Update op-batcher and op-geth commands

Former-commit-id: 988be0ef9a
2023-04-25 10:41:47 +05:30
Nabarun Gogoi 10337e77f6 Upgrade mobymask-ui package version for endorse member UI (#365)
Former-commit-id: d7ea874268
2023-04-24 17:55:17 +05:30
prathamesh0 7a1ec3f196 Fix mobymask contract deployment script (#362)
Former-commit-id: 0bc54b30e0
2023-04-21 17:27:53 +05:30
David Boreham 4beb889e9f Add DOCKER_HOST inheriting from the caller, to build environment (#360)
* Add DOCKER_HOST inheriting from the caller, to build environment

* Fix for env var not set

Former-commit-id: 9feff35f53
2023-04-20 17:30:54 -06:00
David Boreham 2b8eccf167 Add more debugging
Former-commit-id: e8ec090f1d
2023-04-20 16:58:13 -06:00
David Boreham f5acbd1db0 Dump image list for debugging
Former-commit-id: 0a08a7ecba
2023-04-20 16:47:40 -06:00
David Boreham f87f3d4765 Stop on error
Former-commit-id: 7c6b2543af
2023-04-20 16:26:19 -06:00
David Boreham d2dfcc813f Enable in-container docker
Former-commit-id: 074a3fe20e
2023-04-20 16:21:26 -06:00
prathamesh0 d6f829ee65 Add instructions to join MobyMask watcher p2p network (#346)
* Refactor L2 endpoint check to contract deployment script

* Add instructions to join to an existing watcher network

* Include mobymask-v2-watcher-ts in repositories setup

* Add a clean up section and expected outputs

* Add a troubleshooting section

* Use lxdao frontend

* Update instructions for updated UI

Former-commit-id: f78176a27f
2023-04-20 15:30:19 +05:30
prathamesh0 363a0b733f Fetch geth accounts using an exposed endpoint (#357)
* Fetch account creds served by geth service

* Use fetched account creds in mobymask-v2 stack

Former-commit-id: eb777b0b47
2023-04-20 15:12:59 +05:30
Nabarun Gogoi 3d75523d73 Upgrade web-app package versions to set custom relay node (#358)
Former-commit-id: 01499a3f05
2023-04-20 14:45:01 +05:30
Nabarun Gogoi 35dd30f877 Add container to mobymask-v2 stack for LXDAO mobymask-app (#347)
* Add container to stack for lxdao mobymask-app

* Remove shm_size

* Use cerc-io scoped alias for lxdao app package

* Change alias to @cerc-io/mobymask-ui-lxdao

Former-commit-id: 46b36c3cb6
2023-04-20 10:49:19 +05:30
David Boreham b3feab0592 Turn off CI job on push except for to main
Former-commit-id: 727fa67d8e
2023-04-19 21:32:48 -06:00
David Boreham e3e48ccbf3 Delete fixturenet-eth-test.yml
Former-commit-id: 5c53e3bedc
2023-04-19 20:56:19 -06:00
Zach df13b8f630 Merge pull request #320 from cerc-io/add-kubo-stack
Add kubo (IPFS) as a stack

Former-commit-id: 45cab0f33d
2023-04-19 21:48:55 -04:00
Zach 209d49f105 Update laconicd-fixturenet.md
Former-commit-id: 32c0830e77
2023-04-19 21:10:19 -04:00
David Boreham b7094c7e7f Merge branch 'main' of github.com:cerc-io/stack-orchestrator into main
Former-commit-id: c1ba7f0c1b
2023-04-19 18:28:39 -06:00
David Boreham fefbcf031c Put file in the right place
Former-commit-id: aadd1c15f0
2023-04-19 18:27:56 -06:00
David Boreham d98b02266b Add fixturenet-eth test (#356)
* Add fixturenet-eth test

Former-commit-id: d1cada5029
2023-04-19 18:25:49 -06:00
David Boreham beac97b842 Get branch name right
Former-commit-id: 8aed8ab8ae
2023-04-19 18:22:18 -06:00
David Boreham ea7c5109b8 Add fixturenet-eth test
Former-commit-id: db6da4b75a
2023-04-19 18:20:00 -06:00
David Boreham 6ce01a79ed Update base containers (#355)
* Update to Node18

* Update to latest stable lighthouse

Former-commit-id: a335ccde3a
2023-04-19 16:43:59 -06:00
David Boreham 2d5abdce45 Set context dir to the script dir to avoid permission errors (#354)
Former-commit-id: bde48b699d
2023-04-19 16:00:41 -06:00
David Boreham db1edd85e6 Catch and report git errors (#353)
Former-commit-id: 7c867171e4
2023-04-19 15:29:41 -06:00
David Boreham 55b2d3bd25 Update setuptools in case the version on the machine is old (#352)
Former-commit-id: 5ef37894ce
2023-04-19 15:16:34 -06:00
David Boreham a3b3ac18b1 Fix build script (#351)
Former-commit-id: c07113320b
2023-04-19 14:55:36 -06:00
Nabarun Gogoi 53fbc60f55 Use standalone mobymask-v2-watcher-ts for running watcher server (#327)
* Use standalone mobymask-v2-watcher-ts to run peer test

* Add watcher-ts image for running peer tests

* Run separate containers for peer ids generation and tests

* Wait for watcher to be up before starting peer-test-app

* Resolve peer-test-app compose file and remove setup-repositories for web-apps

Former-commit-id: c4002dcc5c
2023-04-19 13:48:51 +05:30
prathamesh0 4caae1d850 Gracefully shutdown optimism batcher and op-geth containers (#345)
* Gracefully shutdown optimism batcher and op-geth containers

* Remove unnecessary env export

Former-commit-id: c6e6122516
2023-04-19 12:48:38 +05:30
prathamesh0 3f79c2b811 [WIP] Handle restarts in fixturenet-eth stack (#324)
* Use mounted volumes for data in geth nodes

* Use mounted volumes for data in lighthouse nodes

* Avoid resetting genesis time in a lighthouse node on restart

* Mount parent datadir for lighthouse nodes

* Trap signals on shutdown and clean up in lighthouse nodes

* Allow stalled sync in lighthouse beacon nodes

* Gracefully shutdown geth nodes

* Add clean up instructions

* Gracefully shutdown lighthouse boot node

Former-commit-id: 3130af1615
2023-04-19 12:22:13 +05:30
zramsay 1629129cd5 the fix
Former-commit-id: e5b9b74b4c
2023-04-18 19:55:51 -04:00
zramsay feee38140d second pass
Former-commit-id: 07c2a01a58
2023-04-18 18:48:52 -04:00
zramsay ddf51e01a3 first pass
Former-commit-id: ca29e9cf0d
2023-04-18 18:01:54 -04:00
Nabarun Gogoi b0aeff50bb Package mobymask-v2 stack web-apps similar to laconic-console app (#310)
* Build MobyMask web-app at container build step

* Fix web-app start script to use env variables in config

* Replace variables in built web-app files

* Use published mobymask-ui package from gitea

* Use published react-peer/test-app from gitea

* Remove local gitea publish TODO

Former-commit-id: cf79f0de0a
2023-04-18 18:25:58 +05:30
David Boreham cb72e5c03f Simple implementation of LACONIC_HOSTED_ENDPOINT (#342)
Former-commit-id: 172300d7bd
2023-04-17 20:46:05 -06:00
David Boreham c7a4d3f4e7 Default token needs to be empty string (#341)
Former-commit-id: 99aa1fa27e
2023-04-17 13:53:16 -06:00
David Boreham c715e11a88 Use CERC_NPM_REGISTRY_URL everywhere (#340)
Former-commit-id: 39a54bc62a
2023-04-17 13:40:49 -06:00
David Boreham 7673de73bb Add github actions
Former-commit-id: fcbea7984f
2023-04-17 13:11:45 -06:00
David Boreham f537cdbe29 Add note on developer-mode-setup script
Former-commit-id: e4b57b5815
2023-04-17 10:06:25 -06:00
Zach 09bca48498 Merge pull request #337 from cerc-io/zramsay-patch-1
docs: add missing step
Former-commit-id: 24174807c8
2023-04-17 11:28:29 -04:00
Zach ecb3b36387 Update README.md
Former-commit-id: dd59579b87
2023-04-17 11:25:21 -04:00
David Boreham 5a50e46718 Support for complete laconic stack with console and test registration record (#335)
* Configure cli with necessary gas and fees args and address

* Update version

Former-commit-id: 0068a994f6
2023-04-16 20:14:15 -06:00
David Boreham 99af105b9c quiet npm version warning (#331)
Former-commit-id: 2eb93d0933
2023-04-14 21:09:22 -06:00
David Boreham bfa86cbc29 Add doc for setup-repositories
Former-commit-id: d464c1c547
2023-04-14 17:51:22 -06:00
David Boreham 332085f80b Add force rebuild option (#329)
Former-commit-id: 1443c6c6d2
2023-04-14 14:19:27 -06:00
Zach 7406189596 Merge pull request #326 from cerc-io/typos
typos

Former-commit-id: 4e3eba1194
2023-04-13 12:32:33 -04:00
zramsay f0ce0fef1c typos
Former-commit-id: 16c9607a6c
2023-04-13 12:31:53 -04:00
zramsay a1a4837c89 lil fixes
Former-commit-id: bafdfe6d2a
2023-04-13 07:05:03 -04:00
zramsay 18f3c2cc4f update kubo stack README to enable CORS for running in the cloud
Former-commit-id: cfa32a3515
2023-04-13 06:55:47 -04:00
zramsay 391759e929 expose ports
Former-commit-id: e72ea19c5c
2023-04-13 05:27:04 -04:00
prathamesh0 3e80f5f238 Wait for transfer tx receipts when configuring Optimism (#323)
Former-commit-id: c99fc0941a
2023-04-13 12:43:41 +05:30
David Boreham 13499c6e4b Add MuKnSys npm scope (#322)
Former-commit-id: 6b27731a81
2023-04-12 19:46:50 -06:00
David Boreham 6e343bec5a Build with hosting config file (#321)
Former-commit-id: bb9c0706c3
2023-04-12 19:39:37 -06:00
zramsay 0f7c23951d run kubo as a stack
Former-commit-id: eae124fdf1
2023-04-12 17:36:47 -04:00
prathamesh0 8b11070870 Configuration fixes for mobymask-v2 stack for multiple deployments (#318)
* Fix contract deployment script in fixturenet-optimism stack

* Configure relay node's announce domain from env

* Configure relay peers list for the relay node from env

* Create and use peer ids from a mounted volume

* Fix command to create watcher config

* Fix mobymask-app deployment script

Former-commit-id: 882f0b16aa
2023-04-12 18:17:13 +05:30
David Boreham 5c6fe825fc Add comment to spec doc
Former-commit-id: 249893f5d9
2023-04-12 06:32:40 -06:00
Thomas E Lackey 80c16d2ced Update for latest act_runner. (#316)
Former-commit-id: c7c3cbde8e
2023-04-11 15:05:08 -05:00
David Boreham 6e33bd47e2 Fix syntax errors (#314)
Former-commit-id: 7c6c46febb
2023-04-11 07:20:38 -06:00
prathamesh0 c2a3ffe0dd Add an option to pass env file to deploy command (#304)
* Add an option to pass env file to deploy command

* Use env variable mapping in fixturenet-optimism stack

* Use default values from checked in env files

* Use env variable mapping in mobymask-v2 stack

* Update instructions

* Add extra hosts in app compose files and update instructions

* Add CERC prefix to env variables in fixturenet-optimism stack

* Add CERC prefix to env variables in mobymask-v2 stack

Former-commit-id: 6b62247ef7
2023-04-11 16:21:03 +05:30
David Boreham ffc24c0be8 Edit readme to trigger CI
Former-commit-id: 9a4b5810af
2023-04-10 14:39:28 -06:00
David Boreham 7021f8ed19 Add call to build tag script
Former-commit-id: 440b146e80
2023-04-10 12:17:54 -06:00
David Boreham 2e747b17be Publish test workflow (#308)
Former-commit-id: a16b1cd073
2023-04-10 11:50:09 -06:00
David Boreham 607b85f447 Fix weird whitespace
Former-commit-id: 23d5d563e1
2023-04-10 11:26:53 -06:00
David Boreham ce6ef81fe5 Publish drafts on the test branch
Former-commit-id: 49f14f3191
2023-04-10 11:20:15 -06:00
David Boreham 7bcad7b936 Fix publish workflow
Former-commit-id: e5cf52f188
2023-04-10 09:18:39 -06:00
David Boreham d72dcb6c74 Add test code
Former-commit-id: c934ffe18e
2023-04-10 07:48:22 -06:00
David Boreham 1559330fd7 Add test code
Former-commit-id: a225d5034b
2023-04-10 07:44:14 -06:00
David Boreham 75958adf82 Go back to all branches
Former-commit-id: 862cc4d448
2023-04-10 07:37:48 -06:00
David Boreham b3732bc7a6 Publish only on push to test branch
Former-commit-id: 5201138d6d
2023-04-10 07:27:31 -06:00
David Boreham 317608140f Trigger publish on PR and merge to main (#306)
Former-commit-id: c359188b3d
2023-04-10 06:59:08 -06:00
David Boreham 21d7a1ba61 Add v prefix
Former-commit-id: 1389559bc6
2023-04-10 06:55:25 -06:00
David Boreham 17d5e37368 Remove comment
Former-commit-id: a26ec970bc
2023-04-10 06:54:30 -06:00
David Boreham f90000d9cc New build version scheme (#305)
* Use separate build tag file

* Implement new versioning scheme

* Update workflow file

Former-commit-id: 80bbbafeb6
2023-04-10 06:43:23 -06:00
David Boreham bf85c61711 Update for new tagging scheme
Former-commit-id: dbb959f648
2023-04-10 06:24:00 -06:00
David Boreham 59afaf66ae Make version manually updated
Former-commit-id: f92a2fb3fc
2023-04-10 06:17:12 -06:00
Nabarun Gogoi 18279fcef3 Add env variable to enable/disable sending txs to L2 from watcher peer (#293)
* Add flag to enable/disable watcher peer L2 txs

* Update watcher-ts version in readme

Former-commit-id: d3715d1952
2023-04-10 10:42:56 +05:30
David Boreham a8815924c6 Fix typo
Former-commit-id: 93a2e1c864
2023-04-09 22:44:15 -06:00
David Boreham fe77955845 Enable publishing with fixed version
Former-commit-id: 751ed2e157
2023-04-09 22:41:01 -06:00
David Boreham 8fb9e6d39b Try to enable request logging
Former-commit-id: a930ad0d1c
2023-04-07 07:36:53 -06:00
David Boreham 0d6bc81233 Try to enable request logging
Former-commit-id: 628954f060
2023-04-07 07:27:39 -06:00
David Boreham 35d7649689 Try to enable request logging
Former-commit-id: c90790d0d7
2023-04-07 07:19:12 -06:00
David Boreham 504ed542c3 Try to upload
Former-commit-id: 749b302f5f
2023-04-06 19:47:15 -06:00
David Boreham 6dad03c031 Get the tag right
Former-commit-id: eb16a97def
2023-04-06 19:35:17 -06:00
David Boreham 124a838ab5 Try forcing a tag
Former-commit-id: a559c2e90f
2023-04-06 19:33:27 -06:00
David Boreham b36f9bc974 Fix errors
Former-commit-id: 8fe0b2dc29
2023-04-06 19:07:47 -06:00
David Boreham 4ae959eb4f Add prototype publish workflow
Former-commit-id: 681112cf07
2023-04-06 19:04:00 -06:00
Zach af82c8c431 Merge pull request #297 from cerc-io/zramsay-patch-1
Update README.md

Former-commit-id: fa37b186ca
2023-04-06 19:40:25 -04:00
Zach d2c85e6f1d Update README.md
Former-commit-id: 1e130b2a29
2023-04-06 19:39:12 -04:00
Zach 53516f3109 Merge pull request #278 from cerc-io/update-docs
update registry install instructions

Former-commit-id: 055fa14cfb
2023-04-06 19:32:29 -04:00
zramsay 5a27a94a7d more little updates
Former-commit-id: 579fa6ff66
2023-04-06 19:31:57 -04:00
David Boreham 4fa9d675ba Ubuntu install script (#289)
* Copy of Zach's install script

* Few small fixes after testing on DO droplet

* Update scripts

* Rename script

* Add sudo notice

* Fix typo

* Fix another typo

* Update docker instructions

Former-commit-id: 9e2b140e10
2023-04-06 13:50:50 -06:00
David Boreham dcc0f91140 Change step name
Former-commit-id: 4f137915bb
2023-04-06 13:36:58 -06:00
David Boreham e8139bd372 Use correct path
Former-commit-id: 2e2927f788
2023-04-06 13:35:16 -06:00
David Boreham b915db426d Add the smoke test
Former-commit-id: b850322f57
2023-04-06 13:32:56 -06:00
David Boreham 69d215e8ec Merge pull request #294 from cerc-io/dboreham/enable-tests-ci
Enable tests ci

Former-commit-id: 47275a50d4
2023-04-06 13:26:27 -06:00
David Boreham 060ac520e4 Bump CI
Former-commit-id: e7a4de5940
2023-04-06 13:24:17 -06:00
David Boreham 484d80be5f Add build
Former-commit-id: 90210e439f
2023-04-06 13:20:22 -06:00
David Boreham 792887a121 Try adding job name
Former-commit-id: 7dd07beec6
2023-04-06 07:28:13 -06:00
David Boreham 129d581141 Merge pull request #290 from cerc-io/dboreham/enable-ci
Try to enable CI

Former-commit-id: f7def488d9
2023-04-06 07:21:07 -06:00
David Boreham 44421049ae Try to enable CI
Former-commit-id: 0432a4bf29
2023-04-06 07:19:08 -06:00
Nabarun Gogoi 86f13e9c6b Separate out watcher and web-apps in mobymask-v2 stack (#287)
* Separate out watcher and web-apps in mobymask stack

* Take L2 RPC endpoint from the env file

* Changes to run watcher and mobymask web-app separately

* Support running watcher without contract deployment and L2 txs

* Remove duplicate mobymask params env

* Add code comments

* Add instructions for running web-apps separately

* Self review fixes

* Fix timeout for mobymask-app on watcher server

---------

Co-authored-by: prathamesh0 <prathamesh.musale0@gmail.com>
Former-commit-id: 6f781ae303
2023-04-06 15:17:00 +05:30
prathamesh0 72737bfa29 Handle restarts in mobymask-v2 stack (#286)
* Verify existing contract deployment

* Update mobymask-v2 demo instructions

Former-commit-id: 59fe9aae59
2023-04-05 17:52:12 +05:30
Nabarun Gogoi 63fbaa7ae3 Add ability to run mobymask-v2 stack with external optimism endpoint (#279)
* Set optimism geth endpoint from env file

* Set L1 account private keys from env

* Only deploy contract and generate invite in mobymask container

* Add readme for running mobymask v2 stack independently

* Modify mobymask container to stop running server and update readmes

* Check deployer account balance before deploying contract

* Fix for checking account balance before deploying

* Update readme description

* Update MobyMask repo tag in readme

Former-commit-id: 94e38ceaba
2023-04-05 17:26:38 +05:30
prathamesh0 464ef89a01 Handle restarts for services in `fixturenet-optimism` stack (#282)
* Check existing L1 contracts deployment

* Rename volume used for generated L2 config

* Check for existing L2 geth data directory

* Cross check existing L2 config against L1 deployment config

* Verify sequencer key in existing L2 geth data directory

* Add instructions to troubleshoot corrupt L2 geth dir

* Separate out instructions to run L2 with external L1

* Update docs

Former-commit-id: 9ffa9bb5a9
2023-04-05 10:25:50 +05:30
David Boreham 0c5f252465 Merge pull request #285 from cerc-io/dboreham/fix-gerbil-builder
Fail on error installing package

Former-commit-id: 18bb8194fe
2023-04-04 20:26:52 -06:00
David Boreham b4d9a3a831 Fail on error installing package
Former-commit-id: 11375fed0c
2023-04-04 20:26:19 -06:00
David Boreham 4da19b652e Update version
Former-commit-id: 9e4240df07
2023-04-04 11:29:16 -06:00
prathamesh0 be08ee81ea Add ability to run Optimism fixturenet with external L1 endpoint (#273)
* Remove unnecessary todos

* Set option to log commands in shell scripts

* Replace fixturenet-eth dependency with wait on endpoint

* Skip lighthouse node dependency check

* Update all services in the stack

* Use debug flag to enable shell commands logging

* Add bash in op-batcher container

* Update mobymask-v2 instructions

* Update fixturenet-optimism instructions

* Add descriptions for services

* Move ts files to container-build

* Take L1 RPC endpoint from the env file

* Add dev mode restriction for editing env file

Former-commit-id: 2515878eeb
2023-04-04 14:53:28 +05:30
zramsay 22bb1a0bfa key missing line
Former-commit-id: 358c7ea168
2023-04-03 16:41:07 -04:00
Thomas E Lackey 287468b7c0 Update run script to support COPY and WebSockets. (#275)
Former-commit-id: 4da69ebf4c
2023-04-03 14:59:31 -05:00
Nabarun Gogoi f23c22d5b9 Replace laconicd with optimism in mobymask-v2 stack (#272)
* Remove laconicd to use optimism endpoint

* Use fixturenet-optimism stack for mobymask-v2-watcher

* Fix setting L1 account private key in mobymask-v2 stack

Former-commit-id: b266ac78b4
2023-04-03 18:13:29 +05:30
prathamesh0 1881554ae0 Add Optimism Fixturenet stack (#266)
* Initial version

* Update readme

* Build op-geth container

* Add optimism go code containers

* Add optimism contracts container

* Update optimism contracts container build

* Add fixturenet-optimism-contracts service to deploy L1 contracts

* Add fixturenet-optimism op-node and op-geth

* Avoid reading addresses from a file when sending balances

* Fixes for running op-geth container

* Fix image name and command in optimism-contracts service

* Add a healthcheck to lighthouse bootnode to avoid failing eth txs

* Avoid using hardhat ethers to send balances from an account

* Update script to send balance to L1 bridge proxy contract

* Implement op-node container

* Wait for a finalized L1 block to exist

* Fix for running op-batcher

* Add a todo for restart support

* Integrate optimism-contracts service and update instructions

* Update clean-up to remove docker volumes

* Update volume access permissions

* Add a todo to replace foundry usage with web3 js

* Add known issues

* Fix README

* Fix indentation

* Update known issues

---------

Co-authored-by: David Boreham <david@bozemanpas.com>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-authored-by: nabarun <nabarun@deepstacksoft.com>
Former-commit-id: fc522140ba
2023-04-03 12:33:47 +05:30
David Boreham 21d6e33dab Merge pull request #271 from cerc-io/dboreham/add-foundry-to-erc20-stack
Add missing repo and container

Former-commit-id: 432fc4755b
2023-04-02 08:26:07 -06:00
David Boreham 009ce95914 Add missing repo and container
Former-commit-id: 045117c6fb
2023-04-02 08:25:26 -06:00
David Boreham f7bb8f2735 Merge pull request #236 from cerc-io/dboreham/add-console
Add laconic console in fixturenet-laconic-loaded

Former-commit-id: f2c6ebbc23
2023-04-01 13:32:29 -06:00
David Boreham bddf72ee46 Temporarily punch the port through to the host to make things work
Former-commit-id: f6cb041634
2023-04-01 10:06:39 -06:00
David Boreham 2b94ed12c2 Run template substitution on start
Former-commit-id: e072a6bc4b
2023-03-31 16:33:48 -06:00
David Boreham 31f9f0e864 Implement deploy time config
Former-commit-id: 409f61d68d
2023-03-31 16:29:59 -06:00
David Boreham 1342be5723 Add a temporary version of config to the hosting container
Former-commit-id: 3372cac35e
2023-03-31 08:00:24 -06:00
David Boreham d25de90df2 Add console host container to stack
Former-commit-id: c45dd545dd
2023-03-31 07:59:43 -06:00
David Boreham 1fdfb9f568 Add console container pod
Former-commit-id: 3fb968c9cf
2023-03-30 00:27:19 -06:00
David Boreham 9b22685484 Add web server to host container
Former-commit-id: d01fb777d4
2023-03-30 00:15:36 -06:00
David Boreham 8ee702f6ff Merge branch 'main' into dboreham/add-console
Former-commit-id: 8bfc97bfbe
2023-03-29 22:50:22 -06:00
David Boreham fddd728037 Merge pull request #265 from cerc-io/dboreham/fix-yarn-lock-processing
Handle semver spec in package.json local dependencies

Former-commit-id: 7f155023d2
2023-03-29 22:47:48 -06:00
David Boreham fbe76f4713 Handle semver spec in package.json local dependencies
Former-commit-id: c5a532c02b
2023-03-29 22:46:58 -06:00
prathamesh0 79ad4fb15a Upgrade dependencies and start inline watcher peer in mobymask-v2 stack (#256)
* Upgrade dependencies in mobymask-v2 stack

* Run inline watcher peer in mobymask v2 stack

---------

Co-authored-by: nabarun <nabarun@deepstacksoft.com>
Former-commit-id: 71aaa41069
2023-03-30 09:44:15 +05:30
David Boreham 46cc0e8b94 Merge branch 'main' into dboreham/add-console
Former-commit-id: c5cf8dda79
2023-03-29 21:25:57 -06:00
David Boreham 75ca36e5f7 Update version
Former-commit-id: e23ee1b176
2023-03-28 20:00:16 -06:00
David Boreham 398c834b0b Merge pull request #255 from cerc-io/dboreham/fix-js-builder-gid-problem
Do not switch gid/uid for root and system users

Former-commit-id: df23476f0b
2023-03-28 19:54:37 -06:00
David Boreham 51ad55b1d1 Do not switch gid/uid for root and system users
Former-commit-id: b6fb3b396b
2023-03-28 19:52:42 -06:00
David Boreham cb7b6f05f2 Update version
Former-commit-id: 1266ab88be
2023-03-28 11:49:58 -06:00
David Boreham c358dd554f Merge pull request #253 from cerc-io/dboreham/wait-for-export-cluster-config
Detect transient errors exporting variables and re-try

Former-commit-id: a4a9607ea8
2023-03-28 11:44:54 -06:00
David Boreham 75ca0d4138 Detect transient errors exporting variables and re-try
Former-commit-id: 15c0a92f30
2023-03-28 11:44:02 -06:00
David Boreham 29f0302fb0 Merge pull request #252 from cerc-io/dboreham/fix-go-ethereum-foundry-startup
Fix health checks in erc20 containers

Former-commit-id: bb63035fcc
2023-03-28 10:47:07 -06:00
David Boreham b3ae2159ee Fix health checks in erc20 containers
Former-commit-id: 35ca979068
2023-03-28 10:46:26 -06:00
David Boreham bd61d823d7 Merge pull request #251 from cerc-io/dboreham/fix-ethereum-foundry
Update go-ethereum-foundry container for Debian base image

Former-commit-id: b1f4f2e4e3
2023-03-28 09:53:22 -06:00
David Boreham ce04ca2be5 Update go-ethereum-foundry container for Debian base image
Former-commit-id: 82acc99e2d
2023-03-28 09:52:34 -06:00
David Boreham 126d671bb0 Update version
Former-commit-id: 3caf0da956
2023-03-28 07:17:06 -06:00
David Boreham 616f85ce6b Update version
Former-commit-id: 7e1137f811
2023-03-27 08:00:19 -06:00
Nabarun Gogoi bed5e262cc Add missing notes and steps in mobymask-v2 stack readme (#241)
* Add missing notes and steps in readme

* Mention clearing of browser cache before opening invite link

Former-commit-id: 1aeb44d5ad
2023-03-27 11:17:12 +05:30
David Boreham 97da39c68b Update version
Former-commit-id: 3706dfd7db
2023-03-26 21:35:01 -06:00
David Boreham e7c5d5157e Fix missing container name change
Former-commit-id: e70dca7687
2023-03-25 18:57:36 -06:00
David Boreham 9adf48f6c8 Merge pull request #240 from cerc-io/dboreham/fix-act-container-names
Use dashes not underscore to match docker-compose file

Former-commit-id: 685ebdfb15
2023-03-25 18:27:38 -06:00
David Boreham 16ff576413 Use dashes not underscore to match docker-compose file in hosting repo and convention
Former-commit-id: 022afdc352
2023-03-25 18:26:21 -06:00
David Boreham f4e9837ed2 Merge pull request #238 from cerc-io/telackey/act_runner
Add Gitea action support via act_runner.

Former-commit-id: 7af7f654f6
2023-03-25 11:35:07 -06:00
David Boreham 0eb890edaf Merge pull request #230 from cerc-io/dboreham/update-node-version
Update to Node 18

Former-commit-id: ece01730fd
2023-03-25 11:34:07 -06:00
Thomas E Lackey 788b214116 Add Gitea action support via act_runner.
Former-commit-id: 74077d7704
2023-03-24 22:24:40 -05:00
David Boreham 94c69d6596 Merge main
Former-commit-id: 4a04d20bb2
2023-03-24 19:43:46 -06:00
David Boreham aa7697dd3e Add lirewine packages
Former-commit-id: ca82d39c0c
2023-03-25 12:07:44 -06:00
David Boreham 741b225706 Console host container builds
Former-commit-id: 4ad2729ae8
2023-03-24 19:32:41 -06:00
David Boreham 99f41a3f9f Merge branch 'dboreham/add-console' of github.com:cerc-io/stack-orchestrator into dboreham/add-console
Former-commit-id: 2ca48fd2a2
2023-03-24 18:38:50 -06:00
David Boreham 4b0f46bb1b Merge branch 'main' into dboreham/add-console
Former-commit-id: 62d2d37417
2023-03-24 18:36:36 -06:00
David Boreham 8e056c1b0c Merge pull request #237 from cerc-io/dboreham/use-local-foundry
Use our locally built foundry container

Former-commit-id: b2cef16462
2023-03-24 18:27:48 -06:00
David Boreham 5767b93e6a Use our locally built foundry container
Former-commit-id: da24a4edf6
2023-03-24 18:23:35 -06:00
David Boreham e608dca175 Add sdk repo and npm
Former-commit-id: f67e367ead
2023-03-24 17:31:41 -06:00
David Boreham 38feddb266 Update version file
Former-commit-id: 07de83e6d7
2023-03-24 10:26:48 -06:00
Zach 862b75de1e Merge pull request #235 from cerc-io/zramsay-patch-1
doc jq requirement

Former-commit-id: 3d03b1099a
2023-03-24 08:57:38 -04:00
Zach af44d005c3 doc jq requirement
Former-commit-id: a5dd62e457
2023-03-24 08:56:01 -04:00
Nabarun Gogoi 1951a5d398 Add web-apps and laconicd in MobyMask v2 watcher stack (#226)
* Rename .env file

* Add web app services to docker compose file

* Add laconicd to deploy contract and send txs

* Add demo with steps for running mobymask app with L2 chain

* Add fix for yarn install on M1 platform in react-peer

* Update multiaddrs to use websockets

* Add notes in readmes

---------

Co-authored-by: prathamesh0 <prathamesh.musale0@gmail.com>
Former-commit-id: cacd306b22
2023-03-24 15:53:54 +04:00
David Boreham 2d64f49dc5 Fix npm config
Former-commit-id: 5b8c91d19d
2023-03-24 02:59:03 -06:00
David Boreham 7c69d9477b Merge branch 'main' into dboreham/add-console
Former-commit-id: af1b5b5cfc
2023-03-24 02:56:03 -06:00
David Boreham 8a4f189286 Merge pull request #234 from cerc-io/dboreham/lirewine-builds
Add support for lirewine npm package build and consumption

Former-commit-id: ebe1dfa4bf
2023-03-24 02:54:21 -06:00
David Boreham aa01feb98a Fix up script and npm list
Former-commit-id: e6b91acdea
2023-03-24 02:52:37 -06:00
David Boreham c4e3aa54f1 Remove trailing whitespace
Former-commit-id: 2e7ef1e52e
2023-03-24 02:31:19 -06:00
David Boreham 364a71d694 Support the @lirewine npm scope
Former-commit-id: 9e322203e9
2023-03-24 02:30:04 -06:00
David Boreham d96a6def14 Add lirewine repos and npms
Former-commit-id: 870a987760
2023-03-24 02:08:56 -06:00
David Boreham db925bfe35 Merge branch 'main' into dboreham/add-console
Former-commit-id: 44ba54f4a1
2023-03-23 18:47:54 -06:00
David Boreham 1a75a270f5 Merge pull request #232 from cerc-io/dboreham/foundry-debian-container
Foundry debian-based container

Former-commit-id: 76029d24de
2023-03-23 17:46:42 -06:00
David Boreham 23fab04ec8 Fix filename
Former-commit-id: 06288fd9e1
2023-03-23 16:09:27 -06:00
David Boreham 4420f45924 Build foundry with Debian base container
Former-commit-id: 21e9f96771
2023-03-23 16:03:35 -06:00
David Boreham 9e138d8a6a Initial commit
Former-commit-id: c9e931c212
2023-03-23 16:02:55 -06:00
David Boreham 653f4c7d0a Update to Node 18
Former-commit-id: 56ff7d210b
2023-03-23 14:04:31 -06:00
David Boreham eb45899433 Merge branch 'main' into dboreham/add-console
Former-commit-id: c307b0a1af
2023-03-22 21:37:23 -06:00
David Boreham dac22b19b7 Update version
Former-commit-id: d5fd880bba
2023-03-22 21:35:48 -06:00
David Boreham be761d10b1 Initial commit
Former-commit-id: f55aeee583
2023-03-22 17:05:04 -06:00
David Boreham 08ed69e4cd Merge pull request #227 from cerc-io/dboreham/fix-fixturenet-laconicd
Update fixturenet create script

Former-commit-id: 5fca9bdac7
2023-03-22 15:01:59 -06:00
David Boreham 84bad13934 Update fixturenet create script
Former-commit-id: 01c6999027
2023-03-22 14:52:39 -06:00
David Boreham c537d787c2 Update version
Former-commit-id: 91ba0ae011
2023-03-21 09:55:40 -06:00
Zach 538bccd19a Merge pull request #225 from cerc-io/dboreham/update-readme
Update README to reflect user experience and to use --stack

Former-commit-id: 8fb93758ee
2023-03-21 09:25:53 -04:00
David Boreham d5f5ecceda Update README to reflect user experience and to use --stack directive in examples
Former-commit-id: 2c151cb8ce
2023-03-20 19:36:05 -06:00
David Boreham 39741b2e8f Add Gitea note
Former-commit-id: 74be51b892
2023-03-20 10:14:58 -06:00
David Boreham 2ce04afd1e Merge pull request #224 from cerc-io/dboreham/update-registry-doc
Update registry deploy instructions

Former-commit-id: 33bbbccef7
2023-03-20 10:09:49 -06:00
David Boreham 48a761875a Update registry deploy instructions
Former-commit-id: ea6f28a3de
2023-03-20 10:08:59 -06:00
David Boreham 6ee7c3bbba Update version
Former-commit-id: 4c92391faf
2023-03-20 08:21:41 -06:00
Nabarun Gogoi 4d3042bfcc Add a stack for mobymask-v2-watcher to run peer tests (#222)
* Add mobymask-v2-watcher stack with peer tests

* Rename stack and container

* Avoid building react-peer container

* Improve step for getting container ID

Former-commit-id: 7831078872
2023-03-20 18:25:39 +05:30
David Boreham 49092305e4 Remove now unnecessary commands
Former-commit-id: a354158680
2023-03-19 17:30:22 -06:00
David Boreham 5c3384c7ae Merge pull request #216 from cerc-io/dboreham/fixturenet-eth-deploy-contracts
deploy contracts and use foundry with fixturenet-eth

Former-commit-id: f8155cb29a
2023-03-10 16:11:04 -07:00
David Boreham b47d303ba4 Merge branch 'main' into dboreham/fixturenet-eth-deploy-contracts
Former-commit-id: c8ba07739e
2023-03-09 17:10:16 -07:00
David Boreham abdf561704 Merge pull request #214 from cerc-io/dboreham/use-local-foundry
Build foundry locally

Former-commit-id: 3aa9e427c2
2023-03-09 17:06:49 -07:00
David Boreham 8d3bca602a Build foundry locally
Former-commit-id: 6735a0d378
2023-03-09 17:06:07 -07:00
David Boreham 75e5c1d7f1 Drive by bug fix
Former-commit-id: 21104f6b18
2023-03-09 14:26:15 -07:00
David Boreham 5b79195f86 Merge branch 'main' into dboreham/fixturenet-eth-deploy-contracts
Former-commit-id: 7191dd8518
2023-03-09 08:37:45 -07:00
David Boreham 139626ef8e Update version
Former-commit-id: 01c1230cd1
2023-03-09 08:37:20 -07:00
David Boreham 4356e09440 Merge pull request #213 from cerc-io/dboreham/fix-non-stack-deploy
Fix for #212 - exception on non-stack deploy

Former-commit-id: ee778356c0
2023-03-09 08:31:01 -07:00
David Boreham 030ad25b78 Fix for #212 - exception on non-stack deploy
Former-commit-id: ecc4302c34
2023-03-09 08:30:18 -07:00
David Boreham f096c014ce Add omitted file
Former-commit-id: a3440ad05b
2023-03-09 06:43:51 -07:00
David Boreham 49b9983b3a Update doc
Former-commit-id: c07f6c51e2
2023-03-08 22:49:00 -07:00
David Boreham 4fd9e71c84 Configure foundry container
Former-commit-id: b708bb2122
2023-03-08 22:22:43 -07:00
David Boreham c12418222a Merge pull request #211 from cerc-io/dboreham/container-dependent-config
Add very basic cluster config mechanism

Former-commit-id: d9a7ea19a3
2023-03-08 17:04:06 -07:00
David Boreham f57b8b4ba0 Add very basic cluster config mechanism
Former-commit-id: 9cae493458
2023-03-08 17:03:14 -07:00
David Boreham e29ae7ac93 Merge pull request #210 from cerc-io/dboreham/laconicd-stack-doc
Update fixturenet-laconicd docs

Former-commit-id: 75376d7baf
2023-03-07 16:05:21 -07:00
David Boreham edd7d549d9 Update fixturenet-laconicd docs
Former-commit-id: 89b33c0e38
2023-03-07 16:04:43 -07:00
David Boreham 4092a127ef Merge pull request #208 from cerc-io/dboreham/implement-logs-command
Implement logs command

Former-commit-id: ba5cc68794
2023-03-07 11:58:41 -07:00
David Boreham fae8fbed5e Implement logs command
Former-commit-id: b488d82b8f
2023-03-07 11:57:24 -07:00
David Boreham a900945698 Update version
Former-commit-id: f97b1e4720
2023-03-07 10:14:20 -07:00
David Boreham 2937a07cd8 Merge pull request #207 from cerc-io/dboreham/run-cli-in-cluster
Run laconic registry cli in cluster

Former-commit-id: d80fb5dd16
2023-03-07 10:10:26 -07:00
David Boreham 6ae3c252ea Very basic key export/import implementation
Former-commit-id: 8a8fef6845
2023-03-07 10:08:04 -07:00
David Boreham 178de8f496 Add simple export/import scheme
Former-commit-id: 277be07dcd
2023-03-07 09:07:15 -07:00
David Boreham 05cdd2ed01 Merge pull request #205 from cerc-io/dboreham/eth-tooling
Add foundry cli to fixturenet-eth

Former-commit-id: 99fcdf9d40
2023-03-06 15:20:07 -07:00
David Boreham e801c28f60 Add foundry cli to fixturenet-eth
Former-commit-id: b43886a1a9
2023-03-06 15:19:19 -07:00
David Boreham eb70f2526e Initial commit
Former-commit-id: 49c524e8ed
2023-03-04 18:45:57 -07:00
David Boreham 199a4036a8 Merge pull request #203 from cerc-io/dboreham/pass-uid-to-compose-up
Pass environment variables for both exec and up

Former-commit-id: f7285be425
2023-03-02 10:56:53 -07:00
David Boreham f0b711912a Pass environment variables for both exec and up
Former-commit-id: be6a473772
2023-03-02 10:53:48 -07:00
David Boreham d3d3f92953 Merge pull request #202 from cerc-io/dboreham/remove-debug-output
Remove debug output

Former-commit-id: c0b4cefdce
2023-03-02 10:35:26 -07:00
David Boreham c3101bc25a Remove debug output
Former-commit-id: f99dbfbdf5
2023-03-02 10:32:18 -07:00
David Boreham 6c49520e69 Merge pull request #197 from cerc-io/dboreham/uid-gid-to-deploy
Pass uid and gid to compose

Former-commit-id: 900917b548
2023-02-28 10:23:34 -07:00
David Boreham 70d34e6751 Pass uid and gid to compose
Former-commit-id: cea4e5a51f
2023-02-28 10:22:27 -07:00
David Boreham c6de6d6067 Merge pull request #196 from cerc-io/dboreham/fix-project-name
Make cluster/docker-compose project name unique

Former-commit-id: 4353418fa6
2023-02-28 08:47:52 -07:00
David Boreham 69b6b9a873 Make cluster/docker-compose project name unique
Former-commit-id: a1fdeac3b7
2023-02-28 08:47:02 -07:00
David Boreham 430b884d72 Update version
Former-commit-id: 2f29e283de
2023-02-28 08:01:36 -07:00
David Boreham 5d456f5600 Merge pull request #193 from cerc-io/dboreham/disable-docker-buildkit
Revert move to BuildKit in new Docker releases

Former-commit-id: 4fdb442263
2023-02-24 23:34:48 -07:00
David Boreham ca40ffb012 Revert move to BuildKit in new Docker releases
Former-commit-id: ced6925e02
2023-02-24 23:31:17 -07:00
David Boreham 6a73d3adee Merge pull request #191 from cerc-io/dboreham/builder-js-uid
Support docker containers with non-root users and host user uid not equal to 1000

Former-commit-id: 10548beedf
2023-02-24 23:06:13 -07:00
David Boreham 9480b4082e usermod does not change group ownership of home dir
Former-commit-id: 17355c9e42
2023-02-24 22:44:50 -07:00
David Boreham 467ff235ba Propagate build env vars
Former-commit-id: 5022e355ed
2023-02-24 22:26:02 -07:00
David Boreham 25a755982e Implement new approach: build a uid-specific container
Former-commit-id: 6704cd7527
2023-02-24 22:14:28 -07:00
David Boreham e3e96fa75e Work around docker uid/gid insanity
Former-commit-id: b84a28592d
2023-02-23 20:50:20 -07:00
David Boreham 42696f8165 Add build stack doc
Former-commit-id: 187c06ef5a
2023-02-23 07:22:46 -07:00
David Boreham c26f8552e1 Merge pull request #187 from cerc-io/dboreham/document-build-stack
Add some doc for build stack

Former-commit-id: c52f9e655d
2023-02-21 11:26:43 -07:00
David Boreham 8352893d92 Add some doc
Former-commit-id: 1f57ec5326
2023-02-21 11:25:49 -07:00
Rick Manelius a92cabcb39 Merge pull request #186 from cerc-io/erc721-stack-syntax
Stack syntax for ERC721.

Former-commit-id: 9ccf3ca1af
2023-02-21 09:38:09 -06:00
Rick Manelius, PhD f841f8a2de Stack syntax for ERC721.
Former-commit-id: 463b11bb23
2023-02-21 09:30:07 -06:00
David Boreham 7f652708bb Merge pull request #185 from cerc-io/dboreham/immutable-js-build
Copy build tree

Former-commit-id: b1e618142d
2023-02-21 06:59:38 -07:00
David Boreham c769fce491 Copy build tree
Former-commit-id: 3b55d012be
2023-02-21 06:56:33 -07:00
David Boreham 02335795fb Merge pull request #184 from cerc-io/dboreham/kubo
Add a kubo pod

Former-commit-id: 25ae0cac7d
2023-02-21 06:17:41 -07:00
David Boreham c717fd8a59 Add ipfs pod
Former-commit-id: 6dddceec27
2023-02-21 06:16:59 -07:00
David Boreham 0de2db7e90 Add kubo pod
Former-commit-id: 9c00aba8f3
2023-02-21 05:59:28 -07:00
David Boreham 92f325a09b Update version
Former-commit-id: c4fa7fee30
2023-02-21 05:40:57 -07:00
David Boreham 964edbec6f Merge pull request #182 from cerc-io/dboreham/check-for-js-builder
Check for builder container in npm build

Former-commit-id: 6c1bedc67e
2023-02-20 16:16:43 -07:00
David Boreham 6f2a44ba2f Check for builder container in npm build
Former-commit-id: 156758bfa7
2023-02-20 16:16:15 -07:00
David Boreham 9053d2d782 Merge pull request #180 from cerc-io/dboreham/npm-build-with-package-stack
npm build with package-registry stack

Former-commit-id: 9f07a0ed4b
2023-02-20 12:49:20 -07:00
David Boreham 628fed4ef7 Working feature
Former-commit-id: f14a4a33d7
2023-02-20 12:46:56 -07:00
David Boreham c737ec7ed6 Wire up to build-npms
Former-commit-id: 7e6268c39d
2023-02-20 06:43:06 -07:00
David Boreham d927c92c0a Call from base stack class
Former-commit-id: f1cbce1d00
2023-02-20 06:23:21 -07:00
David Boreham 142be179f4 Initial implementation
Former-commit-id: 68293cbaa3
2023-02-20 06:09:35 -07:00
David Boreham 547ca561c0 Update python on whales
Former-commit-id: 7d51e4b9aa
2023-02-19 17:46:47 -07:00
David Boreham 863e19211e Remove debug code
Former-commit-id: 5ceed34160
2023-02-17 15:38:24 -07:00
David Boreham e20f9993d2 Merge pull request #178 from cerc-io/dboreham/package-registry-stack
Support for the package registry stack

Former-commit-id: e5197e4918
2023-02-17 15:36:28 -07:00
David Boreham 9dab9b815c Add newline
Former-commit-id: 63c93acb83
2023-02-17 15:36:09 -07:00
David Boreham 83115bcb9d Fix comment
Former-commit-id: 88d81f7df6
2023-02-17 15:35:31 -07:00
David Boreham 46c22f4e4f Add pre/post script support
Former-commit-id: bb39d90522
2023-02-17 15:31:43 -07:00
David Boreham 912483df58 Basic functionality
Former-commit-id: a1893aa153
2023-02-17 14:15:35 -07:00
David Boreham ff69670db6 Initial commit
Former-commit-id: 60c1da725e
2023-02-17 13:34:51 -07:00
David Boreham 871cd90456 Missed one update
Former-commit-id: 2bf50383dd
2023-02-17 11:53:58 -07:00
David Boreham 639ab8cbc3 Update to use stack syntax
Former-commit-id: 6569a2a2b6
2023-02-17 11:53:05 -07:00
Thomas E Lackey bc5eff71b5 Use latest keycloak plugin. (#173)
Former-commit-id: 8645ab0619
2023-02-01 16:02:49 -06:00
David Boreham 5ec774a300 Update version
Former-commit-id: b0520a042a
2023-01-30 22:29:38 +01:00
David Boreham 9a750f0d9e Update version
Former-commit-id: 10f2fbaa37
2023-01-30 22:28:33 +01:00
David Boreham 8a6ba6d01b Merge pull request #172 from cerc-io/dboreham/revert-foundry-image
Revert local foundry build

Former-commit-id: 3bd0c74e1f
2023-01-30 22:06:26 +01:00
David Boreham b4ddef7ff0 Revert local foundry build
Former-commit-id: e850923e1a
2023-01-30 14:05:48 -07:00
David Boreham a38ac5470d Merge pull request #171 from cerc-io/dboreham/fixturenet-loaded
Initial commit

Former-commit-id: 82c2bb78f2
2023-01-30 19:12:06 +01:00
David Boreham 77380aec94 Initial commit
Former-commit-id: f01bc27660
2023-01-30 16:53:57 +01:00
David Boreham ce7e5f2d82 Merge pull request #169 from cerc-io/dboreham/fix-gerbil-package-install
Install gerbil packages globally not locally in the project directory

Former-commit-id: db5775621d
2023-01-28 11:35:44 -07:00
David Boreham 4dda8e2a87 Install gerbil packages globally not locally in the project directory
Former-commit-id: 5928e40721
2023-01-28 18:56:43 +01:00
263 changed files with 7996 additions and 497 deletions


@@ -0,0 +1,27 @@
name: Fixturenet-Eth-Test
on:
  push:
    branches: 'ci-test'
jobs:
  test:
    name: "Run an Ethereum fixturenet test"
    runs-on: ubuntu-latest
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      - name: "Install Python"
        uses: cerc-io/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Generate build version file"
        run: ./scripts/create_build_tag_file.sh
      - name: "Build local shiv package"
        run: ./scripts/build_shiv_package.sh
      - name: "Run fixturenet-eth tests"
        run: ./tests/fixturenet-eth/run-test.sh
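
For reference, the job above is just a sequence of repository scripts. A rough local equivalent (an editor's sketch that assumes only the script and test paths named in the workflow itself) would be:

```bash
# Rough local equivalent of the Fixturenet-Eth-Test job above (editor's sketch,
# using only the paths that appear in the workflow).
pip install shiv                      # packaging tool used to build the laconic-so binary
./scripts/create_build_tag_file.sh    # generate the build version file
./scripts/build_shiv_package.sh       # build the local shiv package
./tests/fixturenet-eth/run-test.sh    # run the fixturenet-eth test suite
```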


@@ -0,0 +1,46 @@
name: Publish
on:
  push:
    branches:
      - main
      - publish-test
jobs:
  publish:
    name: "Build and publish"
    runs-on: ubuntu-latest
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      - name: "Get build info"
        id: build-info
        run: |
          build_tag=$(./scripts/create_build_tag_file.sh)
          echo "build-tag=v${build_tag}" >> $GITHUB_OUTPUT
      - name: "Install Python"
        uses: cerc-io/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Build local shiv package"
        id: build
        run: |
          ./scripts/build_shiv_package.sh
          result_code=$?
          echo "package-file=$(ls ./package/*)" >> $GITHUB_OUTPUT
          exit $result_code
      - name: "Stage artifact file"
        run: |
          cp ${{ steps.build.outputs.package-file }} ./laconic-so
      - name: "Create release"
        uses: cerc-io/action-gh-release@gitea-v1
        with:
          tag_name: ${{ steps.build-info.outputs.build-tag }}
          # On the publish test branch, mark our release as a draft
          # Hack using endsWith to workaround Gitea sometimes sending "publish-test" vs "refs/heads/publish-test"
          draft: ${{ endsWith('publish-test', github.ref ) }}
          files: ./laconic-so
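
The build and staging steps of this job can be approximated locally; only the final release upload depends on the Gitea release action. A hedged sketch of such a dry run, using only the scripts and paths visible in the workflow:

```bash
# Editor's sketch: local dry run of the publish job's build and staging steps,
# stopping short of the actual release upload.
build_tag=$(./scripts/create_build_tag_file.sh)   # same value the job tags the release with
./scripts/build_shiv_package.sh                   # produces a single package under ./package/
package_file=$(ls ./package/*)                    # mirrors the workflow's package-file output
cp "${package_file}" ./laconic-so                 # stage the artifact under its release name
echo "Release would be tagged v${build_tag} with ./laconic-so attached"
```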


@@ -0,0 +1,39 @@
name: Deploy Test
on:
  pull_request:
    branches: '*'
  push:
    branches:
      - main
      - ci-test
# Needed until we can incorporate docker startup into the executor container
env:
  DOCKER_HOST: unix:///var/run/dind.sock
jobs:
  test:
    name: "Run deploy test suite"
    runs-on: ubuntu-latest
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      - name: "Install Python"
        uses: cerc-io/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Generate build version file"
        run: ./scripts/create_build_tag_file.sh
      - name: "Build local shiv package"
        run: ./scripts/build_shiv_package.sh
      - name: Start dockerd # Also needed until we can incorporate into the executor
        run: |
          dockerd -H $DOCKER_HOST --userland-proxy=false &
          sleep 5
      - name: "Run deploy tests"
        run: ./tests/deploy/run-deploy-test.sh
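
The Deploy Test and Smoke Test jobs share the same workaround: the executor container has no Docker daemon, so each job starts its own dockerd against a unix socket and points DOCKER_HOST at it. A minimal shell sketch of that pattern, reusing the socket path from the workflow's env block:

```bash
# Editor's sketch of the dockerd-in-executor workaround used by the Deploy Test
# and Smoke Test jobs (socket path taken from the workflow above).
export DOCKER_HOST=unix:///var/run/dind.sock
dockerd -H "$DOCKER_HOST" --userland-proxy=false &   # start a daemon in the background
sleep 5                                              # crude wait for the daemon to come up
docker info                                          # verify the client can reach the daemon
```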


@@ -0,0 +1,39 @@
name: Smoke Test
on:
  pull_request:
    branches: '*'
  push:
    branches:
      - main
      - ci-test
# Needed until we can incorporate docker startup into the executor container
env:
  DOCKER_HOST: unix:///var/run/dind.sock
jobs:
  test:
    name: "Run basic test suite"
    runs-on: ubuntu-latest
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      - name: "Install Python"
        uses: cerc-io/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Generate build version file"
        run: ./scripts/create_build_tag_file.sh
      - name: "Build local shiv package"
        run: ./scripts/build_shiv_package.sh
      - name: Start dockerd # Also needed until we can incorporate into the executor
        run: |
          dockerd -H $DOCKER_HOST --userland-proxy=false &
          sleep 5
      - name: "Run smoke tests"
        run: ./tests/smoke-test/run-smoke-test.sh

.github/workflows/publish.yml (vendored, new file mode 100644, +46 lines)

@@ -0,0 +1,46 @@
name: Publish
on:
  push:
    branches:
      - main
      - publish-test
jobs:
  publish:
    name: "Build and publish"
    runs-on: ubuntu-latest
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      - name: "Get build info"
        id: build-info
        run: |
          build_tag=$(./scripts/create_build_tag_file.sh)
          echo "build-tag=v${build_tag}" >> $GITHUB_OUTPUT
      - name: "Install Python"
        uses: actions/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Build local shiv package"
        id: build
        run: |
          ./scripts/build_shiv_package.sh
          result_code=$?
          echo "package-file=$(ls ./package/*)" >> $GITHUB_OUTPUT
          exit $result_code
      - name: "Stage artifact file"
        run: |
          cp ${{ steps.build.outputs.package-file }} ./laconic-so
      - name: "Create release"
        uses: softprops/action-gh-release@v1
        with:
          tag_name: ${{ steps.build-info.outputs.build-tag }}
          # On the publish test branch, mark our release as a draft
          # Hack using endsWith to workaround Gitea sometimes sending "publish-test" vs "refs/heads/publish-test"
          draft: ${{ endsWith('publish-test', github.ref ) }}
          files: ./laconic-so


@@ -0,0 +1,29 @@
name: Deploy Test
on:
  pull_request:
    branches: '*'
  push:
    branches: '*'
jobs:
  test:
    name: "Run deploy test suite"
    runs-on: ubuntu-latest
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      - name: "Install Python"
        uses: actions/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Generate build version file"
        run: ./scripts/create_build_tag_file.sh
      - name: "Build local shiv package"
        run: ./scripts/build_shiv_package.sh
      - name: "Run deploy tests"
        run: ./tests/deploy/run-deploy-test.sh

29
.github/workflows/test.yml vendored 100644
View File

@ -0,0 +1,29 @@
name: Smoke Test
on:
pull_request:
branches: '*'
push:
branches: '*'
jobs:
test:
name: "Run basic test suite"
runs-on: ubuntu-latest
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
- name: "Install Python"
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: "Run smoke tests"
run: ./tests/smoke-test/run-smoke-test.sh

4
.gitignore vendored
View File

@ -5,4 +5,6 @@ laconic-so
 laconic_stack_orchestrator.egg-info
 __pycache__
 *~
+package
+app/data/build_tag.txt
+build

View File

@ -1,18 +1,25 @@
 # Stack Orchestrator
 Stack Orchestrator allows building and deployment of a Laconic Stack on a single machine with minimial prerequisites. It is a Python3 CLI tool that runs on any OS with Python3 and Docker. The following diagram summarizes the relevant repositories in the Laconic Stack - and the relationship to Stack Orchestrator.
 ![The Stack](/docs/images/laconic-stack.png)
 ## Install
+**To get started quickly** on a fresh Ubuntu instance (e.g, Digital Ocean); [try this script](./scripts/quick-install-ubuntu.sh). **WARNING:** always review scripts prior to running them so that you know what is happening on your machine.
+For any other installation, follow along below and **adapt these instructions based on the specifics of your system.**
 Ensure that the following are already installed:
-- [Python3](https://wiki.python.org/moin/BeginnersGuide/Download): `python3 --version` >= `3.10.8`
+- [Python3](https://wiki.python.org/moin/BeginnersGuide/Download): `python3 --version` >= `3.8.10` (the Python3 shipped in Ubuntu 20+ is good to go)
 - [Docker](https://docs.docker.com/get-docker/): `docker --version` >= `20.10.21`
-- [Docker Compose](https://docs.docker.com/compose/install/): `docker-compose --version` >= `2.13.0`
+- [jq](https://stedolan.github.io/jq/download/): `jq --version` >= `1.5`
-Note: if installing docker-compose via package manager (as opposed to Docker Desktop), you must [install the plugin](https://docs.docker.com/compose/install/linux/#install-the-plugin-manually), e.g., on Linux:
+Note: if installing docker-compose via package manager on Linux (as opposed to Docker Desktop), you must [install the plugin](https://docs.docker.com/compose/install/linux/#install-the-plugin-manually), e.g. :
 ```bash
 mkdir -p ~/.docker/cli-plugins
@ -20,80 +27,38 @@ curl -SL https://github.com/docker/compose/releases/download/v2.11.2/docker-comp
 chmod +x ~/.docker/cli-plugins/docker-compose
 ```
-Next, download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags), into a suitable directory (e.g. `~/bin`):
+Next decide on a directory where you would like to put the stack-orchestrator program. Typically this would be
+a "user" binary directory such as `~/bin` or perhaps `/usr/local/laconic` or possibly just the current working directory.
+Now, having selected that directory, download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
 ```bash
 curl -L -o ~/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
 ```
-Give it permissions:
+Give it execute permissions:
 ```bash
 chmod +x ~/bin/laconic-so
 ```
 Ensure `laconic-so` is on the [`PATH`](https://unix.stackexchange.com/a/26059)
-Verify operation:
+Verify operation (your version will probably be different, just check here that you see some version output and not an error):
 ```
-laconic-so --help
+laconic-so version
-Usage: python -m laconic-so [OPTIONS] COMMAND [ARGS]...
+Version: 1.1.0-7a607c2-202304260513
-  Laconic Stack Orchestrator
-Options:
-  --quiet
-  --verbose
-  --dry-run
-  --local-stack
-  -h, --help  Show this message and exit.
-Commands:
-  build-containers    build the set of containers required for a complete...
-  build-npms          build the set of npm packages required for a...
-  deploy-system       deploy a stack
-  setup-repositories  git clone the set of repositories required to build...
 ```
 ## Usage
-Three sub-commands: `setup-repositories`, `build-containers` and `deploy-system` are generally run in order. The following is a slim example for standing up the `erc20-watcher`. Go further with the [erc20 watcher demo](/app/data/stacks/erc20) and other pieces of the stack, within the [`stacks` directory](/app/data/stacks).
+The various [stacks](/app/data/stacks) each contain instructions for running different stacks based on your use case. For example:
-### Setup Repositories
+- [self-hosted Gitea](/app/data/stacks/build-support)
+- [an Optimism Fixturenet](/app/data/stacks/fixturenet-optimism)
-Clone the set of git repositories necessary to build a system:
+- [laconicd with console and CLI](app/data/stacks/fixturenet-laconic-loaded)
+- [kubo (IPFS)](app/data/stacks/kubo)
-```bash
-laconic-so --verbose setup-repositories --include cerc-io/go-ethereum,cerc-io/ipld-eth-db,cerc-io/ipld-eth-server,cerc-io/watcher-ts
-```
-This will default to `~/cerc` or - if set - the environment variable `CERC_REPO_BASE_DIR`
-### Build Containers
-Build the set of docker container images required to run a system. It takes around 10 minutes to build all the containers from scratch.
-```bash
-laconic-so --verbose build-containers --include cerc/go-ethereum,cerc/go-ethereum-foundry,cerc/ipld-eth-db,cerc/ipld-eth-server,cerc/watcher-erc20
-```
-### Deploy System
-Uses `docker-compose` to deploy a system (with most recently built container images).
-```bash
-laconic-so --verbose deploy-system --include ipld-eth-db,go-ethereum-foundry,ipld-eth-server,watcher-erc20 up
-```
-Check out he GraphQL playground here: [http://localhost:3002/graphql](http://localhost:3002/graphql)
-See the [erc20 watcher demo](/app/data/stacks/erc20) to continue further.
-### Cleanup
-```bash
-laconic-so --verbose deploy-system --include ipld-eth-db,go-ethereum-foundry,ipld-eth-server,watcher-erc20 down
-```
 ## Contributing
@ -103,3 +68,4 @@ See the [CONTRIBUTING.md](/docs/CONTRIBUTING.md) for developer mode install.
 Native aarm64 is _not_ currently supported. x64 emulation on ARM64 macos should work (not yet tested).
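The updated Usage section above points readers to per-stack instructions rather than a single worked example. Each stack's README still tends to follow the same three sub-commands from the earlier text, now scoped with `--stack`. A hedged sketch of that flow (the stack name is illustrative, taken from the list above; consult the chosen stack's own README for exact steps):

```bash
# Illustrative flow only -- individual stacks may add or reorder steps.
laconic-so --stack fixturenet-laconic-loaded setup-repositories
laconic-so --stack fixturenet-laconic-loaded build-containers
laconic-so --stack fixturenet-laconic-loaded deploy-system up
```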

71
app/base.py 100644
View File

@ -0,0 +1,71 @@
# Copyright © 2022, 2023 Cerc
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
import os
from abc import ABC, abstractmethod
from .deploy_system import get_stack_status
def get_stack(config, stack):
if stack == "package-registry":
return package_registry_stack(config, stack)
else:
return base_stack(config, stack)
class base_stack(ABC):
def __init__(self, config, stack):
self.config = config
self.stack = stack
@abstractmethod
def ensure_available(self):
pass
@abstractmethod
def get_url(self):
pass
class package_registry_stack(base_stack):
def ensure_available(self):
self.url = "<no registry url set>"
# Check if we were given an external registry URL
url_from_environment = os.environ.get("CERC_NPM_REGISTRY_URL")
if url_from_environment:
if self.config.verbose:
print(f"Using package registry url from CERC_NPM_REGISTRY_URL: {url_from_environment}")
self.url = url_from_environment
else:
# Otherwise we expect to use the local package-registry stack
# First check if the stack is up
registry_running = get_stack_status(self.config, "package-registry")
if registry_running:
# If it is available, get its mapped port and construct its URL
if self.config.debug:
print("Found local package registry stack is up")
# TODO: get url from deploy-stack
self.url = "http://gitea.local:3000/api/packages/cerc-io/npm/"
else:
# If not, print a message about how to start it and return fail to the caller
print("ERROR: The package-registry stack is not running, and no external registry specified with CERC_NPM_REGISTRY_URL")
print("ERROR: Start the local package registry with: laconic-so --stack package-registry deploy-system up")
return False
return True
def get_url(self):
return self.url
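The `package_registry_stack` class above resolves the npm registry in one of two ways: from the `CERC_NPM_REGISTRY_URL` environment variable, or from a locally running `package-registry` stack. A hedged sketch of the two ways a user can satisfy that check before running `build-npms`, using the variable and command named in the code above (the external URL is an example value):

```bash
# Option 1: point at an external registry (URL is an example, not a real endpoint).
export CERC_NPM_REGISTRY_URL=https://my-gitea.example.com/api/packages/cerc-io/npm/

# Option 2: start the local package-registry stack, as the error message above suggests.
laconic-so --stack package-registry deploy-system up
```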

View File

@ -36,13 +36,16 @@ from .util import include_exclude_check, get_parsed_stack_config
 @click.command()
 @click.option('--include', help="only build these containers")
 @click.option('--exclude', help="don\'t build these containers")
+@click.option("--force-rebuild", is_flag=True, default=False, help="Override dependency checking -- always rebuild")
+@click.option("--extra-build-args", help="Supply extra arguments to build")
 @click.pass_context
-def command(ctx, include, exclude):
+def command(ctx, include, exclude, force_rebuild, extra_build_args):
     '''build the set of containers required for a complete stack'''
     quiet = ctx.obj.quiet
     verbose = ctx.obj.verbose
     dry_run = ctx.obj.dry_run
+    debug = ctx.obj.debug
     local_stack = ctx.obj.local_stack
     stack = ctx.obj.stack
     continue_on_error = ctx.obj.continue_on_error
@ -81,10 +84,20 @@ def command(ctx, include, exclude):
     # TODO: make this configurable
     container_build_env = {
-        "CERC_NPM_URL": "http://gitea.local:3000/api/packages/cerc-io/npm/",
-        "CERC_NPM_AUTH_TOKEN": config("CERC_NPM_AUTH_TOKEN", default="<token-not-supplied>"),
-        "CERC_REPO_BASE_DIR": dev_root_path
+        "CERC_NPM_REGISTRY_URL": config("CERC_NPM_REGISTRY_URL", default="http://gitea.local:3000/api/packages/cerc-io/npm/"),
+        "CERC_NPM_AUTH_TOKEN": config("CERC_NPM_AUTH_TOKEN", default=""),
+        "CERC_REPO_BASE_DIR": dev_root_path,
+        "CERC_CONTAINER_BASE_DIR": container_build_dir,
+        "CERC_HOST_UID": f"{os.getuid()}",
+        "CERC_HOST_GID": f"{os.getgid()}",
+        "DOCKER_BUILDKIT": config("DOCKER_BUILDKIT", default="0")
     }
+    container_build_env.update({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
+    container_build_env.update({"CERC_FORCE_REBUILD": "true"} if force_rebuild else {})
+    container_build_env.update({"CERC_CONTAINER_EXTRA_BUILD_ARGS": extra_build_args} if extra_build_args else {})
+    docker_host_env = os.getenv("DOCKER_HOST")
+    if docker_host_env:
+        container_build_env.update({"DOCKER_HOST": docker_host_env})
     def process_container(container):
         if not quiet:
@ -102,11 +115,11 @@ def command(ctx, include, exclude):
         # TODO: make this less of a hack -- should be specified in some metadata somewhere
         # Check if we have a repo for this container. If not, set the context dir to the container-build subdir
         repo_full_path = os.path.join(dev_root_path, repo_dir)
-        repo_dir_or_build_dir = repo_dir if os.path.exists(repo_full_path) else build_dir
+        repo_dir_or_build_dir = repo_full_path if os.path.exists(repo_full_path) else build_dir
-        build_command = os.path.join(container_build_dir, "default-build.sh") + f" {container} {repo_dir_or_build_dir}"
+        build_command = os.path.join(container_build_dir, "default-build.sh") + f" {container}:local {repo_dir_or_build_dir}"
         if not dry_run:
             if verbose:
-                print(f"Executing: {build_command}")
+                print(f"Executing: {build_command} with environment: {container_build_env}")
             build_result = subprocess.run(build_command, shell=True, env=container_build_env)
             if verbose:
                 print(f"Return code is: {build_result.returncode}")

View File

@ -20,17 +20,23 @@
 import os
 import sys
+from shutil import rmtree, copytree
 from decouple import config
 import click
 import importlib.resources
 from python_on_whales import docker, DockerException
+from .base import get_stack
 from .util import include_exclude_check, get_parsed_stack_config
+builder_js_image_name = "cerc/builder-js:local"
 @click.command()
 @click.option('--include', help="only build these packages")
 @click.option('--exclude', help="don\'t build these packages")
+@click.option("--force-rebuild", is_flag=True, default=False, help="Override existing target package version check -- force rebuild")
+@click.option("--extra-build-args", help="Supply extra arguments to build")
 @click.pass_context
-def command(ctx, include, exclude):
+def command(ctx, include, exclude, force_rebuild, extra_build_args):
     '''build the set of npm packages required for a complete stack'''
     quiet = ctx.obj.quiet
@ -41,17 +47,38 @@ def command(ctx, include, exclude):
     stack = ctx.obj.stack
     continue_on_error = ctx.obj.continue_on_error
+    _ensure_prerequisites()
+    # build-npms depends on having access to a writable package registry
+    # so we check here that it is available
+    package_registry_stack = get_stack(ctx.obj, "package-registry")
+    registry_available = package_registry_stack.ensure_available()
+    if not registry_available:
+        print("FATAL: no npm registry available for build-npms command")
+        sys.exit(1)
+    npm_registry_url = package_registry_stack.get_url()
+    npm_registry_url_token = config("CERC_NPM_AUTH_TOKEN", default=None)
+    if not npm_registry_url_token:
+        print("FATAL: CERC_NPM_AUTH_TOKEN is not defined")
+        sys.exit(1)
     if local_stack:
         dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
         print(f'Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: {dev_root_path}')
     else:
         dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))
-    if not quiet:
+    build_root_path = os.path.join(dev_root_path, "build-trees")
+    if verbose:
         print(f'Dev Root is: {dev_root_path}')
     if not os.path.isdir(dev_root_path):
         print('Dev root directory doesn\'t exist, creating')
+        os.makedirs(dev_root_path)
+    if not os.path.isdir(dev_root_path):
+        print('Build root directory doesn\'t exist, creating')
+        os.makedirs(build_root_path)
     # See: https://stackoverflow.com/a/20885799/1701505
     from . import data
@ -74,21 +101,42 @@ def command(ctx, include, exclude):
             print(f"Building npm package: {package}")
         repo_dir = package
         repo_full_path = os.path.join(dev_root_path, repo_dir)
-        # TODO: make the npm registry url configurable.
-        build_command = ["sh", "-c", "cd /workspace && build-npm-package-local-dependencies.sh http://gitea.local:3000/api/packages/cerc-io/npm/"]
+        # Copy the repo and build that to avoid propagating JS tooling file changes back into the cloned repo
+        repo_copy_path = os.path.join(build_root_path, repo_dir)
+        # First delete any old build tree
+        if os.path.isdir(repo_copy_path):
+            if verbose:
+                print(f"Deleting old build tree: {repo_copy_path}")
+            if not dry_run:
+                rmtree(repo_copy_path)
+        # Now copy the repo into the build tree location
+        if verbose:
+            print(f"Copying build tree from: {repo_full_path} to: {repo_copy_path}")
+        if not dry_run:
+            copytree(repo_full_path, repo_copy_path)
+        build_command = ["sh", "-c", f"cd /workspace && build-npm-package-local-dependencies.sh {npm_registry_url}"]
         if not dry_run:
             if verbose:
                 print(f"Executing: {build_command}")
-            envs = {"CERC_NPM_AUTH_TOKEN": os.environ["CERC_NPM_AUTH_TOKEN"]} | ({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
+            # Originally we used the PEP 584 merge operator:
+            # envs = {"CERC_NPM_AUTH_TOKEN": npm_registry_url_token} | ({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
+            # but that isn't available in Python 3.8 (default in Ubuntu 20) so for now we use dict.update:
+            envs = {"CERC_NPM_AUTH_TOKEN": npm_registry_url_token,
+                    "LACONIC_HOSTED_CONFIG_FILE": "config-hosted.yml"  # Convention used by our web app packages
+                    }
+            envs.update({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
+            envs.update({"CERC_FORCE_REBUILD": "true"} if force_rebuild else {})
+            envs.update({"CERC_CONTAINER_EXTRA_BUILD_ARGS": extra_build_args} if extra_build_args else {})
             try:
-                docker.run("cerc/builder-js",
+                docker.run(builder_js_image_name,
                            remove=True,
                            interactive=True,
                            tty=True,
                            user=f"{os.getuid()}:{os.getgid()}",
                            envs=envs,
+                           # TODO: detect this host name in npm_registry_url rather than hard-wiring it
                            add_hosts=[("gitea.local", "host-gateway")],
-                           volumes=[(repo_full_path, "/workspace")],
+                           volumes=[(repo_copy_path, "/workspace")],
                            command=build_command
                            )
             # Note that although the docs say that build_result should contain
@ -111,3 +159,13 @@ def command(ctx, include, exclude):
     else:
         if verbose:
             print(f"Excluding: {package}")
+def _ensure_prerequisites():
+    # Check that the builder-js container is available and
+    # Tell the user how to build it if not
+    images = docker.image.list(builder_js_image_name)
+    if len(images) == 0:
+        print(f"FATAL: builder image: {builder_js_image_name} is required but was not found")
+        print("Please run this command to create it: laconic-so --stack build-support build-containers")
+        sys.exit(1)
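With `_ensure_prerequisites` and the registry/token checks above, `build-npms` now fails fast unless the `cerc/builder-js:local` image exists and `CERC_NPM_AUTH_TOKEN` is set. A hedged pre-flight sketch assembled from the command and variable names printed by the code above (the token value is a placeholder):

```bash
# Build the builder image via the build-support stack, supply a token, then build packages.
laconic-so --stack build-support build-containers
export CERC_NPM_AUTH_TOKEN=<gitea-access-token>   # placeholder, not a real token
laconic-so --verbose build-npms --force-rebuild
```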

View File

@ -13,6 +13,8 @@ services:
   grafana:
     restart: always
     image: grafana/grafana
+    environment:
+      - GF_SECURITY_ADMIN_PASSWORD=changeme6325
     volumes:
       - ../config/fixturenet-eth-metrics/grafana/etc/provisioning/dashboards:/etc/grafana/provisioning/dashboards
       - ../config/fixturenet-eth-metrics/grafana/etc/provisioning/datasources:/etc/grafana/provisioning/datasources

View File

@ -2,28 +2,34 @@ version: '3.7'
 services:
   fixturenet-eth-bootnode-geth:
+    restart: always
     hostname: fixturenet-eth-bootnode-geth
     env_file:
       - ../config/fixturenet-eth/fixturenet-eth.env
     environment:
       RUN_BOOTNODE: "true"
     image: cerc/fixturenet-eth-geth:local
+    volumes:
+      - fixturenet_eth_bootnode_geth_data:/root/ethdata
     ports:
       - "9898"
       - "30303"
   fixturenet-eth-geth-1:
+    restart: always
     hostname: fixturenet-eth-geth-1
     cap_add:
       - SYS_PTRACE
     environment:
       CERC_REMOTE_DEBUG: "true"
-      CERC_RUN_STATEDIFF: "detect"
+      CERC_RUN_STATEDIFF: ${CERC_RUN_STATEDIFF:-detect}
       CERC_STATEDIFF_DB_NODE_ID: 1
       CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
     env_file:
       - ../config/fixturenet-eth/fixturenet-eth.env
     image: cerc/fixturenet-eth-geth:local
+    volumes:
+      - fixturenet_eth_geth_1_data:/root/ethdata
     healthcheck:
       test: ["CMD", "nc", "-v", "localhost", "8545"]
       interval: 30s
@ -38,6 +44,7 @@ services:
       - "6060"
   fixturenet-eth-geth-2:
+    restart: always
     hostname: fixturenet-eth-geth-2
     healthcheck:
       test: ["CMD", "nc", "-v", "localhost", "8545"]
@ -45,19 +52,25 @@ services:
       timeout: 10s
       retries: 10
       start_period: 3s
+    environment:
+      CERC_KEEP_RUNNING_AFTER_GETH_EXIT: "true"
     env_file:
       - ../config/fixturenet-eth/fixturenet-eth.env
     image: cerc/fixturenet-eth-geth:local
     depends_on:
       - fixturenet-eth-bootnode-geth
+    volumes:
+      - fixturenet_eth_geth_2_data:/root/ethdata
   fixturenet-eth-bootnode-lighthouse:
+    restart: always
     hostname: fixturenet-eth-bootnode-lighthouse
     environment:
       RUN_BOOTNODE: "true"
     image: cerc/fixturenet-eth-lighthouse:local
   fixturenet-eth-lighthouse-1:
+    restart: always
     hostname: fixturenet-eth-lighthouse-1
     healthcheck:
       test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8001/eth/v2/beacon/blocks/head"]
@ -72,6 +85,8 @@ services:
       ETH1_ENDPOINT: "http://fixturenet-eth-geth-1:8545"
       EXECUTION_ENDPOINT: "http://fixturenet-eth-geth-1:8551"
     image: cerc/fixturenet-eth-lighthouse:local
+    volumes:
+      - fixturenet_eth_lighthouse_1_data:/opt/testnet/build/cl
     depends_on:
       fixturenet-eth-bootnode-lighthouse:
         condition: service_started
@ -81,6 +96,7 @@ services:
       - "8001"
   fixturenet-eth-lighthouse-2:
+    restart: always
     hostname: fixturenet-eth-lighthouse-2
     healthcheck:
       test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8001/eth/v2/beacon/blocks/head"]
@ -96,8 +112,17 @@ services:
       EXECUTION_ENDPOINT: "http://fixturenet-eth-geth-2:8551"
       LIGHTHOUSE_GENESIS_STATE_URL: "http://fixturenet-eth-lighthouse-1:8001/eth/v2/debug/beacon/states/0"
     image: cerc/fixturenet-eth-lighthouse:local
+    volumes:
+      - fixturenet_eth_lighthouse_2_data:/opt/testnet/build/cl
     depends_on:
       fixturenet-eth-bootnode-lighthouse:
         condition: service_started
       fixturenet-eth-geth-2:
         condition: service_healthy
+volumes:
+  fixturenet_eth_bootnode_geth_data:
+  fixturenet_eth_geth_1_data:
+  fixturenet_eth_geth_2_data:
+  fixturenet_eth_lighthouse_1_data:
+  fixturenet_eth_lighthouse_2_data:

View File

@ -0,0 +1,8 @@
services:
laconic-console:
restart: unless-stopped
image: cerc/laconic-console-host:local
environment:
- LACONIC_HOSTED_ENDPOINT=${LACONIC_HOSTED_ENDPOINT:-http://localhost}
ports:
- "80"

View File

@ -1,21 +1,27 @@
-version: "3.2"
 services:
   laconicd:
     restart: unless-stopped
     image: cerc/laconicd:local
     command: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
     volumes:
-      # TODO: look at folding this script into the container
+      # The cosmos-sdk node's database directory:
+      - laconicd-data:/root/.laconicd/data
+      # TODO: look at folding these scripts into the container
       - ../config/fixturenet-laconicd/create-fixturenet.sh:/docker-entrypoint-scripts.d/create-fixturenet.sh
+      - ../config/fixturenet-laconicd/export-mykey.sh:/docker-entrypoint-scripts.d/export-mykey.sh
+      - ../config/fixturenet-laconicd/export-myaddress.sh:/docker-entrypoint-scripts.d/export-myaddress.sh
     # TODO: determine which of the ports below is really needed
     ports:
       - "6060"
       - "26657"
       - "26656"
-      - "9473"
+      - "9473:9473"
       - "8545"
       - "8546"
       - "9090"
       - "9091"
       - "1317"
+  cli:
+    image: cerc/laconic-registry-cli:local
+    volumes:
+      - ../config/fixturenet-laconicd/registry-cli-config-template.yml:/registry-cli-config-template.yml

View File

@ -0,0 +1,68 @@
version: "3.8"
services:
lotus-miner:
hostname: lotus-miner
env_file:
- ../config/fixturenet-lotus/lotus-env.env
image: cerc/lotus:local
volumes:
- ../config/fixturenet-lotus/setup-miner.sh:/docker-entrypoint-scripts.d/setup-miner.sh
- ../config/fixturenet-lotus/genesis/devgen.car:/devgen.car
- $HOME/stack-orchestrator/app/data/config/fixturenet-lotus/genesis/.genesis-sectors:/root/.genesis-sectors
- lotus-shared:/root/.lotus-shared
healthcheck:
# test: ["CMD-SHELL", "grep 'started ChainNotify channel' /var/log/lotus.log"]
# test: ["CMD-SHELL", "[ -f /root/.lotus-shared/miner.addr ]"]
test: ["CMD-SHELL", "[ -d /root/.lotus-miner-local-net ]"]
interval: 10s
timeout: 10s
retries: 10
start_period: 60s
entrypoint: ["sh", "/docker-entrypoint-scripts.d/setup-miner.sh"]
ports:
- "1234"
- "2345"
- "3456"
- "1777"
lotus-node-1:
hostname: lotus-node-1
env_file:
- ../config/fixturenet-lotus/lotus-env.env
image: cerc/lotus:local
volumes:
- ../config/fixturenet-lotus/setup-node.sh:/docker-entrypoint-scripts.d/setup-node.sh
- ../config/fixturenet-lotus/genesis/devgen.car:/devgen.car
- lotus-shared:/root/.lotus-shared
depends_on:
lotus-miner:
condition: service_healthy
entrypoint: ["sh", "/docker-entrypoint-scripts.d/setup-node.sh"]
ports:
- "1234"
- "2345"
- "3456"
- "1777"
lotus-node-2:
hostname: lotus-node-2
env_file:
- ../config/fixturenet-lotus/lotus-env.env
image: cerc/lotus:local
volumes:
- ../config/fixturenet-lotus/setup-node.sh:/docker-entrypoint-scripts.d/setup-node.sh
- ../config/fixturenet-lotus/genesis/devgen.car:/devgen.car
- lotus-shared:/root/.lotus-shared
depends_on:
lotus-miner:
condition: service_healthy
entrypoint: ["sh", "/docker-entrypoint-scripts.d/setup-node.sh"]
ports:
- "1234"
- "2345"
- "3456"
- "1777"
volumes:
lotus-shared:

View File

@ -0,0 +1,165 @@
version: '3.7'
services:
# Generates and funds the accounts required when setting up the L2 chain (outputs to volume l2_accounts)
# Creates / updates the configuration for L1 contracts deployment
# Deploys the L1 smart contracts (outputs to volume l1_deployment)
fixturenet-optimism-contracts:
restart: on-failure
hostname: fixturenet-optimism-contracts
image: cerc/optimism-contracts:local
env_file:
- ../config/fixturenet-optimism/l1-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_L1_CHAIN_ID: ${CERC_L1_CHAIN_ID}
CERC_L1_RPC: ${CERC_L1_RPC}
CERC_L1_ACCOUNTS_CSV_URL: ${CERC_L1_ACCOUNTS_CSV_URL}
CERC_L1_ADDRESS: ${CERC_L1_ADDRESS}
CERC_L1_PRIV_KEY: ${CERC_L1_PRIV_KEY}
CERC_L1_ADDRESS_2: ${CERC_L1_ADDRESS_2}
CERC_L1_PRIV_KEY_2: ${CERC_L1_PRIV_KEY_2}
# Waits for L1 endpoint to be up before running the script
command: |
"./wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- ./run.sh"
volumes:
- ../config/wait-for-it.sh:/app/packages/contracts-bedrock/wait-for-it.sh
- ../container-build/cerc-optimism-contracts/hardhat-tasks/verify-contract-deployment.ts:/app/packages/contracts-bedrock/tasks/verify-contract-deployment.ts
- ../container-build/cerc-optimism-contracts/hardhat-tasks/rekey-json.ts:/app/packages/contracts-bedrock/tasks/rekey-json.ts
- ../container-build/cerc-optimism-contracts/hardhat-tasks/send-balance.ts:/app/packages/contracts-bedrock/tasks/send-balance.ts
- ../config/fixturenet-optimism/optimism-contracts/update-config.js:/app/packages/contracts-bedrock/update-config.js
- ../config/fixturenet-optimism/optimism-contracts/run.sh:/app/packages/contracts-bedrock/run.sh
- l2_accounts:/l2-accounts
- l1_deployment:/app/packages/contracts-bedrock
extra_hosts:
- "host.docker.internal:host-gateway"
# Generates the config files required for L2 (outputs to volume l2_config)
op-node-l2-config-gen:
restart: on-failure
image: cerc/optimism-op-node:local
depends_on:
fixturenet-optimism-contracts:
condition: service_completed_successfully
env_file:
- ../config/fixturenet-optimism/l1-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_L1_RPC: ${CERC_L1_RPC}
volumes:
- ../config/fixturenet-optimism/generate-l2-config.sh:/app/generate-l2-config.sh
- l1_deployment:/contracts-bedrock:ro
- l2_config:/app
command: ["sh", "/app/generate-l2-config.sh"]
extra_hosts:
- "host.docker.internal:host-gateway"
# Initializes and runs the L2 execution client (outputs to volume l2_geth_data)
op-geth:
restart: always
image: cerc/optimism-l2geth:local
depends_on:
op-node-l2-config-gen:
condition: service_started
volumes:
- ../config/fixturenet-optimism/run-op-geth.sh:/run-op-geth.sh
- l2_config:/op-node:ro
- l2_accounts:/l2-accounts:ro
- l2_geth_data:/datadir
entrypoint: "sh"
command: "/run-op-geth.sh"
ports:
- "0.0.0.0:8545:8545"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost:8545"]
interval: 30s
timeout: 10s
retries: 10
start_period: 10s
# Runs the L2 consensus client (Sequencer node)
op-node:
restart: always
image: cerc/optimism-op-node:local
depends_on:
op-geth:
condition: service_healthy
env_file:
- ../config/fixturenet-optimism/l1-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_L1_RPC: ${CERC_L1_RPC}
volumes:
- ../config/fixturenet-optimism/run-op-node.sh:/app/run-op-node.sh
- l2_config:/op-node-data:ro
- l2_accounts:/l2-accounts:ro
command: ["sh", "/app/run-op-node.sh"]
ports:
- "0.0.0.0:8547:8547"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost:8547"]
interval: 30s
timeout: 10s
retries: 10
start_period: 10s
extra_hosts:
- "host.docker.internal:host-gateway"
# Runs the batcher (takes transactions from the Sequencer and publishes them to L1)
op-batcher:
restart: always
image: cerc/optimism-op-batcher:local
depends_on:
op-node:
condition: service_healthy
op-geth:
condition: service_healthy
env_file:
- ../config/fixturenet-optimism/l1-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_L1_RPC: ${CERC_L1_RPC}
volumes:
- ../config/wait-for-it.sh:/wait-for-it.sh
- ../config/fixturenet-optimism/run-op-batcher.sh:/run-op-batcher.sh
- l2_accounts:/l2-accounts:ro
entrypoint: ["sh", "-c"]
# Waits for L1 endpoint to be up before running the batcher
command: |
"/wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- /run-op-batcher.sh"
ports:
- "127.0.0.1:8548:8548"
extra_hosts:
- "host.docker.internal:host-gateway"
# Runs the proposer (periodically submits new state roots to L1)
op-proposer:
restart: always
image: cerc/optimism-op-proposer:local
depends_on:
op-node:
condition: service_healthy
env_file:
- ../config/fixturenet-optimism/l1-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_L1_RPC: ${CERC_L1_RPC}
volumes:
- ../config/wait-for-it.sh:/wait-for-it.sh
- ../config/fixturenet-optimism/run-op-proposer.sh:/run-op-proposer.sh
- l1_deployment:/contracts-bedrock:ro
- l2_accounts:/l2-accounts:ro
entrypoint: ["sh", "-c"]
# Waits for L1 endpoint to be up before running the proposer
command: |
"/wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- /run-op-proposer.sh"
ports:
- "127.0.0.1:8560:8560"
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
l1_deployment:
l2_accounts:
l2_config:
l2_geth_data:

View File

@ -0,0 +1,129 @@
services:
fixturenet-eth-bootnode-geth:
restart: always
hostname: fixturenet-eth-bootnode-geth
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
environment:
RUN_BOOTNODE: "true"
image: cerc/fixturenet-plugeth-plugeth:local
volumes:
- fixturenet_plugeth_bootnode_geth_data:/root/ethdata
- ../config/fixturenet-plugeth/plugins:/root/ethdata/plugins
ports:
- "9898"
- "30303"
fixturenet-eth-geth-1:
restart: always
hostname: fixturenet-eth-geth-1
cap_add:
- SYS_PTRACE
environment:
CERC_REMOTE_DEBUG: "true"
CERC_RUN_STATEDIFF: "detect"
CERC_STATEDIFF_DB_NODE_ID: 1
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
image: cerc/fixturenet-plugeth-plugeth:local
volumes:
- fixturenet_plugeth_geth_1_data:/root/ethdata
- ../config/fixturenet-plugeth/plugins:/root/ethdata/plugins
healthcheck:
test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8545/"]
interval: 30s
timeout: 10s
retries: 10
start_period: 3s
depends_on:
- fixturenet-eth-bootnode-geth
ports:
- "8545"
- "40000"
- "6060"
fixturenet-eth-geth-2:
restart: always
hostname: fixturenet-eth-geth-2
healthcheck:
test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8545/"]
interval: 30s
timeout: 10s
retries: 10
start_period: 3s
environment:
CERC_KEEP_RUNNING_AFTER_GETH_EXIT: "true"
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
image: cerc/fixturenet-plugeth-plugeth:local
depends_on:
- fixturenet-eth-bootnode-geth
volumes:
- fixturenet_plugeth_geth_2_data:/root/ethdata
- ../config/fixturenet-plugeth/plugins:/root/ethdata/plugins
fixturenet-eth-bootnode-lighthouse:
restart: always
hostname: fixturenet-eth-bootnode-lighthouse
environment:
RUN_BOOTNODE: "true"
image: cerc/fixturenet-plugeth-lighthouse:local
fixturenet-eth-lighthouse-1:
restart: always
hostname: fixturenet-eth-lighthouse-1
healthcheck:
test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8001/eth/v2/beacon/blocks/head"]
interval: 30s
timeout: 10s
retries: 10
start_period: 30s
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
environment:
NODE_NUMBER: "1"
ETH1_ENDPOINT: "http://fixturenet-eth-geth-1:8545"
EXECUTION_ENDPOINT: "http://fixturenet-eth-geth-1:8551"
image: cerc/fixturenet-plugeth-lighthouse:local
volumes:
- fixturenet_plugeth_lighthouse_1_data:/opt/testnet/build/cl
depends_on:
fixturenet-eth-bootnode-lighthouse:
condition: service_started
fixturenet-eth-geth-1:
condition: service_healthy
ports:
- "8001"
fixturenet-eth-lighthouse-2:
restart: always
hostname: fixturenet-eth-lighthouse-2
healthcheck:
test: ["CMD", "wget", "--tries=1", "--connect-timeout=1", "--quiet", "-O", "-", "http://localhost:8001/eth/v2/beacon/blocks/head"]
interval: 30s
timeout: 10s
retries: 10
start_period: 30s
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
environment:
NODE_NUMBER: "2"
ETH1_ENDPOINT: "http://fixturenet-eth-geth-2:8545"
EXECUTION_ENDPOINT: "http://fixturenet-eth-geth-2:8551"
LIGHTHOUSE_GENESIS_STATE_URL: "http://fixturenet-eth-lighthouse-1:8001/eth/v2/debug/beacon/states/0"
image: cerc/fixturenet-plugeth-lighthouse:local
volumes:
- fixturenet_plugeth_lighthouse_2_data:/opt/testnet/build/cl
depends_on:
fixturenet-eth-bootnode-lighthouse:
condition: service_started
fixturenet-eth-geth-2:
condition: service_healthy
volumes:
fixturenet_plugeth_bootnode_geth_data:
fixturenet_plugeth_geth_1_data:
fixturenet_plugeth_geth_2_data:
fixturenet_plugeth_lighthouse_1_data:
fixturenet_plugeth_lighthouse_2_data:

View File

@ -0,0 +1,18 @@
version: "3.2"
services:
pocket:
restart: unless-stopped
image: cerc/pocket:local
# command: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
entrypoint: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
volumes:
# TODO: look at folding these scripts into the container
- ../config/fixturenet-pocket/create-fixturenet.sh:/docker-entrypoint-scripts.d/create-fixturenet.sh
- ../config/fixturenet-pocket/chains.json:/home/app/pocket-configs/chains.json
- ../config/fixturenet-pocket/genesis.json:/home/app/pocket-configs/genesis.json
ports:
- "8081:8081" # pocket relay rpc
networks:
net1:
name: fixturenet-eth_default
external: true

View File

@ -0,0 +1,9 @@
# Add-on pod to include foundry tooling within a fixturenet
services:
foundry:
restart: always
image: cerc/foundry:local
command: ["while :; do sleep 600; done"]
volumes:
- ../config/foundry/foundry.toml:/foundry.toml
- ./foundry/workspace:/workspace

View File

@ -8,7 +8,7 @@ services:
         condition: service_healthy
     image: cerc/go-ethereum-foundry:local
     healthcheck:
-      test: ["CMD", "nc", "-v", "localhost", "8545"]
+      test: ["CMD", "nc", "-vz", "localhost", "8545"]
       interval: 30s
       timeout: 3s
       retries: 10

View File

@ -7,11 +7,9 @@ services:
         condition: service_healthy
     image: cerc/ipld-eth-server:local
     environment:
-      IPLD_SERVER_GRAPHQL: "true"
-      IPLD_POSTGRAPHILEPATH: http://graphql:5000
-      ETH_SERVER_HTTPPATH: 0.0.0.0:8081
-      ETH_SERVER_GRAPHQL: "true"
-      ETH_SERVER_GRAPHQLPATH: 0.0.0.0:8082
+      SERVER_HTTP_PATH: 0.0.0.0:8081
+      SERVER_GRAPHQL: "true"
+      SERVER_GRAPHQLPATH: 0.0.0.0:8082
       VDB_COMMAND: "serve"
       ETH_CHAIN_CONFIG: "/tmp/chain.json"
       DATABASE_NAME: cerc_testing

View File

@ -0,0 +1,13 @@
version: "3.2"
# See: https://docs.ipfs.tech/install/run-ipfs-inside-docker/#set-up
services:
ipfs:
image: ipfs/kubo:master-2023-02-20-714a968
restart: always
volumes:
- ./ipfs/import:/import
- ./ipfs/data:/data/ipfs
ports:
- "0.0.0.0:8080:8080"
- "0.0.0.0:4001:4001"
- "0.0.0.0:5001:5001"

View File

@ -0,0 +1,70 @@
version: '3.2'
services:
# Builds and serves the MobyMask react-app
mobymask-app:
restart: unless-stopped
image: cerc/mobymask-ui:local
env_file:
- ../config/watcher-mobymask-v2/mobymask-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_CHAIN_ID: ${CERC_CHAIN_ID}
CERC_DEPLOYED_CONTRACT: ${CERC_DEPLOYED_CONTRACT}
CERC_APP_WATCHER_URL: ${CERC_APP_WATCHER_URL}
CERC_RELAY_NODES: ${CERC_RELAY_NODES}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
CERC_BUILD_DIR: "@cerc-io/mobymask-ui/build"
working_dir: /scripts
command: ["sh", "mobymask-app-start.sh"]
volumes:
- ../config/wait-for-it.sh:/scripts/wait-for-it.sh
- ../config/watcher-mobymask-v2/mobymask-app-start.sh:/scripts/mobymask-app-start.sh
- peers_ids:/peers
- mobymask_deployment:/server
ports:
- "0.0.0.0:3002:80"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "80"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
extra_hosts:
- "host.docker.internal:host-gateway"
# Builds and serves the LXDAO version of MobyMask react-app
lxdao-mobymask-app:
restart: unless-stopped
image: cerc/mobymask-ui:local
env_file:
- ../config/watcher-mobymask-v2/mobymask-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_CHAIN_ID: ${CERC_CHAIN_ID}
CERC_DEPLOYED_CONTRACT: ${CERC_DEPLOYED_CONTRACT}
CERC_APP_WATCHER_URL: ${CERC_APP_WATCHER_URL}
CERC_RELAY_NODES: ${CERC_RELAY_NODES}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
CERC_BUILD_DIR: "@cerc-io/mobymask-ui-lxdao/build"
working_dir: /scripts
command: ["sh", "mobymask-app-start.sh"]
volumes:
- ../config/wait-for-it.sh:/scripts/wait-for-it.sh
- ../config/watcher-mobymask-v2/mobymask-app-start.sh:/scripts/mobymask-app-start.sh
- peers_ids:/peers
- mobymask_deployment:/server
ports:
- "0.0.0.0:3004:80"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "80"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
mobymask_deployment:
peers_ids:

View File

@ -0,0 +1,32 @@
version: '3.2'
services:
# Builds and serves the peer-test react-app
peer-test-app:
restart: unless-stopped
image: cerc/react-peer:local
working_dir: /scripts
env_file:
- ../config/watcher-mobymask-v2/mobymask-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_RELAY_NODES: ${CERC_RELAY_NODES}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
command: ["sh", "test-app-start.sh"]
volumes:
- ../config/wait-for-it.sh:/scripts/wait-for-it.sh
- ../config/watcher-mobymask-v2/test-app-start.sh:/scripts/test-app-start.sh
- peers_ids:/peers
ports:
- "0.0.0.0:3003:80"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "80"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
peers_ids:

View File

@ -1,7 +1,13 @@
+version: "3.2"
 services:
   test:
     image: cerc/test-container:local
     restart: always
+    environment:
+      CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
+    volumes:
+      - test-data:/var
     ports:
       - "80"
+volumes:
+  test-data:

View File

@ -0,0 +1,304 @@
version: '3.2'
services:
# Starts the PostgreSQL database for watchers
watcher-db:
restart: unless-stopped
image: postgres:14-alpine
environment:
- POSTGRES_USER=vdbm
- POSTGRES_MULTIPLE_DATABASES=azimuth-watcher,azimuth-watcher-job-queue,censures-watcher,censures-watcher-job-queue,claims-watcher,claims-watcher-job-queue,conditional-star-release-watcher,conditional-star-release-watcher-job-queue,delegated-sending-watcher,delegated-sending-watcher-job-queue,ecliptic-watcher,ecliptic-watcher-job-queue,linear-star-release-watcher,linear-star-release-watcher-job-queue,polls-watcher,polls-watcher-job-queue
- POSTGRES_EXTENSION=azimuth-watcher-job-queue:pgcrypto,censures-watcher-job-queue:pgcrypto,claims-watcher-job-queue:pgcrypto,conditional-star-release-watcher-job-queue:pgcrypto,delegated-sending-watcher-job-queue:pgcrypto,ecliptic-watcher-job-queue:pgcrypto,linear-star-release-watcher-job-queue:pgcrypto,polls-watcher-job-queue:pgcrypto,
- POSTGRES_PASSWORD=password
volumes:
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
- watcher_db_data:/var/lib/postgresql/data
ports:
- "0.0.0.0:15432:5432"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "5432"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
# Starts the azimuth-watcher server
azimuth-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/azimuth-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/azimuth-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/azimuth-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/azimuth-watcher/start-server.sh
ports:
- "3001"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3001"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the censures-watcher server
censures-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/censures-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/censures-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/censures-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/censures-watcher/start-server.sh
ports:
- "3002"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3002"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the claims-watcher server
claims-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/claims-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/claims-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/claims-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/claims-watcher/start-server.sh
ports:
- "3003"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3003"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the conditional-star-release-watcher server
conditional-star-release-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/conditional-star-release-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/conditional-star-release-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/conditional-star-release-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/conditional-star-release-watcher/start-server.sh
ports:
- "3004"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3004"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the delegated-sending-watcher server
delegated-sending-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/delegated-sending-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/delegated-sending-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/delegated-sending-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/delegated-sending-watcher/start-server.sh
ports:
- "3005"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3005"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the ecliptic-watcher server
ecliptic-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/ecliptic-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/ecliptic-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/ecliptic-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/ecliptic-watcher/start-server.sh
ports:
- "3006"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3006"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the linear-star-release-watcher server
linear-star-release-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/linear-star-release-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/linear-star-release-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/linear-star-release-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/linear-star-release-watcher/start-server.sh
ports:
- "3007"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3007"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the polls-watcher server
polls-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/polls-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/polls-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/polls-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/polls-watcher/start-server.sh
ports:
- "3008"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3008"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the gateway-server for proxying queries
gateway-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
azimuth-watcher-server:
condition: service_healthy
censures-watcher-server:
condition: service_healthy
claims-watcher-server:
condition: service_healthy
conditional-star-release-watcher-server:
condition: service_healthy
delegated-sending-watcher-server:
condition: service_healthy
ecliptic-watcher-server:
condition: service_healthy
linear-star-release-watcher-server:
condition: service_healthy
polls-watcher-server:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
working_dir: /app/packages/gateway-server
command: "yarn server"
volumes:
- ../config/watcher-azimuth/gateway-watchers.json:/app/packages/gateway-server/dist/watchers.json
ports:
- "0.0.0.0:4000:4000"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "4000"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
watcher_db_data:

View File

@ -39,7 +39,7 @@ services:
       - "0.0.0.0:3002:3001"
       - "0.0.0.0:9002:9001"
     healthcheck:
-      test: ["CMD", "nc", "-v", "localhost", "3002"]
+      test: ["CMD", "nc", "-vz", "localhost", "3001"]
       interval: 20s
       timeout: 5s
       retries: 15

View File

@ -0,0 +1,91 @@
version: '3.2'
services:
# Starts the PostgreSQL database for watcher
gelato-watcher-db:
restart: unless-stopped
image: postgres:14-alpine
environment:
- POSTGRES_USER=vdbm
- POSTGRES_MULTIPLE_DATABASES=gelato-watcher,gelato-watcher-job-queue
- POSTGRES_EXTENSION=gelato-watcher-job-queue:pgcrypto
- POSTGRES_PASSWORD=password
volumes:
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
- gelato_watcher_db_data:/var/lib/postgresql/data
ports:
- "0.0.0.0:15432:5432"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "5432"]
interval: 10s
timeout: 5s
retries: 15
start_period: 10s
# Starts the gelato-watcher job runner
gelato-watcher-job-runner:
image: cerc/watcher-gelato:local
restart: unless-stopped
depends_on:
gelato-watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-gelato/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
command: ["./start-job-runner.sh"]
volumes:
- ../config/watcher-gelato/watcher-config-template.toml:/app/environments/watcher-config-template.toml
- ../config/watcher-gelato/start-job-runner.sh:/app/start-job-runner.sh
ports:
- "0.0.0.0:9000:9000"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "9000"]
interval: 10s
timeout: 5s
retries: 15
start_period: 10s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the gelato-watcher server
gelato-watcher-server:
image: cerc/watcher-gelato:local
restart: unless-stopped
depends_on:
gelato-watcher-db:
condition: service_healthy
gelato-watcher-job-runner:
condition: service_healthy
env_file:
- ../config/watcher-gelato/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_USE_STATE_SNAPSHOT: ${CERC_USE_STATE_SNAPSHOT}
CERC_SNAPSHOT_GQL_ENDPOINT: ${CERC_SNAPSHOT_GQL_ENDPOINT}
CERC_SNAPSHOT_BLOCKHASH: ${CERC_SNAPSHOT_BLOCKHASH}
command: ["./start-server.sh"]
volumes:
- ../config/watcher-gelato/watcher-config-template.toml:/app/environments/watcher-config-template.toml
- ../config/watcher-gelato/start-server.sh:/app/start-server.sh
- ../config/watcher-gelato/create-and-import-checkpoint.sh:/app/create-and-import-checkpoint.sh
- gelato_watcher_state_gql:/app/state_checkpoint
ports:
- "0.0.0.0:3008:3008"
- "0.0.0.0:9001:9001"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "3008"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
gelato_watcher_db_data:
gelato_watcher_state_gql:

View File

@ -0,0 +1,135 @@
version: '3.2'
services:
# Starts the PostgreSQL database for watcher
mobymask-watcher-db:
restart: unless-stopped
image: postgres:14-alpine
environment:
- POSTGRES_USER=vdbm
- POSTGRES_MULTIPLE_DATABASES=mobymask-watcher,mobymask-watcher-job-queue
- POSTGRES_EXTENSION=mobymask-watcher-job-queue:pgcrypto
- POSTGRES_PASSWORD=password
volumes:
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
- mobymask_watcher_db_data:/var/lib/postgresql/data
ports:
- "0.0.0.0:15432:5432"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "5432"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
# Deploys the MobyMask contract and generates an invite link
# Deployment is skipped if CERC_DEPLOYED_CONTRACT env is set
mobymask:
image: cerc/mobymask:local
working_dir: /app/packages/server
env_file:
- ../config/watcher-mobymask-v2/optimism-params.env
- ../config/watcher-mobymask-v2/mobymask-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
ENV: "PROD"
CERC_L2_GETH_RPC: ${CERC_L2_GETH_RPC}
CERC_L1_ACCOUNTS_CSV_URL: ${CERC_L1_ACCOUNTS_CSV_URL}
CERC_PRIVATE_KEY_DEPLOYER: ${CERC_PRIVATE_KEY_DEPLOYER}
CERC_MOBYMASK_APP_BASE_URI: ${CERC_MOBYMASK_APP_BASE_URI}
CERC_DEPLOYED_CONTRACT: ${CERC_DEPLOYED_CONTRACT}
CERC_L2_GETH_HOST: ${CERC_L2_GETH_HOST}
CERC_L2_GETH_PORT: ${CERC_L2_GETH_PORT}
CERC_L2_NODE_HOST: ${CERC_L2_NODE_HOST}
CERC_L2_NODE_PORT: ${CERC_L2_NODE_PORT}
command: ["sh", "deploy-and-generate-invite.sh"]
volumes:
- ../config/wait-for-it.sh:/app/packages/server/wait-for-it.sh
- ../config/watcher-mobymask-v2/secrets-template.json:/app/packages/server/secrets-template.json
- ../config/watcher-mobymask-v2/deploy-and-generate-invite.sh:/app/packages/server/deploy-and-generate-invite.sh
- mobymask_deployment:/app/packages/server
extra_hosts:
- "host.docker.internal:host-gateway"
# Creates peer-id files if they don't exist
peer-ids-gen:
image: cerc/watcher-ts:local
restart: on-failure
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
working_dir: /app/packages/peer
command: ["sh", "generate-peer-ids.sh"]
volumes:
- ../config/watcher-mobymask-v2/generate-peer-ids.sh:/app/packages/peer/generate-peer-ids.sh
- peers_ids:/peer-ids
# Starts the mobymask-v2-watcher server
mobymask-watcher-server:
image: cerc/watcher-mobymask-v2:local
restart: unless-stopped
depends_on:
mobymask-watcher-db:
condition: service_healthy
peer-ids-gen:
condition: service_completed_successfully
mobymask:
condition: service_completed_successfully
env_file:
- ../config/watcher-mobymask-v2/optimism-params.env
- ../config/watcher-mobymask-v2/mobymask-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_L2_GETH_RPC: ${CERC_L2_GETH_RPC}
CERC_L1_ACCOUNTS_CSV_URL: ${CERC_L1_ACCOUNTS_CSV_URL}
CERC_PRIVATE_KEY_PEER: ${CERC_PRIVATE_KEY_PEER}
CERC_RELAY_PEERS: ${CERC_RELAY_PEERS}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
CERC_RELAY_ANNOUNCE_DOMAIN: ${CERC_RELAY_ANNOUNCE_DOMAIN}
CERC_ENABLE_PEER_L2_TXS: ${CERC_ENABLE_PEER_L2_TXS}
CERC_DEPLOYED_CONTRACT: ${CERC_DEPLOYED_CONTRACT}
command: ["sh", "start-server.sh"]
volumes:
- ../config/watcher-mobymask-v2/watcher-config-template.toml:/app/environments/watcher-config-template.toml
- ../config/watcher-mobymask-v2/start-server.sh:/app/start-server.sh
- peers_ids:/app/peers
- mobymask_deployment:/server
# Expose GQL, metrics and relay node ports
ports:
- "0.0.0.0:3001:3001"
- "0.0.0.0:9001:9001"
- "0.0.0.0:9090:9090"
healthcheck:
test: ["CMD", "busybox", "nc", "localhost", "9090"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Container to run peer tests
peer-tests:
image: cerc/watcher-ts:local
restart: on-failure
depends_on:
mobymask-watcher-server:
condition: service_healthy
peer-ids-gen:
condition: service_completed_successfully
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
working_dir: /app/packages/peer
command:
- sh
- -c
- |
./set-tests-env.sh && \
tail -f /dev/null
volumes:
- ../config/watcher-mobymask-v2/set-tests-env.sh:/app/packages/peer/set-tests-env.sh
- peers_ids:/peer-ids
volumes:
mobymask_watcher_db_data:
peers_ids:
mobymask_deployment:

View File

@ -17,7 +17,8 @@ CERC_STATEDIFF_DB_PORT=5432
CERC_STATEDIFF_DB_NAME="cerc_testing"
CERC_STATEDIFF_DB_USER="vdbm"
CERC_STATEDIFF_DB_PASSWORD="password"
-CERC_STATEDIFF_DB_GOOSE_MIN_VER=23
+CERC_STATEDIFF_DB_GOOSE_MIN_VER=${CERC_STATEDIFF_DB_GOOSE_MIN_VER:-18}
CERC_STATEDIFF_DB_LOG_STATEMENTS="false"
+CERC_STATEDIFF_WORKERS=2
CERC_GETH_VMODULE="statediff/*=5,rpc/*=5"

View File

@ -1,8 +1,8 @@
-#!/bin/sh
-# Originally from: https://github.com/cerc-io/laconicd/blob/main/init.sh
-# TODO: fold this back into the laconicd repo
+#!/bin/bash
+# TODO: this file is now an unmodified copy of cerc-io/laconicd/init.sh
+# so we should have a mechanism to bundle it inside the container rather than link from here
+# at deploy time.
KEY="mykey"
CHAINID="laconic_9000-1"
@ -10,7 +10,7 @@ MONIKER="localtestnet"
KEYRING="test" KEYRING="test"
KEYALGO="eth_secp256k1" KEYALGO="eth_secp256k1"
LOGLEVEL="info" LOGLEVEL="info"
# to trace evm # trace evm
TRACE="--trace" TRACE="--trace"
# TRACE="" # TRACE=""
@ -28,7 +28,7 @@ laconicd config chain-id $CHAINID
# if $KEY exists it should be deleted
laconicd keys add $KEY --keyring-backend $KEYRING --algo $KEYALGO
-# Set moniker and chain-id for laconic (Moniker can be anything, chain-id must be an integer)
+# Set moniker and chain-id for Ethermint (Moniker can be anything, chain-id must be an integer)
laconicd init $MONIKER --chain-id $CHAINID
# Change parameter token denominations to aphoton
@ -37,28 +37,28 @@ cat $HOME/.laconicd/config/genesis.json | jq '.app_state["crisis"]["constant_fee
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["gov"]["deposit_params"]["min_deposit"][0]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["gov"]["deposit_params"]["min_deposit"][0]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["mint"]["params"]["mint_denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["mint"]["params"]["mint_denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
# Custom modules # Custom modules
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["record_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["record_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_commit_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_commit_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_reveal_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_reveal_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_minimum_bid"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_minimum_bid"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
if [[ "$TEST_NAMESERVICE_EXPIRY" == "true" ]]; then if [[ "$TEST_REGISTRY_EXPIRY" == "true" ]]; then
echo "Setting timers for expiry tests." echo "Setting timers for expiry tests."
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["record_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["record_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_grace_period"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_grace_period"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
fi fi
if [[ "$TEST_AUCTION_ENABLED" == "true" ]]; then if [[ "$TEST_AUCTION_ENABLED" == "true" ]]; then
echo "Enabling auction and setting timers." echo "Enabling auction and setting timers."
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_enabled"]=true' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_enabled"]=true' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_grace_period"]="300s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_grace_period"]="300s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_commits_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_commits_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_reveals_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_reveals_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
fi fi
# increase block time (?) # increase block time (?)

View File

@ -0,0 +1,2 @@
#!/bin/sh
laconicd keys show mykey | grep address | cut -d ' ' -f 3

View File

@ -0,0 +1,2 @@
#!/bin/sh
echo y | laconicd keys export mykey --unarmored-hex --unsafe

View File

@ -0,0 +1,9 @@
services:
cns:
restEndpoint: 'http://laconicd:1317'
gqlEndpoint: 'http://laconicd:9473/api'
userKey: REPLACE_WITH_MYKEY
bondId:
chainId: laconic_9000-1
gas: 250000
fees: 200000aphoton

View File
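The REPLACE_WITH_MYKEY placeholder above is expected to be filled with the unarmored hex key produced by the export script shown earlier; a minimal sketch (the file names are assumptions):
# Export the validator key and substitute it into the CLI config (paths are assumptions)
MYKEY=$(echo y | laconicd keys export mykey --unarmored-hex --unsafe)
sed "s/REPLACE_WITH_MYKEY/${MYKEY}/" cli-config-template.yml > config.yml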

@ -0,0 +1 @@
(binary key material; content not representable as text)

View File

@ -0,0 +1 @@
(binary key material; content not representable as text)

View File

@ -0,0 +1,71 @@
{
"t01000": {
"ID": "t01000",
"Owner": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Worker": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"PeerId": "12D3KooWG5q6pWJVdPBhDBv9AjWVbUh4xxTAZ7xvgZSjczWuD2Z9",
"MarketBalance": "0",
"PowerBalance": "0",
"SectorSize": 2048,
"Sectors": [
{
"CommR": {
"/": "bagboea4b5abcboxypcewlkmrat2myu4vthk3ii2pcomak7nhqmdbb6sxlolp2wdf"
},
"CommD": {
"/": "baga6ea4seaqn3jfixthmdgksv4vhfeuyvr6upw6tvaqbmzmsyxnzosm4pwgnmlq"
},
"SectorID": 0,
"Deal": {
"PieceCID": {
"/": "baga6ea4seaqn3jfixthmdgksv4vhfeuyvr6upw6tvaqbmzmsyxnzosm4pwgnmlq"
},
"PieceSize": 2048,
"VerifiedDeal": false,
"Client": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Provider": "t01000",
"Label": "0",
"StartEpoch": 0,
"EndEpoch": 9001,
"StoragePricePerEpoch": "0",
"ProviderCollateral": "0",
"ClientCollateral": "0"
},
"DealClientKey": {
"Type": "bls",
"PrivateKey": "tFvSRiSg2G3Ssgg0PSYy23XyjaIMXpsmdyG2B7UFLT4="
},
"ProofType": 5
},
{
"CommR": {
"/": "bagboea4b5abcb6krzypqcczhcnbeyjcqkeo6omfergm336o3kitugh3jgjog2yqq"
},
"CommD": {
"/": "baga6ea4seaqhondpb2373hjasjplxvbjzi5n5mm4fbbhjxp5ptnbq4cibapkeii"
},
"SectorID": 1,
"Deal": {
"PieceCID": {
"/": "baga6ea4seaqhondpb2373hjasjplxvbjzi5n5mm4fbbhjxp5ptnbq4cibapkeii"
},
"PieceSize": 2048,
"VerifiedDeal": false,
"Client": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Provider": "t01000",
"Label": "1",
"StartEpoch": 0,
"EndEpoch": 9001,
"StoragePricePerEpoch": "0",
"ProviderCollateral": "0",
"ClientCollateral": "0"
},
"DealClientKey": {
"Type": "bls",
"PrivateKey": "tFvSRiSg2G3Ssgg0PSYy23XyjaIMXpsmdyG2B7UFLT4="
},
"ProofType": 5
}
]
}
}

View File

@ -0,0 +1 @@
7b2254797065223a22626c73222c22507269766174654b6579223a227446765352695367324733537367673050535979323358796a61494d5870736d64794732423755464c54343d227d

View File

@ -0,0 +1,11 @@
{
"ID": "f355523e-69d0-4984-bd0e-9588487c6231",
"Weight": 0,
"CanSeal": false,
"CanStore": false,
"MaxStorage": 0,
"Groups": null,
"AllowTo": null,
"AllowTypes": null,
"DenyTypes": null
}

View File

@ -0,0 +1,108 @@
{
"NetworkVersion": 18,
"Accounts": [
{
"Type": "account",
"Balance": "50000000000000000000000000",
"Meta": {
"Owner": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q"
}
}
],
"Miners": [
{
"ID": "t01000",
"Owner": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Worker": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"PeerId": "12D3KooWG5q6pWJVdPBhDBv9AjWVbUh4xxTAZ7xvgZSjczWuD2Z9",
"MarketBalance": "0",
"PowerBalance": "0",
"SectorSize": 2048,
"Sectors": [
{
"CommR": {
"/": "bagboea4b5abcboxypcewlkmrat2myu4vthk3ii2pcomak7nhqmdbb6sxlolp2wdf"
},
"CommD": {
"/": "baga6ea4seaqn3jfixthmdgksv4vhfeuyvr6upw6tvaqbmzmsyxnzosm4pwgnmlq"
},
"SectorID": 0,
"Deal": {
"PieceCID": {
"/": "baga6ea4seaqn3jfixthmdgksv4vhfeuyvr6upw6tvaqbmzmsyxnzosm4pwgnmlq"
},
"PieceSize": 2048,
"VerifiedDeal": false,
"Client": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Provider": "t01000",
"Label": "0",
"StartEpoch": 0,
"EndEpoch": 9001,
"StoragePricePerEpoch": "0",
"ProviderCollateral": "0",
"ClientCollateral": "0"
},
"DealClientKey": {
"Type": "bls",
"PrivateKey": "tFvSRiSg2G3Ssgg0PSYy23XyjaIMXpsmdyG2B7UFLT4="
},
"ProofType": 5
},
{
"CommR": {
"/": "bagboea4b5abcb6krzypqcczhcnbeyjcqkeo6omfergm336o3kitugh3jgjog2yqq"
},
"CommD": {
"/": "baga6ea4seaqhondpb2373hjasjplxvbjzi5n5mm4fbbhjxp5ptnbq4cibapkeii"
},
"SectorID": 1,
"Deal": {
"PieceCID": {
"/": "baga6ea4seaqhondpb2373hjasjplxvbjzi5n5mm4fbbhjxp5ptnbq4cibapkeii"
},
"PieceSize": 2048,
"VerifiedDeal": false,
"Client": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Provider": "t01000",
"Label": "1",
"StartEpoch": 0,
"EndEpoch": 9001,
"StoragePricePerEpoch": "0",
"ProviderCollateral": "0",
"ClientCollateral": "0"
},
"DealClientKey": {
"Type": "bls",
"PrivateKey": "tFvSRiSg2G3Ssgg0PSYy23XyjaIMXpsmdyG2B7UFLT4="
},
"ProofType": 5
}
]
}
],
"NetworkName": "localnet-6d52dae5-ff29-4bac-a45d-f84e6c07564c",
"VerifregRootKey": {
"Type": "multisig",
"Balance": "0",
"Meta": {
"Signers": [
"t1ceb34gnsc6qk5dt6n7xg6ycwzasjhbxm3iylkiy"
],
"Threshold": 1,
"VestingDuration": 0,
"VestingStart": 0
}
},
"RemainderAccount": {
"Type": "multisig",
"Balance": "0",
"Meta": {
"Signers": [
"t1ceb34gnsc6qk5dt6n7xg6ycwzasjhbxm3iylkiy"
],
"Threshold": 1,
"VestingDuration": 0,
"VestingStart": 0
}
}
}

View File

@ -0,0 +1,5 @@
LOTUS_PATH=~/.lotus-local-net
LOTUS_MINER_PATH=~/.lotus-miner-local-net
LOTUS_SKIP_GENESIS_CHECK=_yes_
CGO_CFLAGS_ALLOW="-D__BLST_PORTABLE__"
CGO_CFLAGS="-D__BLST_PORTABLE__"

View File

@ -0,0 +1,39 @@
#!/bin/bash
lotus --version
# # remove old bootnode peer info if present
# [ -f /root/.lotus-shared/miner.addr ] && rm /root/.lotus-shared/miner.addr
##TODO: generate genesis files inside container instead of bundling in config dir
##something like commands below should work, other scripts/compose will have to be updated to corresponding directories
# lotus fetch-params 2048
# lotus-seed pre-seal --sector-size 2KiB --num-sectors 2
# lotus-seed genesis new localnet.json
# lotus-seed genesis add-miner localnet.json ~/.genesis-sectors/pre-seal-t01000.json
# start daemon
nohup lotus daemon --genesis=/devgen.car --profile=bootstrapper --bootstrap=false > /var/log/lotus.log 2>&1 &
# Loop until the daemon is started
echo "Waiting for daemon to start..."
while ! grep -q "started ChainNotify channel" /var/log/lotus.log ; do
sleep 5
done
echo "Daemon started."
# publish bootnode peer info to shared volume
lotus net listen | awk 'NR==1{print}' > /root/.lotus-shared/miner.addr
# if miner not already initialized
if [ ! -d /root/.lotus-miner-local-net ]; then
# initialize miner
lotus wallet import --as-default ~/.genesis-sectors/pre-seal-t01000.key
lotus-miner init --genesis-miner --actor=t01000 --sector-size=2KiB --pre-sealed-sectors=~/.genesis-sectors --pre-sealed-metadata=~/.genesis-sectors/pre-seal-t01000.json --nosync
fi
# start miner
nohup lotus-miner run --nosync &
tail -f /dev/null

View File

@ -0,0 +1,24 @@
#!/bin/bash
lotus --version
##TODO: paths can use values from lotus-env.env file
# if not already initialized
if [ ! -f /root/.lotus-local-net/config.toml ]; then
# init node config
mkdir $HOME/.lotus-local-net
lotus config default > $HOME/.lotus-local-net/config.toml
# add bootstrap peer info if available
if [ -f /root/.lotus-shared/miner.addr ]; then
MINER_ADDR=\"$(cat /root/.lotus-shared/miner.addr)\"
# add bootstrap peer id to config file
sed -i "/^\[Libp2p\]/a \ \ BootstrapPeers = [$MINER_ADDR]" $HOME/.lotus-local-net/config.toml
else
echo "Bootstrap peer info not found, unable to configure. Manual peering will be required."
fi
fi
# start node
lotus daemon --genesis=/devgen.car

View File

@ -0,0 +1,37 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# Check existing config if it exists
if [ -f /app/jwt.txt ] && [ -f /app/rollup.json ]; then
echo "Found existing L2 config, cross-checking with L1 deployment config"
SOURCE_L1_CONF=$(cat /contracts-bedrock/deploy-config/getting-started.json)
EXP_L1_BLOCKHASH=$(echo "$SOURCE_L1_CONF" | jq -r '.l1StartingBlockTag')
EXP_BATCHER=$(echo "$SOURCE_L1_CONF" | jq -r '.batchSenderAddress')
GEN_L2_CONF=$(cat /app/rollup.json)
GEN_L1_BLOCKHASH=$(echo "$GEN_L2_CONF" | jq -r '.genesis.l1.hash')
GEN_BATCHER=$(echo "$GEN_L2_CONF" | jq -r '.genesis.system_config.batcherAddr')
if [ "$EXP_L1_BLOCKHASH" = "$GEN_L1_BLOCKHASH" ] && [ "$EXP_BATCHER" = "$GEN_BATCHER" ]; then
echo "Config cross-checked, exiting"
exit 0
fi
echo "Existing L2 config doesn't match the L1 deployment config, please clear L2 config volume before starting"
exit 1
fi
op-node genesis l2 \
--deploy-config /contracts-bedrock/deploy-config/getting-started.json \
--deployment-dir /contracts-bedrock/deployments/getting-started/ \
--outfile.l2 /app/genesis.json \
--outfile.rollup /app/rollup.json \
--l1-rpc $CERC_L1_RPC
openssl rand -hex 32 > /app/jwt.txt

View File
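Once the script above has produced /app/rollup.json, the same fields it cross-checks against the L1 deploy config can be inspected by hand; a small sketch:
# Inspect the generated rollup config (same fields the script cross-checks)
jq -r '.genesis.l1.hash' /app/rollup.json
jq -r '.genesis.system_config.batcherAddr' /app/rollup.json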

@ -0,0 +1,12 @@
# Defaults
# L1 endpoint
DEFAULT_CERC_L1_CHAIN_ID=1212
DEFAULT_CERC_L1_RPC="http://fixturenet-eth-geth-1:8545"
DEFAULT_CERC_L1_HOST="fixturenet-eth-geth-1"
DEFAULT_CERC_L1_PORT=8545
# URL to get CSV with credentials for accounts on L1
# that are used to send balance to Optimism Proxy contract
# (enables them to do transactions on L2)
DEFAULT_CERC_L1_ACCOUNTS_CSV_URL="http://fixturenet-eth-bootnode-geth:9898/accounts.csv"

View File

@ -0,0 +1,131 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_CHAIN_ID="${CERC_L1_CHAIN_ID:-${DEFAULT_CERC_L1_CHAIN_ID}}"
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
CERC_L1_ACCOUNTS_CSV_URL="${CERC_L1_ACCOUNTS_CSV_URL:-${DEFAULT_CERC_L1_ACCOUNTS_CSV_URL}}"
echo "Using L1 RPC endpoint ${CERC_L1_RPC}"
IMPORT_1="import './verify-contract-deployment'"
IMPORT_2="import './rekey-json'"
IMPORT_3="import './send-balance'"
# Append mounted tasks to tasks/index.ts file if not present
if ! grep -Fxq "$IMPORT_1" tasks/index.ts; then
echo "$IMPORT_1" >> tasks/index.ts
echo "$IMPORT_2" >> tasks/index.ts
echo "$IMPORT_3" >> tasks/index.ts
fi
# Update the chainId in the hardhat config
sed -i "/getting-started/ {n; s/.*chainId.*/ chainId: $CERC_L1_CHAIN_ID,/}" hardhat.config.ts
# Exit if a deployment already exists (on restarts)
# Note: fixturenet-eth-geth currently starts fresh on a restart
if [ -d "deployments/getting-started" ]; then
echo "Deployment directory deployments/getting-started found, checking SystemDictator deployment"
# Read JSON file into variable
SYSTEM_DICTATOR_DETAILS=$(cat deployments/getting-started/SystemDictator.json)
# Parse JSON into variables
SYSTEM_DICTATOR_ADDRESS=$(echo "$SYSTEM_DICTATOR_DETAILS" | jq -r '.address')
SYSTEM_DICTATOR_TXHASH=$(echo "$SYSTEM_DICTATOR_DETAILS" | jq -r '.transactionHash')
if yarn hardhat verify-contract-deployment --contract "${SYSTEM_DICTATOR_ADDRESS}" --transaction-hash "${SYSTEM_DICTATOR_TXHASH}"; then
echo "Deployment verfication successful, exiting"
exit 0
else
echo "Deployment verfication failed, please clear L1 deployment volume before starting"
exit 1
fi
fi
# Generate the L2 account addresses
yarn hardhat rekey-json --output /l2-accounts/keys.json
# Read JSON file into variable
KEYS_JSON=$(cat /l2-accounts/keys.json)
# Parse JSON into variables
ADMIN_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Admin.address')
ADMIN_PRIV_KEY=$(echo "$KEYS_JSON" | jq -r '.Admin.privateKey')
PROPOSER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Proposer.address')
BATCHER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Batcher.address')
SEQUENCER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Sequencer.address')
# Get the private keys of L1 accounts
if [ -n "$CERC_L1_ACCOUNTS_CSV_URL" ] && \
l1_accounts_response=$(curl -L --write-out '%{http_code}' --silent --output /dev/null "$CERC_L1_ACCOUNTS_CSV_URL") && \
[ "$l1_accounts_response" -eq 200 ];
then
echo "Fetching L1 account credentials using provided URL"
mkdir -p /geth-accounts
wget -O /geth-accounts/accounts.csv "$CERC_L1_ACCOUNTS_CSV_URL"
CERC_L1_ADDRESS=$(head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 2)
CERC_L1_PRIV_KEY=$(head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 3)
CERC_L1_ADDRESS_2=$(awk -F, 'NR==2{print $(NF-1)}' /geth-accounts/accounts.csv)
CERC_L1_PRIV_KEY_2=$(awk -F, 'NR==2{print $NF}' /geth-accounts/accounts.csv)
else
echo "Couldn't fetch L1 account credentials, using them from env"
fi
# Send balances to the above L2 addresses
yarn hardhat send-balance --to "${ADMIN_ADDRESS}" --amount 2 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
yarn hardhat send-balance --to "${PROPOSER_ADDRESS}" --amount 5 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
yarn hardhat send-balance --to "${BATCHER_ADDRESS}" --amount 1000 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
echo "Balances sent to L2 accounts"
# Select a finalized L1 block as the starting point for roll ups
until FINALIZED_BLOCK=$(cast block finalized --rpc-url "$CERC_L1_RPC"); do
echo "Waiting for a finalized L1 block to exist, retrying after 10s"
sleep 10
done
L1_BLOCKNUMBER=$(echo "$FINALIZED_BLOCK" | awk '/number/{print $2}')
L1_BLOCKHASH=$(echo "$FINALIZED_BLOCK" | awk '/hash/{print $2}')
L1_BLOCKTIMESTAMP=$(echo "$FINALIZED_BLOCK" | awk '/timestamp/{print $2}')
echo "Selected L1 block ${L1_BLOCKNUMBER} as the starting block for roll ups"
# Update the deployment config
sed -i 's/"l2OutputOracleStartingTimestamp": TIMESTAMP/"l2OutputOracleStartingTimestamp": '"$L1_BLOCKTIMESTAMP"'/g' deploy-config/getting-started.json
jq --arg chainid "$CERC_L1_CHAIN_ID" '.l1ChainID = ($chainid | tonumber)' deploy-config/getting-started.json > tmp.json && mv tmp.json deploy-config/getting-started.json
node update-config.js deploy-config/getting-started.json "$ADMIN_ADDRESS" "$PROPOSER_ADDRESS" "$BATCHER_ADDRESS" "$SEQUENCER_ADDRESS" "$L1_BLOCKHASH"
echo "Updated the deployment config"
# Create a .env file
echo "L1_RPC=$CERC_L1_RPC" > .env
echo "PRIVATE_KEY_DEPLOYER=$ADMIN_PRIV_KEY" >> .env
echo "Deploying the L1 smart contracts, this will take a while..."
# Deploy the L1 smart contracts
yarn hardhat deploy --network getting-started --tags l1
echo "Deployed the L1 smart contracts"
# Read Proxy contract's JSON and get the address
PROXY_JSON=$(cat deployments/getting-started/Proxy__OVM_L1StandardBridge.json)
PROXY_ADDRESS=$(echo "$PROXY_JSON" | jq -r '.address')
# Send balance to the above Proxy contract in L1 for reflecting balance in L2
# First account
yarn hardhat send-balance --to "${PROXY_ADDRESS}" --amount 1 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
# Second account
yarn hardhat send-balance --to "${PROXY_ADDRESS}" --amount 1 --private-key "${CERC_L1_PRIV_KEY_2}" --network getting-started
echo "Balance sent to Proxy L2 contract"
echo "Use following accounts for transactions in L2:"
echo "${CERC_L1_ADDRESS}"
echo "${CERC_L1_ADDRESS_2}"
echo "Done"

View File

@ -0,0 +1,36 @@
const fs = require('fs')
// Get the command-line argument
const configFile = process.argv[2]
const adminAddress = process.argv[3]
const proposerAddress = process.argv[4]
const batcherAddress = process.argv[5]
const sequencerAddress = process.argv[6]
const blockHash = process.argv[7]
// Read the JSON file
const configData = fs.readFileSync(configFile)
const configObj = JSON.parse(configData)
// Update the finalSystemOwner property with the ADMIN_ADDRESS value
configObj.finalSystemOwner =
configObj.portalGuardian =
configObj.controller =
configObj.l2OutputOracleChallenger =
configObj.proxyAdminOwner =
configObj.baseFeeVaultRecipient =
configObj.l1FeeVaultRecipient =
configObj.sequencerFeeVaultRecipient =
configObj.governanceTokenOwner =
adminAddress
configObj.l2OutputOracleProposer = proposerAddress
configObj.batchSenderAddress = batcherAddress
configObj.p2pSequencerAddress = sequencerAddress
configObj.l1StartingBlockTag = blockHash
// Write the updated JSON object back to the file
fs.writeFileSync(configFile, JSON.stringify(configObj, null, 2))

View File
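For reference, the script above is invoked from the contract deployment script with positional arguments in this order; a usage sketch (the shell variables are placeholders):
# Usage: config file, then admin, proposer, batcher, sequencer addresses, then the L1 block hash
node update-config.js deploy-config/getting-started.json \
  "$ADMIN_ADDRESS" "$PROPOSER_ADDRESS" "$BATCHER_ADDRESS" "$SEQUENCER_ADDRESS" "$L1_BLOCKHASH"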

@ -0,0 +1,39 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# Get Batcher key from keys.json
BATCHER_KEY=$(jq -r '.Batcher.privateKey' /l2-accounts/keys.json | tr -d '"')
cleanup() {
echo "Signal received, cleaning up..."
kill ${batcher_pid}
wait
echo "Done"
}
trap 'cleanup' INT TERM
# Run op-batcher
op-batcher \
--l2-eth-rpc=http://op-geth:8545 \
--rollup-rpc=http://op-node:8547 \
--poll-interval=1s \
--sub-safety-margin=6 \
--num-confirmations=1 \
--safe-abort-nonce-too-low-count=3 \
--resubmission-timeout=30s \
--rpc.addr=0.0.0.0 \
--rpc.port=8548 \
--rpc.enable-admin \
--max-channel-duration=1 \
--l1-eth-rpc=$CERC_L1_RPC \
--private-key=$BATCHER_KEY \
&
batcher_pid=$!
wait $batcher_pid

View File

@ -0,0 +1,90 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
# TODO: Add in container build or use other tool
echo "Installing jq"
apk update && apk add jq
# Get Sequencer key from keys.json
SEQUENCER_KEY=$(jq -r '.Sequencer.privateKey' /l2-accounts/keys.json | tr -d '"')
# Initialize op-geth if datadir/geth not found
if [ -f /op-node/jwt.txt ] && [ -d datadir/geth ]; then
echo "Found existing datadir, checking block signer key"
BLOCK_SIGNER_KEY=$(cat datadir/block-signer-key)
if [ "$SEQUENCER_KEY" = "$BLOCK_SIGNER_KEY" ]; then
echo "Sequencer and block signer keys match, skipping initialization"
else
echo "Sequencer and block signer keys don't match, please clear L2 geth data volume before starting"
exit 1
fi
else
echo "Initializing op-geth"
mkdir -p datadir
echo "pwd" > datadir/password
echo $SEQUENCER_KEY > datadir/block-signer-key
geth account import --datadir=datadir --password=datadir/password datadir/block-signer-key
while [ ! -f "/op-node/jwt.txt" ]
do
echo "Config files not created. Checking after 5 seconds."
sleep 5
done
echo "Config files created by op-node, proceeding with the initialization..."
geth init --datadir=datadir /op-node/genesis.json
echo "Node Initialized"
fi
SEQUENCER_ADDRESS=$(jq -r '.Sequencer.address' /l2-accounts/keys.json | tr -d '"')
echo "SEQUENCER_ADDRESS: ${SEQUENCER_ADDRESS}"
cleanup() {
echo "Signal received, cleaning up..."
kill ${geth_pid}
wait
echo "Done"
}
trap 'cleanup' INT TERM
# Run op-geth
geth \
--datadir ./datadir \
--http \
--http.corsdomain="*" \
--http.vhosts="*" \
--http.addr=0.0.0.0 \
--http.api=web3,debug,eth,txpool,net,engine \
--ws \
--ws.addr=0.0.0.0 \
--ws.port=8546 \
--ws.origins="*" \
--ws.api=debug,eth,txpool,net,engine \
--syncmode=full \
--gcmode=archive \
--nodiscover \
--maxpeers=0 \
--networkid=42069 \
--authrpc.vhosts="*" \
--authrpc.addr=0.0.0.0 \
--authrpc.port=8551 \
--authrpc.jwtsecret=/op-node/jwt.txt \
--rollup.disabletxpoolgossip=true \
--password=./datadir/password \
--allow-insecure-unlock \
--mine \
--miner.etherbase=$SEQUENCER_ADDRESS \
--unlock=$SEQUENCER_ADDRESS \
&
geth_pid=$!
wait $geth_pid

View File

@ -0,0 +1,26 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# Get Sequencer key from keys.json
SEQUENCER_KEY=$(jq -r '.Sequencer.privateKey' /l2-accounts/keys.json | tr -d '"')
# Run op-node
op-node \
--l2=http://op-geth:8551 \
--l2.jwt-secret=/op-node-data/jwt.txt \
--sequencer.enabled \
--sequencer.l1-confs=3 \
--verifier.l1-confs=3 \
--rollup.config=/op-node-data/rollup.json \
--rpc.addr=0.0.0.0 \
--rpc.port=8547 \
--p2p.disable \
--rpc.enable-admin \
--p2p.sequencer.key=$SEQUENCER_KEY \
--l1=$CERC_L1_RPC \
--l1.rpckind=any

View File

@ -0,0 +1,36 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# Read the L2OutputOracle contract address from the deployment
L2OO_DEPLOYMENT=$(cat /contracts-bedrock/deployments/getting-started/L2OutputOracle.json)
L2OO_ADDR=$(echo "$L2OO_DEPLOYMENT" | jq -r '.address')
# Get Proposer key from keys.json
PROPOSER_KEY=$(jq -r '.Proposer.privateKey' /l2-accounts/keys.json | tr -d '"')
cleanup() {
echo "Signal received, cleaning up..."
kill ${proposer_pid}
wait
echo "Done"
}
trap 'cleanup' INT TERM
# Run op-proposer
op-proposer \
--poll-interval 12s \
--rpc.port 8560 \
--rollup-rpc http://op-node:8547 \
--l2oo-address $L2OO_ADDR \
--private-key $PROPOSER_KEY \
--l1-eth-rpc $CERC_L1_RPC \
&
proposer_pid=$!
wait $proposer_pid

View File

@ -0,0 +1 @@
See: https://docs.plugeth.org/

View File

@ -0,0 +1,18 @@
[
{
"id": "0001",
"url": "http://127.0.0.1:8081/",
"basic_auth": {
"username": "",
"password": ""
}
},
{
"id": "0021",
"url": "http://fixturenet-eth-geth-1:8545/",
"basic_auth": {
"username": "",
"password": ""
}
}
]

View File

@ -0,0 +1,65 @@
#!/bin/bash
# TODO: we should have a mechanism to bundle it inside the container rather than link from here
# at deploy time.
CHAINID="pocketlocal-1"
MONIKER="localtestnet"
SERVICE_URL="http://127.0.0.1:8081"
PASSWORD="mypassword" # wallet password, required by cli
# check if jq is installed; install if necessary
# command -v jq > /dev/null 2>&1 || { echo >&2 "jq not installed. More info: https://stedolan.github.io/jq/download/"; exit 1; }
if ! command -v jq > /dev/null 2>&1; then
echo "jq not installed, downloading..."
mkdir -p /home/app/bin
wget -O /home/app/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
chmod +x /home/app/bin/jq
export PATH=$PATH:/home/app/bin
fi
# remove existing daemon and client
rm -rf ~/.pocket*
# create a wallet with password "mypassword" and save the address for later
address=$(pocket accounts create --pwd $PASSWORD | awk '/Address:/ {print $2}')
# set this address as the validator address for the node
pocket accounts set-validator $address --pwd $PASSWORD
# save the public key for later
pubkey=$(pocket accounts show $address | awk '/Public Key:/ {print $3}')
# set node's moniker
echo $(pocket util print-configs) | jq '.tendermint_config.Moniker = "'"$MONIKER"'"' | jq . > $HOME/.pocket/config/config.json
# pocket mainnet has block time of 15 minutes, set closer to 1 minute instead
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPropose = 8000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutProposeDelta = 600000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrevote = 4000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrevoteDelta = 600000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrecommit = 4000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrecommitDelta = 6000000006' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutCommit = 52000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.CreateEmptyBlocksInterval = 60000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.PeerGossipSleepDuration = 2000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.PeerQueryMaj23SleepDuration = 1200000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
# include genesis.json and chains.json
cp $HOME/pocket-configs/genesis.json $HOME/.pocket/config/genesis.json
cp $HOME/pocket-configs/chains.json $HOME/.pocket/config/chains.json
# set chain-id and add node to genesis.json as a validator
cat $HOME/.pocket/config/genesis.json | jq '.chain_id="'"$CHAINID"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.auth.accounts[0].value.address="'"$address"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.auth.accounts[0].value.public_key.value="'"$pubkey"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.pos.validators[0].address="'"$address"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.pos.validators[0].public_key="'"$pubkey"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.pos.validators[0].service_url="'"$SERVICE_URL"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
# if [[ $1 == "pending" ]]; then
# echo "pending mode is on, please wait for the first block committed."
# fi
# Start the node
pocket start --simulateRelay

View File
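After the jq edits above, the shortened consensus timings can be verified before the node is started; a small sketch:
# Confirm the shortened block time settings took effect
jq '.tendermint_config.Consensus.TimeoutCommit, .tendermint_config.Consensus.CreateEmptyBlocksInterval' \
  $HOME/.pocket/config/config.json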

@ -0,0 +1,272 @@
{
"genesis_time": "2020-07-28T15:00:00.000000Z",
"chain_id": "testnet",
"consensus_params": {
"block": {
"max_bytes": "4000000",
"max_gas": "-1",
"time_iota_ms": "1"
},
"evidence": {
"max_age": "120000000000"
},
"validator": {
"pub_key_types": [
"ed25519"
]
}
},
"app_hash": "",
"app_state": {
"application": {
"params": {
"unstaking_time": "1814000000000000",
"max_applications": "9223372036854775807",
"app_stake_minimum": "1000000",
"base_relays_per_pokt": "167",
"stability_adjustment": "0",
"participation_rate_on": false,
"maximum_chains": "15"
},
"applications": [],
"exported": false
},
"auth": {
"params": {
"max_memo_characters": "75",
"tx_sig_limit": "8",
"fee_multipliers": {
"fee_multiplier": [],
"default": "1"
}
},
"accounts": [
{
"type": "posmint/Account",
"value": {
"address": "!validator-address",
"coins": [
{
"amount": "0",
"denom": "upokt"
}
],
"public_key": {
"type": "crypto/ed25519_public_key",
"value": "!validator-pubkey"
}
}
}
],
"supply": []
},
"gov": {
"params": {
"acl": [
{
"acl_key": "application/ApplicationStakeMinimum",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/AppUnstakingTime",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/BaseRelaysPerPOKT",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/MaxApplications",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/MaximumChains",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/ParticipationRateOn",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/StabilityAdjustment",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "auth/MaxMemoCharacters",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "auth/TxSigLimit",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "gov/acl",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "gov/daoOwner",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "gov/upgrade",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/ClaimExpiration",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "auth/FeeMultipliers",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/ReplayAttackBurnMultiplier",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/ProposerPercentage",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/ClaimSubmissionWindow",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/MinimumNumberOfProofs",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/SessionNodeCount",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/SupportedBlockchains",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/BlocksPerSession",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/DAOAllocation",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/DowntimeJailDuration",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaxEvidenceAge",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaximumChains",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaxJailedBlocks",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaxValidators",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MinSignedPerWindow",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/RelaysToTokensMultiplier",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/SignedBlocksWindow",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/SlashFractionDoubleSign",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/SlashFractionDowntime",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/StakeDenom",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/StakeMinimum",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/UnstakingTime",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
}
],
"dao_owner": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4",
"upgrade": {
"Height": "0",
"Version": "0"
}
},
"DAO_Tokens": "50000000000000"
},
"pos": {
"params": {
"relays_to_tokens_multiplier": "10000",
"unstaking_time": "1814000000000000",
"max_validators": "5000",
"stake_denom": "upokt",
"stake_minimum": "15000000000",
"session_block_frequency": "4",
"dao_allocation": "10",
"proposer_allocation": "1",
"maximum_chains": "15",
"max_jailed_blocks": "37960",
"max_evidence_age": "120000000000",
"signed_blocks_window": "10",
"min_signed_per_window": "0.60",
"downtime_jail_duration": "3600000000000",
"slash_fraction_double_sign": "0.05",
"slash_fraction_downtime": "0.000001"
},
"prevState_total_power": "0",
"prevState_validator_powers": null,
"validators": [
{
"address": "!validator-address",
"public_key": "!validator-pubkey",
"jailed": false,
"status": 2,
"tokens": "5000000000000",
"service_url": "!validator-url",
"chains": [
"0001",
"0021"
],
"unstaking_time": "2021-05-15T00:00:00Z"
}
],
"exported": false,
"signing_infos": {},
"missed_blocks": {},
"previous_proposer": ""
},
"pocketcore": {
"params": {
"session_node_count": "5",
"proof_waiting_period": "3",
"supported_blockchains": [
"0001",
"0021"
],
"claim_expiration": "120",
"replay_attack_burn_multiplier": "3",
"minimum_number_of_proofs": "10"
},
"receipts": null,
"claims": null
}
}
}

View File

@ -0,0 +1,2 @@
[profile.default]
eth-rpc-url = "http://fixturenet-eth-geth-1:8545"

View File
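With eth-rpc-url set in the default profile above, Foundry tools run inside the container should not need an explicit --rpc-url flag; a minimal sketch (assuming the geth service is reachable):
# cast picks up the RPC endpoint from foundry.toml's default profile
cast block-number
cast chain-id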

@ -0,0 +1,182 @@
#!/usr/bin/env bash
# Use this script to test if a given TCP host/port are available
WAITFORIT_cmdname=${0##*/}
echoerr() { if [[ $WAITFORIT_QUIET -ne 1 ]]; then echo "$@" 1>&2; fi }
usage()
{
cat << USAGE >&2
Usage:
$WAITFORIT_cmdname host:port [-s] [-t timeout] [-- command args]
-h HOST | --host=HOST Host or IP under test
-p PORT | --port=PORT TCP port under test
Alternatively, you specify the host and port as host:port
-s | --strict Only execute subcommand if the test succeeds
-q | --quiet Don't output any status messages
-t TIMEOUT | --timeout=TIMEOUT
Timeout in seconds, zero for no timeout
-- COMMAND ARGS Execute command with args after the test finishes
USAGE
exit 1
}
wait_for()
{
if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
echoerr "$WAITFORIT_cmdname: waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
else
echoerr "$WAITFORIT_cmdname: waiting for $WAITFORIT_HOST:$WAITFORIT_PORT without a timeout"
fi
WAITFORIT_start_ts=$(date +%s)
while :
do
if [[ $WAITFORIT_ISBUSY -eq 1 ]]; then
nc -z $WAITFORIT_HOST $WAITFORIT_PORT
WAITFORIT_result=$?
else
(echo -n > /dev/tcp/$WAITFORIT_HOST/$WAITFORIT_PORT) >/dev/null 2>&1
WAITFORIT_result=$?
fi
if [[ $WAITFORIT_result -eq 0 ]]; then
WAITFORIT_end_ts=$(date +%s)
echoerr "$WAITFORIT_cmdname: $WAITFORIT_HOST:$WAITFORIT_PORT is available after $((WAITFORIT_end_ts - WAITFORIT_start_ts)) seconds"
break
fi
sleep 1
done
return $WAITFORIT_result
}
wait_for_wrapper()
{
# In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692
if [[ $WAITFORIT_QUIET -eq 1 ]]; then
timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --quiet --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
else
timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
fi
WAITFORIT_PID=$!
trap "kill -INT -$WAITFORIT_PID" INT
wait $WAITFORIT_PID
WAITFORIT_RESULT=$?
if [[ $WAITFORIT_RESULT -ne 0 ]]; then
echoerr "$WAITFORIT_cmdname: timeout occurred after waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
fi
return $WAITFORIT_RESULT
}
# process arguments
while [[ $# -gt 0 ]]
do
case "$1" in
*:* )
WAITFORIT_hostport=(${1//:/ })
WAITFORIT_HOST=${WAITFORIT_hostport[0]}
WAITFORIT_PORT=${WAITFORIT_hostport[1]}
shift 1
;;
--child)
WAITFORIT_CHILD=1
shift 1
;;
-q | --quiet)
WAITFORIT_QUIET=1
shift 1
;;
-s | --strict)
WAITFORIT_STRICT=1
shift 1
;;
-h)
WAITFORIT_HOST="$2"
if [[ $WAITFORIT_HOST == "" ]]; then break; fi
shift 2
;;
--host=*)
WAITFORIT_HOST="${1#*=}"
shift 1
;;
-p)
WAITFORIT_PORT="$2"
if [[ $WAITFORIT_PORT == "" ]]; then break; fi
shift 2
;;
--port=*)
WAITFORIT_PORT="${1#*=}"
shift 1
;;
-t)
WAITFORIT_TIMEOUT="$2"
if [[ $WAITFORIT_TIMEOUT == "" ]]; then break; fi
shift 2
;;
--timeout=*)
WAITFORIT_TIMEOUT="${1#*=}"
shift 1
;;
--)
shift
WAITFORIT_CLI=("$@")
break
;;
--help)
usage
;;
*)
echoerr "Unknown argument: $1"
usage
;;
esac
done
if [[ "$WAITFORIT_HOST" == "" || "$WAITFORIT_PORT" == "" ]]; then
echoerr "Error: you need to provide a host and port to test."
usage
fi
WAITFORIT_TIMEOUT=${WAITFORIT_TIMEOUT:-15}
WAITFORIT_STRICT=${WAITFORIT_STRICT:-0}
WAITFORIT_CHILD=${WAITFORIT_CHILD:-0}
WAITFORIT_QUIET=${WAITFORIT_QUIET:-0}
# Check to see if timeout is from busybox?
WAITFORIT_TIMEOUT_PATH=$(type -p timeout)
WAITFORIT_TIMEOUT_PATH=$(realpath $WAITFORIT_TIMEOUT_PATH 2>/dev/null || readlink -f $WAITFORIT_TIMEOUT_PATH)
WAITFORIT_BUSYTIMEFLAG=""
if [[ $WAITFORIT_TIMEOUT_PATH =~ "busybox" ]]; then
WAITFORIT_ISBUSY=1
# Check if busybox timeout uses -t flag
# (recent Alpine versions don't support -t anymore)
if timeout &>/dev/stdout | grep -q -e '-t '; then
WAITFORIT_BUSYTIMEFLAG="-t"
fi
else
WAITFORIT_ISBUSY=0
fi
if [[ $WAITFORIT_CHILD -gt 0 ]]; then
wait_for
WAITFORIT_RESULT=$?
exit $WAITFORIT_RESULT
else
if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
wait_for_wrapper
WAITFORIT_RESULT=$?
else
wait_for
WAITFORIT_RESULT=$?
fi
fi
if [[ $WAITFORIT_CLI != "" ]]; then
if [[ $WAITFORIT_RESULT -ne 0 && $WAITFORIT_STRICT -eq 1 ]]; then
echoerr "$WAITFORIT_cmdname: strict mode, refusing to execute subprocess"
exit $WAITFORIT_RESULT
fi
exec "${WAITFORIT_CLI[@]}"
else
exit $WAITFORIT_RESULT
fi

View File
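The script above is used elsewhere in this changeset to gate startup on L2 services; a usage sketch based on the flags it documents:
# Block (with no timeout) until op-geth's RPC port is reachable, then run the follow-up command
./wait-for-it.sh -h op-geth -p 8545 -s -t 0 -- echo "op-geth is reachable"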

@ -0,0 +1,34 @@
[
{
"endpoint": "http://azimuth-watcher-server:3001/graphql",
"prefix": "azimuth"
},
{
"endpoint": "http://censures-watcher-server:3002/graphql",
"prefix": "censures"
},
{
"endpoint": "http://claims-watcher-server:3003/graphql",
"prefix": "claims"
},
{
"endpoint": "http://conditional-star-release-watcher-server:3004/graphql",
"prefix": "conditionalStarRelease"
},
{
"endpoint": "http://delegated-sending-watcher-server:3005/graphql",
"prefix": "delegatedSending"
},
{
"endpoint": "http://ecliptic-watcher-server:3006/graphql",
"prefix": "ecliptic"
},
{
"endpoint": "http://linear-star-release-watcher-server:3007/graphql",
"prefix": "linearStarRelease"
},
{
"endpoint": "http://polls-watcher-server:3008/graphql",
"prefix": "polls"
}
]

View File

@ -0,0 +1,31 @@
const fs = require('fs');
const tomlJS = require('toml-js');
const toml = require('toml');
const { merge } = require('lodash')
const main = () => {
const overrideConfigString = fs.readFileSync('environments/watcher-config.toml', 'utf-8');
const configString = fs.readFileSync('environments/local.toml', 'utf-8');
const overrideConfig = toml.parse(overrideConfigString)
const config = toml.parse(configString)
// Merge configs
const updatedConfig = merge(config, overrideConfig);
// Form dbConnectionString for jobQueue DB
const parts = config.jobQueue.dbConnectionString.split("://");
const credsAndDB = parts[1].split("@");
const creds = credsAndDB[0].split(":");
creds[0] = overrideConfig.database.username;
creds[1] = overrideConfig.database.password;
credsAndDB[0] = creds.join(":");
const dbName = credsAndDB[1].split("/")[1]
credsAndDB[1] = [overrideConfig.database.host, dbName].join("/");
parts[1] = credsAndDB.join("@");
updatedConfig.jobQueue.dbConnectionString = parts.join("://");
fs.writeFileSync('environments/local.toml', tomlJS.dump(updatedConfig), 'utf-8');
}
main();

View File
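For illustration, the connection-string rewrite in merge-toml.js behaves roughly as follows (values are hypothetical); the script takes no arguments and is run from the watcher start script:
# before (environments/local.toml):            dbConnectionString = "postgres://vdbm:password@localhost/watcher-job-queue"
# override (environments/watcher-config.toml): [database] host = "watcher-db", username = "vdbm", password = "password"
# after merge:                                 dbConnectionString = "postgres://vdbm:password@watcher-db/watcher-job-queue"
node merge-toml.js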

@ -0,0 +1,27 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_IPLD_ETH_RPC="${CERC_IPLD_ETH_RPC:-${DEFAULT_CERC_IPLD_ETH_RPC}}"
CERC_IPLD_ETH_GQL="${CERC_IPLD_ETH_GQL:-${DEFAULT_CERC_IPLD_ETH_GQL}}"
echo "Using IPLD ETH RPC endpoint ${CERC_IPLD_ETH_RPC}"
echo "Using IPLD GQL endpoint ${CERC_IPLD_ETH_GQL}"
# Replace env variables in template TOML file
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_IPLD_ETH_RPC|${CERC_IPLD_ETH_RPC}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_GQL|${CERC_IPLD_ETH_GQL}| ")
# Write the modified content to a new file
echo "$WATCHER_CONFIG" > environments/watcher-config.toml
# Merge SO watcher config with existing config file
node merge-toml.js
echo 'yarn server'
yarn server

View File

@ -0,0 +1,14 @@
[server]
host = "0.0.0.0"
maxSimultaneousRequests = -1
[database]
host = "watcher-db"
port = 5432
username = "vdbm"
password = "password"
[upstream]
[upstream.ethServer]
gqlApiEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_GQL"
rpcProviderEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_RPC"

View File
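As an illustration of the sed substitution performed by the start script above, a single template line is rewritten like so (the endpoint value is hypothetical):
# Demonstrate the template substitution on one line of the config
echo 'gqlApiEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_GQL"' \
  | sed -E "s|REPLACE_WITH_CERC_IPLD_ETH_GQL|http://ipld-eth-server:8083/graphql|g"
# -> gqlApiEndpoint = "http://ipld-eth-server:8083/graphql"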

@ -0,0 +1,5 @@
# Defaults
# ipld-eth-server endpoints
DEFAULT_CERC_IPLD_ETH_RPC=
DEFAULT_CERC_IPLD_ETH_GQL=

View File

@ -0,0 +1,28 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_SNAPSHOT_GQL_ENDPOINT="${CERC_SNAPSHOT_GQL_ENDPOINT:-${DEFAULT_CERC_SNAPSHOT_GQL_ENDPOINT}}"
CERC_SNAPSHOT_BLOCKHASH="${CERC_SNAPSHOT_BLOCKHASH:-${DEFAULT_CERC_SNAPSHOT_BLOCKHASH}}"
CHECKPOINT_FILE_PATH="./state_checkpoint/state-gql-${CERC_SNAPSHOT_BLOCKHASH}"
if [ -f "${CHECKPOINT_FILE_PATH}" ]; then
# Skip checkpoint creation if the file already exists
echo "File at ${CHECKPOINT_FILE_PATH} already exists, skipping checkpoint creation..."
else
# Create a checkpoint using GQL endpoint
echo "Creating a state checkpoint using GQL endpoint..."
yarn create-state-gql \
--snapshot-block-hash "${CERC_SNAPSHOT_BLOCKHASH}" \
--gql-endpoint "${CERC_SNAPSHOT_GQL_ENDPOINT}" \
--output "${CHECKPOINT_FILE_PATH}"
fi
echo "Initializing watcher using a state snapshot..."
# Import the state checkpoint
# (skips if snapshot block is already indexed)
yarn import-state --import-file "${CHECKPOINT_FILE_PATH}"

View File

@ -0,0 +1,23 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_IPLD_ETH_RPC="${CERC_IPLD_ETH_RPC:-${DEFAULT_CERC_IPLD_ETH_RPC}}"
CERC_IPLD_ETH_GQL="${CERC_IPLD_ETH_GQL:-${DEFAULT_CERC_IPLD_ETH_GQL}}"
echo "Using ETH server RPC endpoint ${CERC_IPLD_ETH_RPC}"
echo "Using ETH server GQL endpoint ${CERC_IPLD_ETH_GQL}"
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_IPLD_ETH_GQL|${CERC_IPLD_ETH_GQL}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_RPC|${CERC_IPLD_ETH_RPC}| ")
# Write the modified content to a new file
echo "$WATCHER_CONFIG" > environments/local.toml
echo "Running job-runner"
DEBUG=vulcanize:* exec node --enable-source-maps dist/job-runner.js

View File

@ -0,0 +1,32 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_IPLD_ETH_RPC="${CERC_IPLD_ETH_RPC:-${DEFAULT_CERC_IPLD_ETH_RPC}}"
CERC_IPLD_ETH_GQL="${CERC_IPLD_ETH_GQL:-${DEFAULT_CERC_IPLD_ETH_GQL}}"
CERC_USE_STATE_SNAPSHOT="${CERC_USE_STATE_SNAPSHOT:-${DEFAULT_CERC_USE_STATE_SNAPSHOT}}"
echo "Using ETH server RPC endpoint ${CERC_IPLD_ETH_RPC}"
echo "Using ETH server GQL endpoint ${CERC_IPLD_ETH_GQL}"
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_IPLD_ETH_GQL|${CERC_IPLD_ETH_GQL}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_RPC|${CERC_IPLD_ETH_RPC}| ")
# Write the modified content to a new file
echo "$WATCHER_CONFIG" > environments/local.toml
if [ "$CERC_USE_STATE_SNAPSHOT" = true ] ; then
./create-and-import-checkpoint.sh
else
echo "Initializing watcher using fill..."
yarn fill --start-block $DEFAULT_CERC_GELATO_START_BLOCK --end-block $DEFAULT_CERC_GELATO_START_BLOCK
fi
echo "Running active server"
DEBUG=vulcanize:* exec node --enable-source-maps dist/server.js

View File

@ -0,0 +1,75 @@
[server]
host = "0.0.0.0"
port = 3008
kind = "active"
# Checkpointing state.
checkpointing = true
# Checkpoint interval in number of blocks.
checkpointInterval = 2000
# Enable state creation
# CAUTION: Disable only if state creation is not desired or can be filled subsequently
enableState = true
subgraphPath = "./subgraph"
# Interval to restart wasm instance periodically
wasmRestartBlocksInterval = 20
# Interval in number of blocks at which to clear entities cache.
clearEntitiesCacheInterval = 1000
# Boolean to filter logs by contract.
filterLogs = true
# Max block range for which to return events in eventsInRange GQL query.
# Use -1 for skipping check on block range.
maxEventsBlockRange = 1000
# GQL cache settings
[server.gqlCache]
enabled = true
# Max in-memory cache size (in bytes) (default 8 MB)
# maxCacheSize
# GQL cache-control max-age settings (in seconds)
maxAge = 15
timeTravelMaxAge = 86400 # 1 day
[metrics]
host = "0.0.0.0"
port = 9000
[metrics.gql]
port = 9001
[database]
type = "postgres"
host = "gelato-watcher-db"
port = 5432
database = "gelato-watcher"
username = "vdbm"
password = "password"
synchronize = true
logging = false
[upstream]
[upstream.ethServer]
gqlApiEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_GQL"
rpcProviderEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_RPC"
[upstream.cache]
name = "requests"
enabled = false
deleteOnStart = false
[jobQueue]
dbConnectionString = "postgres://vdbm:password@gelato-watcher-db/gelato-watcher-job-queue"
maxCompletionLagInSecs = 300
jobDelayInMilliSecs = 100
eventsInBatch = 50
blockDelayInMilliSecs = 2000
prefetchBlocksInMem = true
prefetchBlockCount = 10

View File

@ -0,0 +1,13 @@
# ipld-eth-server endpoints
DEFAULT_CERC_IPLD_ETH_RPC="http://ipld-eth-server:8082"
DEFAULT_CERC_IPLD_ETH_GQL="http://ipld-eth-server:8083/graphql"
# Gelato start block
DEFAULT_CERC_GELATO_START_BLOCK=11361987
# Whether to use a state snapshot to initialize the watcher
DEFAULT_CERC_USE_STATE_SNAPSHOT=false
# State snapshot params
DEFAULT_CERC_SNAPSHOT_GQL_ENDPOINT=
DEFAULT_CERC_SNAPSHOT_BLOCKHASH=

View File

@ -0,0 +1,89 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L2_GETH_RPC="${CERC_L2_GETH_RPC:-${DEFAULT_CERC_L2_GETH_RPC}}"
CERC_L1_ACCOUNTS_CSV_URL="${CERC_L1_ACCOUNTS_CSV_URL:-${DEFAULT_CERC_L1_ACCOUNTS_CSV_URL}}"
CERC_MOBYMASK_APP_BASE_URI="${CERC_MOBYMASK_APP_BASE_URI:-${DEFAULT_CERC_MOBYMASK_APP_BASE_URI}}"
CERC_DEPLOYED_CONTRACT="${CERC_DEPLOYED_CONTRACT:-${DEFAULT_CERC_DEPLOYED_CONTRACT}}"
# Check if the CERC_DEPLOYED_CONTRACT environment variable is set; if so, skip contract deployment
if [ -n "$CERC_DEPLOYED_CONTRACT" ]; then
echo "CERC_DEPLOYED_CONTRACT is set to '$CERC_DEPLOYED_CONTRACT'"
echo "Skipping contract deployment"
exit 0
fi
echo "Using L2 RPC endpoint ${CERC_L2_GETH_RPC}"
if [ -n "$CERC_L1_ACCOUNTS_CSV_URL" ] && \
l1_accounts_response=$(curl -L --write-out '%{http_code}' --silent --output /dev/null "$CERC_L1_ACCOUNTS_CSV_URL") && \
[ "$l1_accounts_response" -eq 200 ];
then
echo "Fetching L1 account credentials using provided URL"
mkdir -p /geth-accounts
wget -O /geth-accounts/accounts.csv "$CERC_L1_ACCOUNTS_CSV_URL"
# Read the private key of an L1 account to deploy contract
CERC_PRIVATE_KEY_DEPLOYER=$(head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 3)
else
echo "Couldn't fetch L1 account credentials, using CERC_PRIVATE_KEY_DEPLOYER from env"
fi
# Set the private key
jq --arg privateKey "$CERC_PRIVATE_KEY_DEPLOYER" '.privateKey = $privateKey' secrets-template.json > secrets.json
# Set the RPC URL
jq --arg rpcUrl "$CERC_L2_GETH_RPC" '.rpcUrl = $rpcUrl' secrets.json > secrets_updated.json && mv secrets_updated.json secrets.json
# Set the MobyMask app base URI
jq --arg baseURI "$CERC_MOBYMASK_APP_BASE_URI" '.baseURI = $baseURI' secrets.json > secrets_updated.json && mv secrets_updated.json secrets.json
# Wait for L2 Optimism Geth and Node servers to be up before deploying contract
CERC_L2_GETH_HOST="${CERC_L2_GETH_HOST:-${DEFAULT_CERC_L2_GETH_HOST}}"
CERC_L2_GETH_PORT="${CERC_L2_GETH_PORT:-${DEFAULT_CERC_L2_GETH_PORT}}"
CERC_L2_NODE_HOST="${CERC_L2_NODE_HOST:-${DEFAULT_CERC_L2_NODE_HOST}}"
CERC_L2_NODE_PORT="${CERC_L2_NODE_PORT:-${DEFAULT_CERC_L2_NODE_PORT}}"
./wait-for-it.sh -h "${CERC_L2_GETH_HOST}" -p "${CERC_L2_GETH_PORT}" -s -t 0
./wait-for-it.sh -h "${CERC_L2_NODE_HOST}" -p "${CERC_L2_NODE_PORT}" -s -t 0
export RPC_URL="${CERC_L2_GETH_RPC}"
# Check and exit if a deployment already exists (on restarts)
if [ -f ./config.json ]; then
echo "config.json already exists, checking the contract deployment"
# Read JSON file
DEPLOYMENT_DETAILS=$(cat config.json)
CONTRACT_ADDRESS=$(echo "$DEPLOYMENT_DETAILS" | jq -r '.address')
cd ../hardhat
if yarn verifyDeployment --network optimism --contract "${CONTRACT_ADDRESS}"; then
echo "Deployment verfication successful"
cd ../server
else
echo "Deployment verfication failed, please clear MobyMask deployment volume before starting"
exit 1
fi
fi
# Wait until balance for deployer account is updated
cd ../hardhat
while true; do
ACCOUNT_BALANCE=$(yarn balance --network optimism "$CERC_PRIVATE_KEY_DEPLOYER" | grep ETH)
if [ "$ACCOUNT_BALANCE" != "0.0 ETH" ]; then
echo "Account balance updated: $ACCOUNT_BALANCE"
break # exit the loop
fi
echo "Account balance not updated: $ACCOUNT_BALANCE"
echo "Checking after 2 seconds"
sleep 2
done
cd ../server
npm run deployAndGenerateInvite

View File
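The CSV handling in the script above assumes the private key is the last (third) field of each row; a sketch using the same extraction commands (the row layout shown is an assumption, not taken from the diff):
# Assumed accounts.csv row layout: <address>,<other-field>,<private-key>
head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 3    # first account's key (contract deployer)
awk -F, 'NR==2{print $NF}' /geth-accounts/accounts.csv     # second account's key (used later by the watcher peer)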

@ -0,0 +1,20 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
# Check for peer ids in the /peer-ids folder, create them if not present
if [ -f /peer-ids/relay-id.json ]; then
echo "Using peer id for relay node from the mounted volume"
else
echo "Creating a new peer id for relay node"
yarn create-peer -f /peer-ids/relay-id.json
fi
if [ -f /peer-ids/peer-id.json ]; then
echo "Using peer id for peer node from the mounted volume"
else
echo "Creating a new peer id for peer node"
yarn create-peer -f /peer-ids/peer-id.json
fi

View File

@ -0,0 +1,43 @@
#!/usr/bin/env bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_CHAIN_ID="${CERC_CHAIN_ID:-${DEFAULT_CERC_CHAIN_ID}}"
CERC_DEPLOYED_CONTRACT="${CERC_DEPLOYED_CONTRACT:-${DEFAULT_CERC_DEPLOYED_CONTRACT}}"
CERC_RELAY_NODES="${CERC_RELAY_NODES:-${DEFAULT_CERC_RELAY_NODES}}"
CERC_DENY_MULTIADDRS="${CERC_DENY_MULTIADDRS:-${DEFAULT_CERC_DENY_MULTIADDRS}}"
CERC_APP_WATCHER_URL="${CERC_APP_WATCHER_URL:-${DEFAULT_CERC_APP_WATCHER_URL}}"
# If not set (or []), check the mounted volume for relay peer id
if [ -z "$CERC_RELAY_NODES" ] || [ "$CERC_RELAY_NODES" = "[]" ]; then
echo "CERC_RELAY_NODES not provided, taking from the mounted volume"
CERC_RELAY_NODES="[\"/ip4/127.0.0.1/tcp/9090/ws/p2p/$(jq -r '.id' /peers/relay-id.json)\"]"
fi
echo "Using CERC_RELAY_NODES $CERC_RELAY_NODES"
if [ -z "$CERC_DEPLOYED_CONTRACT" ]; then
# Use config from mounted volume (when running web-app along with watcher stack)
echo "Taking config for deployed contract from mounted volume"
while [ ! -f /server/config.json ]; do
echo "Config not found, retrying after 5 seconds"
sleep 5
done
# Get deployed contract address and chain id
CERC_DEPLOYED_CONTRACT=$(jq -r '.address' /server/config.json | tr -d '"')
CERC_CHAIN_ID=$(jq -r '.chainId' /server/config.json)
else
echo "Taking deployed contract details from env"
fi
# Use yq to create config.yml with environment variables
yq -n ".address = env(CERC_DEPLOYED_CONTRACT)" > /config/config.yml
yq ".watcherUrl = env(CERC_APP_WATCHER_URL)" -i /config/config.yml
yq ".chainId = env(CERC_CHAIN_ID)" -i /config/config.yml
yq ".relayNodes = strenv(CERC_RELAY_NODES)" -i /config/config.yml
yq ".denyMultiaddrs = strenv(CERC_DENY_MULTIADDRS)" -i /config/config.yml
/scripts/start-serving-app.sh

View File
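For orientation, the yq commands above should leave /config/config.yml looking roughly like the sketch below; all values are placeholders, and relayNodes/denyMultiaddrs are written as JSON-array strings via strenv:
#   address: "0x<deployed-contract-address>"
#   watcherUrl: "http://localhost:3001"
#   chainId: 42069
#   relayNodes: '["/ip4/127.0.0.1/tcp/9090/ws/p2p/<relay-peer-id>"]'
#   denyMultiaddrs: '[]'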

@ -0,0 +1,29 @@
# Defaults
# Watcher endpoint
DEFAULT_CERC_APP_WATCHER_URL="http://localhost:3001"
# Set of relay peers to connect to from the relay node
DEFAULT_CERC_RELAY_PEERS=[]
# Domain to be used in the relay node's announce address
DEFAULT_CERC_RELAY_ANNOUNCE_DOMAIN=
# Base URI for mobymask-app (used for generating invite)
DEFAULT_CERC_MOBYMASK_APP_BASE_URI="http://127.0.0.1:3002/#"
# Set to false to disable the watcher peer from sending txs to L2
DEFAULT_CERC_ENABLE_PEER_L2_TXS=true
# Set deployed MobyMask contract address to avoid deploying contract in stack
# mobymask-app will use this contract address in config if run separately
DEFAULT_CERC_DEPLOYED_CONTRACT=
# Chain ID is used by mobymask web-app for txs
DEFAULT_CERC_CHAIN_ID=42069
# Set of relay nodes to be used by web-apps
DEFAULT_CERC_RELAY_NODES=[]
# Set of multiaddrs to be avoided while dialling
DEFAULT_CERC_DENY_MULTIADDRS=[]

View File

@ -0,0 +1,14 @@
# Defaults
# L2 endpoints
DEFAULT_CERC_L2_GETH_RPC="http://op-geth:8545"
# Endpoints waited on before contract deployment
DEFAULT_CERC_L2_GETH_HOST="op-geth"
DEFAULT_CERC_L2_GETH_PORT=8545
DEFAULT_CERC_L2_NODE_HOST="op-node"
DEFAULT_CERC_L2_NODE_PORT=8547
# URL to get CSV with credentials for accounts on L1 to perform txs on L2
DEFAULT_CERC_L1_ACCOUNTS_CSV_URL="http://fixturenet-eth-bootnode-geth:9898/accounts.csv"

View File

@ -0,0 +1,5 @@
{
"rpcUrl": "",
"privateKey": "",
"baseURI": ""
}

View File

@ -0,0 +1,10 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_RELAY_MULTIADDR="/dns4/mobymask-watcher-server/tcp/9090/ws/p2p/$(jq -r '.id' /peer-ids/relay-id.json)"
# Write the relay node's multiaddr to /app/packages/peer/.env for running tests
echo "RELAY=\"$CERC_RELAY_MULTIADDR\"" > ./.env

View File
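The resulting .env line takes the form below (the peer id shown is illustrative):
# RELAY="/dns4/mobymask-watcher-server/tcp/9090/ws/p2p/12D3KooW..."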

@ -0,0 +1,64 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L2_GETH_RPC="${CERC_L2_GETH_RPC:-${DEFAULT_CERC_L2_GETH_RPC}}"
CERC_L1_ACCOUNTS_CSV_URL="${CERC_L1_ACCOUNTS_CSV_URL:-${DEFAULT_CERC_L1_ACCOUNTS_CSV_URL}}"
CERC_RELAY_PEERS="${CERC_RELAY_PEERS:-${DEFAULT_CERC_RELAY_PEERS}}"
CERC_DENY_MULTIADDRS="${CERC_DENY_MULTIADDRS:-${DEFAULT_CERC_DENY_MULTIADDRS}}"
CERC_RELAY_ANNOUNCE_DOMAIN="${CERC_RELAY_ANNOUNCE_DOMAIN:-${DEFAULT_CERC_RELAY_ANNOUNCE_DOMAIN}}"
CERC_ENABLE_PEER_L2_TXS="${CERC_ENABLE_PEER_L2_TXS:-${DEFAULT_CERC_ENABLE_PEER_L2_TXS}}"
CERC_DEPLOYED_CONTRACT="${CERC_DEPLOYED_CONTRACT:-${DEFAULT_CERC_DEPLOYED_CONTRACT}}"
echo "Using L2 RPC endpoint ${CERC_L2_GETH_RPC}"
# Use public domain for relay multiaddr in peer config if specified
# Otherwise, use the docker container's host IP
if [ -n "$CERC_RELAY_ANNOUNCE_DOMAIN" ]; then
CERC_RELAY_MULTIADDR="/dns4/${CERC_RELAY_ANNOUNCE_DOMAIN}/tcp/443/wss/p2p/$(jq -r '.id' /app/peers/relay-id.json)"
else
CERC_RELAY_MULTIADDR="/dns4/mobymask-watcher-server/tcp/9090/ws/p2p/$(jq -r '.id' /app/peers/relay-id.json)"
fi
# Use contract address from environment variable or set from config.json in mounted volume
if [ -n "$CERC_DEPLOYED_CONTRACT" ]; then
CONTRACT_ADDRESS="${CERC_DEPLOYED_CONTRACT}"
else
# Assign deployed contract address from server config (created by mobymask container after deploying contract)
CONTRACT_ADDRESS=$(jq -r '.address' /server/config.json | tr -d '"')
fi
if [ -n "$CERC_L1_ACCOUNTS_CSV_URL" ] && \
l1_accounts_response=$(curl -L --write-out '%{http_code}' --silent --output /dev/null "$CERC_L1_ACCOUNTS_CSV_URL") && \
[ "$l1_accounts_response" -eq 200 ];
then
echo "Fetching L1 account credentials using provided URL"
mkdir -p /geth-accounts
wget -O /geth-accounts/accounts.csv "$CERC_L1_ACCOUNTS_CSV_URL"
# Read the private key of an L1 account for sending txs from peer
CERC_PRIVATE_KEY_PEER=$(awk -F, 'NR==2{print $NF}' /geth-accounts/accounts.csv)
else
echo "Couldn't fetch L1 account credentials, using CERC_PRIVATE_KEY_PEER from env"
fi
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_RELAY_PEERS|${CERC_RELAY_PEERS}|g; \
s|REPLACE_WITH_CERC_DENY_MULTIADDRS|${CERC_DENY_MULTIADDRS}|g; \
s/REPLACE_WITH_CERC_RELAY_ANNOUNCE_DOMAIN/${CERC_RELAY_ANNOUNCE_DOMAIN}/g; \
s|REPLACE_WITH_CERC_RELAY_MULTIADDR|${CERC_RELAY_MULTIADDR}|g; \
s/REPLACE_WITH_CERC_ENABLE_PEER_L2_TXS/${CERC_ENABLE_PEER_L2_TXS}/g; \
s/REPLACE_WITH_CERC_PRIVATE_KEY_PEER/${CERC_PRIVATE_KEY_PEER}/g; \
s/REPLACE_WITH_CONTRACT_ADDRESS/${CONTRACT_ADDRESS}/g; \
s|REPLACE_WITH_CERC_L2_GETH_RPC_ENDPOINT|${CERC_L2_GETH_RPC}| ")
# Write the modified content to a new file
echo "$WATCHER_CONFIG" > environments/local.toml
echo 'yarn server'
yarn server

View File

@ -0,0 +1,7 @@
{
"relayNodes": [],
"peer": {
"denyMultiaddrs": [],
"enableDebugInfo": true
}
}

View File

@ -0,0 +1,22 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_RELAY_NODES="${CERC_RELAY_NODES:-${DEFAULT_CERC_RELAY_NODES}}"
CERC_DENY_MULTIADDRS="${CERC_DENY_MULTIADDRS:-${DEFAULT_CERC_DENY_MULTIADDRS}}"
# If not set (or []), check the mounted volume for relay peer id
if [ -z "$CERC_RELAY_NODES" ] || [ "$CERC_RELAY_NODES" = "[]" ]; then
echo "CERC_RELAY_NODES not provided, taking from the mounted volume"
CERC_RELAY_NODES="[\"/ip4/127.0.0.1/tcp/9090/ws/p2p/$(jq -r '.id' /peers/relay-id.json)\"]"
fi
echo "Using CERC_RELAY_NODES $CERC_RELAY_NODES"
# Use yq to create config.yml with environment variables
yq -n ".relayNodes = strenv(CERC_RELAY_NODES)" > /config/config.yml
yq ".denyMultiaddrs = strenv(CERC_DENY_MULTIADDRS)" -i /config/config.yml
/scripts/start-serving-app.sh

View File

@ -0,0 +1,78 @@
[server]
host = "0.0.0.0"
port = 3001
kind = "lazy"
# Checkpointing state.
checkpointing = true
# Checkpoint interval in number of blocks.
checkpointInterval = 2000
# Enable state creation
enableState = true
# Boolean to filter logs by contract.
filterLogs = true
# Max block range for which to return events in eventsInRange GQL query.
# Use -1 for skipping check on block range.
maxEventsBlockRange = -1
[server.p2p]
enableRelay = true
enablePeer = true
[server.p2p.relay]
host = "0.0.0.0"
port = 9090
relayPeers = REPLACE_WITH_CERC_RELAY_PEERS
denyMultiaddrs = REPLACE_WITH_CERC_DENY_MULTIADDRS
peerIdFile = './peers/relay-id.json'
announce = 'REPLACE_WITH_CERC_RELAY_ANNOUNCE_DOMAIN'
enableDebugInfo = true
[server.p2p.peer]
relayMultiaddr = 'REPLACE_WITH_CERC_RELAY_MULTIADDR'
pubSubTopic = 'mobymask'
denyMultiaddrs = REPLACE_WITH_CERC_DENY_MULTIADDRS
peerIdFile = './peers/peer-id.json'
enableDebugInfo = true
enableL2Txs = REPLACE_WITH_CERC_ENABLE_PEER_L2_TXS
[server.p2p.peer.l2TxsConfig]
privateKey = 'REPLACE_WITH_CERC_PRIVATE_KEY_PEER'
contractAddress = 'REPLACE_WITH_CONTRACT_ADDRESS'
[metrics]
host = "0.0.0.0"
port = 9000
[metrics.gql]
port = 9001
[database]
type = "postgres"
host = "mobymask-watcher-db"
port = 5432
database = "mobymask-watcher"
username = "vdbm"
password = "password"
synchronize = true
logging = false
[upstream]
[upstream.ethServer]
gqlApiEndpoint = "http://ipld-eth-server:8083/graphql"
rpcProviderEndpoint = "REPLACE_WITH_CERC_L2_GETH_RPC_ENDPOINT"
blockDelayInMilliSecs = 60000
[upstream.cache]
name = "requests"
enabled = false
deleteOnStart = false
[jobQueue]
dbConnectionString = "postgres://vdbm:password@mobymask-watcher-db/mobymask-watcher-job-queue"
maxCompletionLagInSecs = 300
jobDelayInMilliSecs = 100
eventsInBatch = 50

View File

@ -0,0 +1,13 @@
# sourced into container build scripts to do generic command setup
if [[ -n "$CERC_SCRIPT_DEBUG" ]]; then
set -x
echo "Build environment variables:"
env
fi
build_command_args=""
if [[ ${CERC_FORCE_REBUILD} == "true" ]]; then
build_command_args="${build_command_args} --no-cache"
fi
if [[ -n "$CERC_CONTAINER_EXTRA_BUILD_ARGS" ]]; then
build_command_args="${build_command_args} ${CERC_CONTAINER_EXTRA_BUILD_ARGS}"
fi

View File
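A sketch of opting into these hooks from the environment; --progress=plain is only an example of an extra build argument:
export CERC_FORCE_REBUILD=true                              # adds --no-cache to build_command_args
export CERC_CONTAINER_EXTRA_BUILD_ARGS="--progress=plain"   # appended verbatim to build_command_args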

@ -0,0 +1,5 @@
#!/usr/bin/env bash
# Build a local version of the task executor for act-runner
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/act-runner-task-executor:local -f ${CERC_REPO_BASE_DIR}/hosting/gitea/Dockerfile.task-executor ${build_command_args} ${SCRIPT_DIR}

View File

@ -0,0 +1,5 @@
#!/usr/bin/env bash
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
# Build a local version of the act-runner image
# TODO: enhance the default build code path to cope with this container (repo has an _ which needs to be converted to - in the image tag)
docker build -t cerc/act-runner:local -f ${CERC_REPO_BASE_DIR}/act_runner/Dockerfile ${build_command_args} ${CERC_REPO_BASE_DIR}/act_runner

View File

@ -16,6 +16,12 @@ RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NO
RUN mkdir /scripts
COPY install-dependencies.sh /scripts
# Override the definition of GERBIL_PATH in the base image; this is safe
# because (at present) no gerbil packages are installed in the base image.
# We do this in order to allow a set of pre-installed packages from the container
# to be used with an arbitrary, potentially different set of projects bind mounted
# at /src
ENV GERBIL_PATH=/.gerbil
RUN bash /scripts/install-dependencies.sh
# Needed to prevent git from raging about /src
View File

@ -10,6 +10,7 @@ DEPS=(github.com/fare/gerbil-utils
github.com/vyzo/gerbil-libp2p
) ;
for i in ${DEPS[@]} ; do
gxpkg install $i && echo "Installing gerbil package: $i"
gxpkg install $i
gxpkg build $i
done

View File

@ -1,14 +1,37 @@
# Originally from: https://github.com/devcontainers/images/blob/main/src/javascript-node/.devcontainer/Dockerfile
# Which depends on: https://github.com/nodejs/docker-node/blob/main/Dockerfile-debian.template
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 18, 16, 14, 18-bullseye, 16-bullseye, 14-bullseye, 18-buster, 16-buster, 14-buster
ARG VARIANT=16-bullseye
ARG VARIANT=18-bullseye
FROM node:${VARIANT}
# Set these args to change the uid/gid for the base container's "node" user to match that of the host user (so bind mounts work as expected).
ARG CERC_HOST_UID=1000
ARG CERC_HOST_GID=1000
# Make these values available at runtime to allow a consistency check.
ENV HOST_UID=${CERC_HOST_UID}
ENV HOST_GID=${CERC_HOST_GID}
ARG USERNAME=node
ARG NPM_GLOBAL=/usr/local/share/npm-global
# Add NPM global to PATH.
ENV PATH=${NPM_GLOBAL}/bin:${PATH}
SHELL ["/bin/bash", "-c"]
RUN \
# Don't switch container uid/gid if the host uid/gid is 1000 (which means it's already correct),
# or root (which won't work anyway) or <= 100 (which also won't work).
if [[ ${CERC_HOST_GID} -ne 1000 && ${CERC_HOST_GID} -ne 0 && ${CERC_HOST_GID} -gt 100 ]]; then \
groupmod -g ${CERC_HOST_GID} ${USERNAME}; \
fi \
&& if [[ ${CERC_HOST_UID} -ne 1000 && ${CERC_HOST_UID} -ne 0 && ${CERC_HOST_UID} -gt 100 ]]; then \
usermod -u ${CERC_HOST_UID} -g ${CERC_HOST_GID} ${USERNAME} && chown ${CERC_HOST_UID}:${CERC_HOST_GID} /home/${USERNAME}; \
fi
# Prevents npm from printing version warnings
ENV NPM_CONFIG_UPDATE_NOTIFIER=false
RUN \
# Configure global npm install location, use group to adapt to UID/GID changes
if ! cat /etc/group | grep -e "^npm:" > /dev/null 2>&1; then groupadd -r npm; fi \
@ -39,6 +62,7 @@ RUN mkdir /scripts
COPY build-npm-package.sh /scripts
COPY yarn-local-registry-fixup.sh /scripts
COPY build-npm-package-local-dependencies.sh /scripts
COPY check-uid.sh /scripts
ENV PATH="${PATH}:/scripts"
COPY entrypoint.sh .

View File

@ -1,7 +1,7 @@
#!/bin/bash
# Usage: build-npm-package-local-dependencies.sh <registry-url> <publish-with-this-version>
# Runs build-npm-package.sh after first fixing up yarn.lock to use a local
# npm registry for all packages in a spcific scope (currently @cerc-io)
# npm registry for all packages in a specific scope (currently @cerc-io, @lirewine and @muknsys)
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
@ -17,18 +17,21 @@ fi
set -e
local_npm_registry_url=$1
package_publish_version=$2
# TODO: make this a paramater and allow a list of scopes
# If we need to handle an additional scope, add it to the list below:
npm_scope_for_local="@cerc-io"
npm_scopes_to_handle=("@cerc-io" "@lirewine" "@muknsys")
# We need to configure the local registry
for npm_scope_for_local in ${npm_scopes_to_handle[@]}
npm config set ${npm_scope_for_local}:registry ${local_npm_registry_url}
npm config set -- ${local_npm_registry_url}:_authToken ${CERC_NPM_AUTH_TOKEN}
# Find the set of dependencies from the specified scope
mapfile -t dependencies_from_scope < <(cat package.json | jq -r '.dependencies | with_entries(if (.key|test("^'${npm_scope_for_local}'/.*$")) then ( {key: .key, value: .value } ) else empty end ) | keys[]')
echo "Fixing up dependencies"
for package in "${dependencies_from_scope[@]}"
do
# We need to configure the local registry
npm config set ${npm_scope_for_local}:registry ${local_npm_registry_url}
npm config set -- ${local_npm_registry_url}:_authToken ${CERC_NPM_AUTH_TOKEN}
# Find the set of dependencies from the specified scope
mapfile -t dependencies_from_scope < <(cat package.json | jq -r '.dependencies | with_entries(if (.key|test("^'${npm_scope_for_local}'/.*$")) then ( {key: .key, value: .value } ) else empty end ) | keys[]')
echo "Fixing up dependencies in scope ${npm_scope_for_local}"
for package in "${dependencies_from_scope[@]}"
do
echo "Fixing up package ${package}"
yarn-local-registry-fixup.sh $package ${local_npm_registry_url}
done
done
echo "Running build"
build-npm-package.sh ${local_npm_registry_url} ${package_publish_version}

View File
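After the scope loop above, the npm config should hold one registry entry per handled scope, roughly as sketched below (the registry host is a placeholder):
# @cerc-io:registry=http://<local-registry-host>/
# @lirewine:registry=http://<local-registry-host>/
# @muknsys:registry=http://<local-registry-host>/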

@ -22,14 +22,24 @@ set -e
# Get the name of this package from package.json since we weren't passed that
package_name=$( cat package.json | jq -r .name )
local_npm_registry_url=$1
npm config set @cerc-io:registry ${local_npm_registry_url}
npm config set -- ${local_npm_registry_url}:_authToken ${CERC_NPM_AUTH_TOKEN}
npm config set @lirewine:registry ${local_npm_registry_url}
npm config set @muknsys:registry ${local_npm_registry_url}
# Workaround bug in npm unpublish where it needs the url to be of the form //<foo> and not http://<foo>
local_npm_registry_url_fixed=$( echo ${local_npm_registry_url} | sed -e 's/^http[s]\{0,1\}://')
npm config set -- ${local_npm_registry_url_fixed}:_authToken ${CERC_NPM_AUTH_TOKEN}
# First check if the version of this package we're trying to build already exists in the registry
package_exists=$( yarn info --json ${package_name}@${package_publish_version} 2>/dev/null | jq -r .data.dist.tarball )
if [[ ! -z "$package_exists" && "$package_exists" != "null" ]]; then
echo "${package_publish_version} of ${package_name} already exists in the registry, skipping build"
echo "${package_publish_version} of ${package_name} already exists in the registry"
if [[ ${CERC_FORCE_REBUILD} == "true" ]]; then
# Attempt to unpublish the existing package
echo "NOTE: unpublishing existing package version since force rebuild is enabled"
npm unpublish --force ${package_name}@${package_publish_version}
else
echo "skipping build since target version already exists"
exit 0
fi
fi
echo "Build and publish ${package_name} version ${package_publish_version}"
yarn install

View File
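The registry-url fixup above strips the scheme prefix so the _authToken key matches what npm expects; for example, with a hypothetical registry URL:
# http://gitea.example.com:3000/api/packages/cerc-io/npm/  ->  //gitea.example.com:3000/api/packages/cerc-io/npm/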

@ -0,0 +1,21 @@
#!/bin/bash
# Make the container usable for uid/gid != 1000
if [[ -n "$CERC_SCRIPT_DEBUG" ]]; then
set -x
fi
current_uid=$(id -u)
current_gid=$(id -g)
# Don't check if running as root
if [[ ${current_uid} == 0 ]]; then
exit 0
fi
# Check the current uid/gid vs the uid/gid used to build the container.
# We do this because both bind mounts and npm tooling require the uid/gid to match.
if [[ ${current_gid} != ${HOST_GID} ]]; then
echo "Warning: running with gid: ${current_gid} which is not the gid for which this container was built (${HOST_GID})"
exit 0
fi
if [[ ${current_uid} != ${HOST_UID} ]]; then
echo "Warning: running with gid: ${current_uid} which is not the uid for which this container was built (${HOST_UID})"
exit 0
fi

View File
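A sketch of passing the host uid/gid through the build args that this check compares against (the image tag is illustrative):
docker build \
  --build-arg CERC_HOST_UID=$(id -u) \
  --build-arg CERC_HOST_GID=$(id -g) \
  -t cerc/example-js-builder:local .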

@ -1,2 +1,3 @@
#!/bin/sh
/scripts/check-uid.sh
exec "$@"

View File

@ -18,16 +18,17 @@ fi
set -e
target_package=$1
local_npm_registry_url=$2
# TODO: use jq rather than sed here:
# Extract the actual version pinned in yarn.lock
versioned_target_package=$(grep ${target_package} package.json | sed -e 's#[[:space:]]\{1,\}\"\('${target_package}'\)\":[[:space:]]\{1,\}\"\(.*\)\",#\1@\2#' )
# See: https://stackoverflow.com/questions/60454251/how-to-know-the-version-of-currently-installed-package-from-yarn-lock
versioned_target_package=$(yarn list --pattern ${target_package} --depth=0 --json --non-interactive --no-progress | jq -r '.data.trees[].name')
# Use yarn info to get URL checksums etc from the new registry
yarn_info_output=$(yarn info --json $versioned_target_package 2>/dev/null)
# First check if the target version actually exists.
# If it doesn't exist there will be no .data.dist.tarball element,
# and jq will output the string "null"
package_tarball=$(echo $yarn_info_output | jq -r .data.dist.tarball)
if [[ $package_tarball == "null" ]]; then
if [[ "$yarn_info_output" == "" || $package_tarball == "null" ]]; then
echo "FATAL: Target package version ($versioned_target_package) not found" >&2
echo "FATAL: Target package version ($versioned_target_package) not found (or bad npm auth token)" >&2
exit 1
fi
# Code below parses out the values we need

View File
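The new yarn list | jq pipeline above should print the pinned <name>@<version> string for the target package, e.g. (package name and version are illustrative):
# @cerc-io/some-package@0.1.0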

@ -1,3 +1,4 @@
#!/usr/bin/env bash
# Build cerc/eth-probe
docker build -t cerc/eth-probe:local ${CERC_REPO_BASE_DIR}/eth-probe
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
docker build -t cerc/eth-probe:local ${build_command_args} ${CERC_REPO_BASE_DIR}/eth-probe

View File

@ -1,3 +1,4 @@
#!/usr/bin/env bash
# Build cerc/eth-statediff-fill-service
docker build -t cerc/eth-statediff-fill-service:local ${CERC_REPO_BASE_DIR}/eth-statediff-fill-service
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
docker build -t cerc/eth-statediff-fill-service:local ${build_command_args} ${CERC_REPO_BASE_DIR}/eth-statediff-fill-service

Some files were not shown because too many files have changed in this diff.