Centrifuge
Set up your Centrifuge Mainnet or Testnet node.
Prerequisites
- Set up your Axelar validator
- Minimum hardware requirements: 2+ CPU cores, 4GB+ RAM, 200GB+ of free storage space.
- Recommended hardware: 4+ CPU cores, 16GB RAM, 1TB SSD or faster storage.
- Check out Centrifuge Releases to identify the correct release for mainnet.
- Docker OR `rustup` is installed
Options
- Run with Docker
- Run with binaries
The CLI options in this doc are a reference that works for the Centrifuge team, but feel free to adjust the settings for your own setup using the official docs: https://docs.substrate.io/reference/command-line-tools/node-template/. The info here has been derived from the official documentation, where you can find more details as well as all the options for logging and debugging: https://docs.substrate.io/deploy/deployment-options/
1. Run with Docker
You can use the container published by Centrifuge on their DockerHub repo, or be fully trustless by cloning the centrifuge-chain repository and using the Dockerfile (2-4h build time on an average machine). In the latter case, make sure to check out the specific commit for the latest release before building.
To find the latest release, go to the Centrifuge repository and look for the listed Docker image.
More images are available in the official Docker Hub repository.
Note: the tag `latest` will always point to the latest release. Make sure that your system actually pulls the image every time, though, or you'll end up with your locally cached version. For docker-compose this is achieved using the `--pull always` option of `docker compose up`.
Create docker compose file
Create a `docker-compose.yml` file with the following contents.
Change the `ports` based on your network setup.
Replace `/mnt/my_volume/data` with the volume and/or data folder you want to use.
From version 0.10.35 onwards, the Docker container uses an unprivileged user to run the centrifuge-chain, so it may not be able to access the `/data` folder. To be sure, either:
- Make sure your data folder is owned by the centrifuge user, as shown in the sketch after this list
- Run the container as root by adding `user: root` to your service definition in the docker-compose below (not recommended)
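For the first option, a minimal sketch is below; the UID/GID of the container's unprivileged user is an assumption here, so verify it against the image you run (e.g. with `docker run --rm --entrypoint id centrifugeio/centrifuge-chain:latest`):

```bash
# Assumption: the container's unprivileged "centrifuge" user maps to UID/GID 1000.
# Verify against your image before applying.
sudo chown -R 1000:1000 /mnt/my_volume/data
```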
From version 0.10.35 the relay chain DB folder has changed names from `relay-chain` to `polkadot`.
The container is prepared to handle this change automatically, but it is advised to check your `/mnt/my_volume/data` for any extra folders after you upgrade from a previous version.
For a first-time setup, ignore this message.
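A minimal sketch of the `docker-compose.yml`, assuming the image's entrypoint is the node binary and current Substrate-style flags; the image tag, chain, node name, and port mappings are illustrative and should be checked against the current Centrifuge releases:

```yaml
services:
  centrifuge:
    # Pin a specific release tag for mainnet; "latest" is fine for testnet
    image: centrifugeio/centrifuge-chain:latest
    restart: unless-stopped
    ports:
      - "30333:30333" # p2p
      - "9933:9933"   # RPC
    volumes:
      - /mnt/my_volume/data:/data
    # Standard Substrate CLI flags - adapt the name, ports, and chain to your setup.
    # Arguments after a "--" separator would apply to the embedded relay chain node.
    command:
      - --chain=centrifuge
      - --base-path=/data
      - --name=my-centrifuge-node
      - --port=30333
      - --rpc-port=9933
      - --rpc-external
      - --rpc-cors=all
```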
Run the container
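From the directory containing your `docker-compose.yml`, for example:

```bash
# Start the node in the background; --pull always re-pulls the referenced tag
docker compose up -d --pull always

# Follow the node logs
docker compose logs -f
```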
2. Get or build binaries
Prepare user and folder
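The exact user and folder are up to you; a sketch assuming a dedicated `centrifuge` system user and `/var/lib/centrifuge` as the base path:

```bash
# Create a dedicated system user with no login shell
sudo useradd --system --no-create-home --shell /usr/sbin/nologin centrifuge

# Create the data folder and hand ownership to that user
sudo mkdir -p /var/lib/centrifuge/data
sudo chown -R centrifuge:centrifuge /var/lib/centrifuge
```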
Getting the binary
Use `latest` for testnet, or a specific release tag for mainnet binaries. Keep in mind that the retrieved binary is built for Linux.
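A sketch of fetching a release binary; the tag and asset URL below are hypothetical, so check the assets listed on the centrifuge-chain Releases page for the actual names:

```bash
# Hypothetical release tag - pick the one listed for mainnet
VERSION="vX.Y.Z"

# Hypothetical asset URL pattern - verify on https://github.com/centrifuge/centrifuge-chain/releases
wget -O centrifuge-chain \
  "https://github.com/centrifuge/centrifuge-chain/releases/download/${VERSION}/centrifuge-chain"

chmod +x centrifuge-chain
sudo mv centrifuge-chain /usr/local/bin/
```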
Configure systemd
1. Create systemd service file
We are now ready to start the node, but to ensure it is running in the background and auto-restarts in case of a server failure, we will set up a service file using systemd.
Change the `ports` based on your network setup.
Note: It is important to keep the `--bootnodes $ADDR` arguments on a single line, as otherwise they are not parsed correctly, making it impossible for the chain to find peers since no bootnodes will be present.
You’ll have to download the chain specs first:
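A sketch, assuming the raw chain spec JSON is fetched from the centrifuge-chain repository; the exact path below is hypothetical, so locate the actual spec file in the repo:

```bash
# Hypothetical raw URL - verify the real location in the centrifuge-chain repository
sudo mkdir -p /var/lib/centrifuge
sudo wget -O /var/lib/centrifuge/centrifuge-spec-raw.json \
  "https://raw.githubusercontent.com/centrifuge/centrifuge-chain/main/node/res/centrifuge-spec-raw.json"
```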
Create systemd file now:
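A minimal sketch of `/etc/systemd/system/centrifuge.service`, assuming the user, binary, and paths from the previous steps; the flags are standard Substrate CLI options and should be adapted (ports, node name, bootnode addresses) to your setup:

```ini
[Unit]
Description=Centrifuge Node
After=network-online.target
Wants=network-online.target

[Service]
User=centrifuge
Group=centrifuge
# Backslash continuations are joined by systemd, so the --bootnodes list
# below still ends up on a single command line, as required.
# Replace $ADDR with the actual bootnode multiaddresses.
ExecStart=/usr/local/bin/centrifuge-chain \
  --chain=/var/lib/centrifuge/centrifuge-spec-raw.json \
  --base-path=/var/lib/centrifuge/data \
  --name=my-centrifuge-node \
  --port=30333 \
  --rpc-port=9933 \
  --bootnodes $ADDR
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```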
2. Start the systemd service
Enable the previously generated service and start it:
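Using the unit name from the sketch above:

```bash
# Reload systemd so it picks up the new unit file, then enable and start it
sudo systemctl daemon-reload
sudo systemctl enable centrifuge.service
sudo systemctl start centrifuge.service
```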
If everything was set-up correctly, your node should now be starting the process of synchronization. This will take several hours, depending on your hardware. To check the status of the running service or to follow the logs, use:
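For example:

```bash
# Check the status of the running service
sudo systemctl status centrifuge.service

# Follow the logs
sudo journalctl -u centrifuge.service -f
```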
Test your RPC connection
Once your node is fully synced, you can run a cURL request to see the status of your node. Use the port you configured in your `/etc/systemd/system/centrifuge.service` file above.
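For example, via the standard Substrate `system_health` RPC method (the port below assumes the sketch above):

```bash
curl -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method":"system_health", "params":[]}' \
  http://localhost:9933
```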
Expected output if the node is synced: `{"peers":35,"isSyncing":false,"shouldHavePeers":true}`
Optional: Using a snapshot instead of syncing from scratch
Centrifuge currently does not maintain automated snapshots; if you need a newer snapshot, please reach out to the dev team.
- By downloading a snapshot from the Centrifuge dev team:
- You get a faster sync; your full node will be ready within hours (the time depends on how old the snapshot is)
- You are trusting the Centrifuge team's snapshots, so this is not as "trustless" or "decentralized" as syncing from scratch
Prerequisites:
- Install Gcloud CLI
- Install lz4, e.g.: `sudo snap install lz4`
Step-by-step instructions:
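A sketch of the flow; the snapshot URI below is a placeholder that the Centrifuge dev team will provide:

```bash
# Placeholder URI - provided by the Centrifuge dev team
SNAPSHOT_URI="gs://<bucket-from-centrifuge-team>/<snapshot>.tar.lz4"
DATA_FOLDER_PATH="/var/lib/centrifuge/data"

# Download the snapshot using the gcloud CLI
gcloud storage cp "$SNAPSHOT_URI" ./snapshot.tar.lz4

# Decompress and unpack it into your data folder
lz4 -d snapshot.tar.lz4 snapshot.tar
mkdir -p "$DATA_FOLDER_PATH"
tar -xf snapshot.tar -C "$DATA_FOLDER_PATH"
```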
Inspect the `$DATA_FOLDER_PATH`; it should contain a `chain` and a `relay-chain` directory, the parachain and relay chain DB data folders respectively. Use `$DATA_FOLDER_PATH` in your node config directly as `--base-path=$DATA_FOLDER_PATH`.
Remove the `chain` directory to sync ONLY the parachain from scratch while keeping the relay chain DB (usually much bigger). This removes a little bit of the trust placed in the Centrifuge dev team by at least syncing the parachain from scratch, which is the data Axelar validators care about most.
Configure vald
In order for `vald` to connect to your local node, your `rpc_addr` should be exposed in vald's `config.toml`.
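A sketch of the relevant entry, following the `[[axelar_bridge_evm]]` pattern used for other EVM-connected chains in the Axelar docs; the chain name and port below are assumptions to verify for your setup:

```toml
[[axelar_bridge_evm]]
name = "centrifuge"
rpc_addr = "http://localhost:9933"
start-with-bridge = true
```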
Troubleshooting
Upgrading from an older version
If you are asked to update your node version, run through this checklist:
- Make sure your CLI arguments look like the ones in this doc; there could have been changes for a new version
- It is good practice to clone your `data` dir and set up a new node with the new version to make sure it runs, then either point to the new one or replace the old one with the exact same parameters
- Use the latest release, not the latest code from the main branch of the centrifuge-chain repo.
Error logs during syncing
During fast syncing it is expected to see the following error messages on the `[Relaychain]` side.
As long as the following logs are seen, everything is working correctly. Once the chain is fully synced, the error logs will go away.
Stalled Syncing
If the chain stops syncing, mostly due to unavailable blocks, please restart your node. In most cases the reason is that the p2p view of your node is bad at that moment, resulting in your node dropping peers and being unable to sync further. A restart helps in these cases.
Example logs will look like the following:
Changed bootnode or peer identities
It is common that bootnodes change their p2p identity, leading to the following logs:
These logs can be safely ignored.