Part 0: Guide to Staking Ethereum 2.0 on the Cloud (Azure, Lighthouse)

Randy
15 min read · Mar 14, 2021

This guide was last updated as of March 7th, 2021.

This is Part 0 of a step-by-step guide that covers the basics of deploying an application to the cloud! Stay tuned for future parts!

This part is a step-by-step guide to staking on the Ethereum 2.0 Mainnet using the Lighthouse client. We will NOT be setting up an Ethereum 1.0 Node in this guide, although we may add it into future guides. Look forward to some future guides that help automate more of the process!

WARNING: Do not follow this guide to completion until you’ve tried it on eth testnet! Feel free to comment if you’d like to see another guide for eth testnet!

This guide is primarily based on the following technologies: Microsoft Azure, Ubuntu Server 18.04 LTS, and the Lighthouse Ethereum 2.0 client.

Acknowledgements

A ton of credit for this guide goes to Somer Esat and Coin Cashew’s Eth Staking Guides, as well as to various Azure guides and to various folks who helped review my setup. Shoutout also to the Lighthouse Discord channel! Super helpful.

Disclaimer

The material and information contained in this guide is for general information purposes only. You should not rely upon the material or information on the website as a basis for making any business, legal, or any other decisions. Staking Ethereum is an inherently risky activity, thus the author makes no guarantees or warranties about the information contained in this guide.

Full disclaimer found on the bottom of the guide.

Author’s Note

The point of staking Ethereum is to increase decentralization, to ensure that there is as little downtime of the system as possible. Running nodes on Azure can help create high uptime nodes, but it is inherently NOT decentralized. If Azure were to encounter any outages, all validators running on Azure would be subject to outages. If enough nodes were running on Azure, this could result in a high percentage of the network being down. Therefore, the author recommends running Ethereum 2.0 Validator nodes on personal hardware if possible! This guide is intended for those who do not have access to stable internet access, reliable electricity, or do not have the space/resources to have their own hardware setup.

Support

Check out Somer Esat and Coin Cashew’s guides linked above! Also feel free to hop into the Lighthouse Discord Server or CoinCashew Discord Server. You can find me on both as @ssh.randy.

Requirements

PuTTY or a terminal that can SSH into a remote machine.

An Azure account!

Metamask Wallet and 32 Ethereum.

Experience Requirements

This guide assumes that you’ve attempted CoinCashew’s Testnet Guide, have a basic competence using Linux, have a working knowledge of Ethereum, and a working knowledge of what it means to stake Ethereum.

Overview

By the end of this guide, you’ll have built a virtual machine hosted on Azure with two main components: a validator client and a beacon chain client.

Validator Client - Responsible for proposing new blocks and making attestations in the beacon chain and shard chains.

Beacon Chain Client - Responsible for managing validators and their stakes; nominating the chosen block proposer for each shard at each step; organising validators into committees to vote on the proposed blocks; applying the consensus rules; applying rewards and penalties to validators; and, being an anchor point on which the shards register their states to facilitate cross-shard transactions

Azure Virtual Machine - This is where we’ll be running the above clients! We will use Microsoft Azure’s cloud to run our validator node. This gives us a bit more resilience against power and internet outages, and removes the need for space to host a physical machine. The main downside to this approach is that it does not add as much redundancy to the Ethereum network: if Azure goes down, all nodes hosted on Azure will probably go down with it, which leads to higher penalties and more vulnerability on the Ethereum blockchain.

Step 0: Create Azure VM

If you’re already comfortable with setting up Azure VMs, feel free to skip this part of the guide. In this portion of the guide, we’ll do a step-by-step walkthrough of how to set up an Azure VM, give it its own SSH keypair, and create and attach a data disk to it.

Creating a Resource Group
Sign into your Azure account, and create a Resource Group

Setting Up an SSH Key
Follow this guide to set up your own public/private keypair, so that you can SSH into your machine.

Then, from your Resource Group, select the “Add” button and select SSH key.

Make sure to upload the public key you just created by copy-pasting the contents of the public key file, then create the SSH key resource.

Creating the Virtual Machine

Go back to your resource group, and select the “Add” button again. Then, type “ubuntu” into the search bar, and select the “Ubuntu Server 18.04 LTS” option.

While setting up your VM, select the size Standard_B2s — 2 vcpus 4GiB memory.

Next, set up your SSH key. Make sure to select “Use existing key stored in Azure”, and use the stored key you created earlier:

Also, select “None” for “Public inbound ports”. We’ll whitelist the SSH port later.

After configuring port rules, you’ll have to select an OS disk type, and create and attach a data disk. For the OS disk type, choose the “Premium SSD”. You could use a regular SSD, but the OS disk will be relatively small and you want it to be performant.

Select “Create and attach a new disk” in order to select a Data Disk to attach to your VM

Click on “Change size”, and select either the 256 GB or 512 GB Standard SSD

Under the “Network Interface” menu, keep the defaults, and make sure “None” is selected for Public inbound ports.

In the management menu, disable “Auto-shutdown”, as you don’t want your VM to shut down at night.

Finally, click on “Review + create” to review your VM settings!

Confirm that your settings look something like this. Once confirmed, click “Create”

Voila! You’ve created your Azure VM and Data Disk!

Step 1: Setup Network and Port Rules

In this step, we’ll set up our Azure network rules. These will allow us to SSH into the VM, and open up the ports we’ll need for Grafana, and for Lighthouse to communicate with other validators.

First, use whatismyipaddress to find your IP address.

Then, navigate over to the VM that you just created. Select the “Networking” option on the left

Once selected, you should be brought to the Networking page for your VM. Select “Add inbound port rule”. We’ll be doing this a few times, to add rules that will allow you to SSH into your machine, as well as to allow Lighthouse to communicate with other Eth 2.0 clients, and finally to allow for you to view Grafana metrics through your browser.

First, add an inbound security rule for the SSH service. Select Source as “IP Addresses”, and in the Source IP addresses field, input the IP address you found from the previous step.

You technically do not have to restrict SSH to your IP address, but it helps harden your SSH port against attackers, as they would need access to the specific IP address provided in order to access the VM.

Repeat the same steps as above, but select Source as “Any”, to open the ports Lighthouse and Grafana need (by default, Lighthouse uses port 9000 TCP/UDP for peer-to-peer traffic, and Grafana serves its dashboard on port 3000 TCP).

Once you’ve added your new rules, they should look like this:

Step 2: Prepare VM For Long Term Use

SSH into your machine for the first time
At this point, you should be able to SSH directly into your VM! You can find your “Public IP address” in the Overview or Networking menu of your VM. Once found, run the following command to SSH in. Remember to use the private keyfile you generated earlier in Step 0.

ssh -i <path-to-private-keyfile> <username>@<public-ip-address>

Disable SSH Password Authentication and Root Login
From here, you should disable ssh password authentication and root login.

sudo nano /etc/ssh/sshd_config

Change these two lines: if they say “yes”, modify them to “no”; if either line is commented out or missing, add it.

PasswordAuthentication no
PermitRootLogin no
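The edits to sshd_config only take effect once the SSH daemon reloads them. A quick sketch of checking and applying the change (assuming a systemd-based Ubuntu image like the one we created above):

```shell
# Validate the config file before applying it; a typo here can lock you out
sudo sshd -t

# Restart the daemon so the new settings take effect
sudo systemctl restart sshd
```

Keep your current SSH session open, and verify from a second terminal that you can still log in with your key before closing it.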

Create Partition and Attach Disk
We will be following this Azure manual for attaching our Data Disk to our VM, and mounting it to a directory called /datadrive.

First, run lsblk to see the name of your data disk assigned at startup.

You should see an output like this:

user@rare-steak-mainnet:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 29.9G 0 part /
├─sda14 8:14 0 4M 0 part
└─sda15 8:15 0 106M 0 part /boot/efi
sdb 8:16 0 256G 0 disk
sdc 8:32 0 32G 0 disk
└─sdc1 8:33 0 32G 0 part /mnt
sr0 11:0 1 628K 0 rom

We see here that “sdb” is the 256 GB Disk we attached earlier in Step 0 of the tutorial.

Run these three commands to partition the new disk. Only do this step if the disk is empty and you are creating a new partition; partitioning will destroy any existing data.

sudo parted /dev/sdb --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdb1
sudo partprobe /dev/sdb1

Mount the disk:

sudo mkdir /datadrive
sudo mount /dev/sdb1 /datadrive

Then, to ensure that the drive is remounted automatically after a reboot, we must add it to the /etc/fstab file. In order to do this, first run sudo blkid to find the UUID of our new partition.

The output will look something like this:

/dev/sda1: LABEL="cloudimg-rootfs" UUID="11111111-1b1b-1c1c-1d1d-1e1e1e1e1e1e" TYPE="ext4" PARTUUID="1a1b1c1d-11aa-1234-1a1a1a1a1a1a"
/dev/sda15: LABEL="UEFI" UUID="BCD7-96A6" TYPE="vfat" PARTUUID="1e1g1cg1h-11aa-1234-1u1u1a1a1u1u"
/dev/sdb1: UUID="22222222-2b2b-2c2c-2d2d-2e2e2e2e2e2e" TYPE="xfs" PARTLABEL="xfspart" PARTUUID="1a2b3c4d-01"
/dev/sda14: PARTUUID="2e2g2cg2h-11aa-1234-1u1u1a1a1u1u"
/dev/sdc1: UUID="33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e" TYPE="ext4" PARTUUID="c1c2c3c4-1234-cdef-asdf3456ghjk"

Then, run sudo nano /etc/fstab to open up your fstab file for editing, and add the following line to your fstab file. Make sure to use the UUID that corresponds to the partition you just created. In our case, we would use the one corresponding to sdb1.

UUID=22222222-2b2b-2c2c-2d2d-2e2e2e2e2e2e   /datadrive   xfs   defaults,nofail   1   2
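Before rebooting, you can catch a bad fstab entry safely. One way to sketch this (assuming the disk is currently mounted at /datadrive):

```shell
# Unmount the data drive, then re-mount everything listed in /etc/fstab;
# an error here means the fstab entry is wrong and should be fixed before rebooting
sudo umount /datadrive
sudo mount -a

# Confirm the data disk came back at the expected mountpoint
findmnt /datadrive
```

If `mount -a` errors out, the `nofail` option in our entry will at least let the VM boot, but fix the entry before relying on it.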

You can now use this command to validate your disk and mountpoint:

lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"

Try rebooting your VM, and validate that the disk is still mounted using the above command. The device name (e.g. sdb) might change after a reboot, but the partition should still be mounted at /datadrive.

Give SSH User File Permission Ownership for Mounted Directory

Run these commands to give your SSH user ownership of the directory you created in the previous step.

sudo chmod --reference=/home/<user> /datadrive
sudo chown --reference=/home/<user> /datadrive
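If you’re unsure what `--reference` does, you can try it first on throwaway directories before running it with sudo (the /tmp paths below are purely illustrative):

```shell
# Create a "reference" dir and a target dir, then copy the reference's mode onto the target
mkdir -p /tmp/refdemo/home_user /tmp/refdemo/datadrive
chmod 750 /tmp/refdemo/home_user
chmod --reference=/tmp/refdemo/home_user /tmp/refdemo/datadrive

# The target now has the same permission bits as the reference
stat -c '%a' /tmp/refdemo/datadrive    # prints 750
```

`chown --reference` works the same way, copying the owner and group instead of the permission bits.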

Extra Security: Setup fail2ban

Run these commands to install fail2ban

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y fail2ban

Then, run this command to configure your jail. This will set a maximum number of retries for any IP address trying to SSH into your VM.

sudo nano /etc/fail2ban/jail.local

Paste the following:

[sshd]
enabled = true
port = 22
filter = sshd
logpath = /var/log/auth.log
maxretry = 5

Now start up fail2ban

sudo systemctl start fail2ban
sudo systemctl enable fail2ban
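To confirm the jail is actually watching your SSH port, fail2ban ships a client you can query (a quick check, run on the VM):

```shell
# Show whether the sshd jail is active, plus currently/total banned IPs
sudo fail2ban-client status sshd
```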

Step 3: Setup Mnemonic and Signing/Deposit Keys

Setup your validator keys. If you want to be completely secure, look at Coin Cashew’s tutorial under Step 2 and click on the tab “Advanced — Most Secure” for the most secure way to generate keys.

Otherwise, just directly use the ethereum foundation deposit tool. Instructions will be below.

Run these commands to download the eth2.0 deposit client:

cd $HOME
wget https://github.com/ethereum/eth2.0-deposit-cli/releases/download/v1.1.0/eth2deposit-cli-ed5a6d3-linux-amd64.tar.gz

Run this command to validate the checksum of the deposit client:

echo "2107f26f954545f423530e3501ae616c222b6bf77774a4f2743effb8fe4bcbe7 *eth2deposit-cli-ed5a6d3-linux-amd64.tar.gz" | shasum -a 256 --check
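If `shasum` isn’t installed (it comes with Perl), coreutils’ `sha256sum` behaves the same way. As a self-contained illustration of how the check works, using a throwaway file and a known hash (demo values only, not the real release):

```shell
# Write a file with known contents, then verify it against its SHA-256 digest
printf 'hello\n' > /tmp/checksum-demo.txt
echo "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  /tmp/checksum-demo.txt" \
  | sha256sum --check
# prints: /tmp/checksum-demo.txt: OK
```

If the file had been tampered with, the check would print FAILED and exit non-zero, which is exactly why we verify the release tarball before extracting it.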

Once validated, run these commands to setup the eth2 deposit client

tar -xvf eth2deposit-cli-ed5a6d3-linux-amd64.tar.gz
mv eth2deposit-cli-ed5a6d3-linux-amd64 eth2deposit-cli
rm eth2deposit-cli-ed5a6d3-linux-amd64.tar.gz
cd eth2deposit-cli

Then, run the deposit client to create a new mnemonic.

./deposit new-mnemonic --chain mainnet

Choose a KEYSTORE password. Write down the mnemonic and keep it OFFLINE.

Keep your mnemonic OFFLINE, preferably on a piece of paper or metal seed. Keep multiple copies if possible, keep them safe.

Make an offline backup of your validator_keys directory. Use SCP to extract the keys to your local machine, and store them offline on a thumbdrive or external hard drive.

Step 4: Set Up Lighthouse Beacon Chain

Set up dependencies for Lighthouse. Always make sure to double check all URLs when using curl

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
sudo apt-get update
sudo apt install -y git gcc g++ make cmake pkg-config libssl-dev

Build Lighthouse from source. Make sure to check that the github repo points to the official lighthouse version, and check to see what is in the latest stable release.

mkdir /datadrive/git
cd /datadrive/git
git clone https://github.com/sigp/lighthouse.git
cd lighthouse
git fetch --all && git checkout stable && git pull
make
lighthouse --version

Import validator keys generated in Step 3

lighthouse account validator import --network mainnet --directory=$HOME/eth2deposit-cli/validator_keys --datadir /datadrive/.lighthouse/

Create a data directory for beacon-chain and create a least-privileged account to run the beacon-chain

mkdir /datadrive/.lighthouse/beacon
sudo adduser --system --no-create-home beacon-chain
sudo chown -R beacon-chain /datadrive/.lighthouse/beacon/

Create a beacon-chain.service file for systemctl. Systemctl is the systemd utility we will use so that if/when the VM is reset, the beacon chain begins running on its own without user intervention.

Note: For eth1, we are using the public eth.cloud.ava endpoint, and as a backup we’ve set up our own infura API. Set up your own Infura endpoint here. Redundancy will help prevent your validator from missing attestations when ava is down. For maximum redundancy, you should create your own eth1 node. Please leave a message in the comments if you’re interested in a followup tutorial!

cat > $HOME/beacon-chain.service << EOF
# The eth2 beacon chain service (part of systemd)
# file: /etc/systemd/system/beacon-chain.service
[Unit]
Description = eth2 beacon chain service
Wants = network-online.target
After = network-online.target
[Service]
User = beacon-chain
ExecStart = $(which lighthouse) bn --staking --metrics --network mainnet --datadir /datadrive/.lighthouse --eth1-endpoints https://mainnet.eth.cloud.ava.do,https://mainnet.infura.io/v3/<URL>
Restart = on-failure
[Install]
WantedBy = multi-user.target
EOF

Move your service file and give it the correct permissions

sudo mv $HOME/beacon-chain.service /etc/systemd/system/beacon-chain.service
sudo chmod 644 /etc/systemd/system/beacon-chain.service

Enable your service to start up on system reset

sudo systemctl daemon-reload
sudo systemctl enable beacon-chain
sudo systemctl start beacon-chain
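Two quick health checks worth running on the VM at this point:

```shell
# Confirm the unit is running, and that it is enabled to start at boot
sudo systemctl status beacon-chain --no-pager
sudo systemctl is-enabled beacon-chain
```

`status` should report “active (running)”, and `is-enabled` should print “enabled”.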

Validate that the beacon-chain is running! It should start syncing to eth1, and after a while you should see a message like this:

journalctl --unit=beacon-chain -f

Mar 14 19:11:17.001 INFO Synced                                  slot: 743754, block: 0x68c0…a482, epoch: 23242, finalized_epoch: 23240...

Step 5: Set Up Lighthouse Validator

Once again, set up a directory for your validator to run in, and create an account to run the validator.

sudo adduser --system --no-create-home validator
sudo chown -R validator /datadrive/.lighthouse/validators/

Then, create your service file so that your validator starts up on reset.

cat > $HOME/validator.service << EOF
# The eth2 validator service (part of systemd)
# file: /etc/systemd/system/validator.service
[Unit]
Description = eth2 validator service
Wants = network-online.target beacon-chain.service
After = network-online.target
[Service]
User = validator
ExecStart = $(which lighthouse) vc --network mainnet --graffiti "<INSERT GRAFFITI HERE>" --metrics --datadir /datadrive/.lighthouse
Restart = on-failure
[Install]
WantedBy = multi-user.target
EOF

Move your validator to your systemd directory and enable it

sudo mv $HOME/validator.service /etc/systemd/system/validator.service
sudo chmod 644 /etc/systemd/system/validator.service
sudo systemctl daemon-reload
sudo systemctl enable validator
sudo systemctl start validator

View logs here! Before you make any deposits of eth, make sure your node is fully synced.

journalctl --unit=validator -f

Step 6: Sign Up to be a Validator at the Launchpad

Navigate to the Launchpad website in order to become a validator. Make sure you’re on the website for Mainnet

Make sure to back up your keystore file and deposit file offline somewhere before attempting the next steps.

Enter the number of validators you’d like to run, and MAKE SURE you have a hard copy of your mnemonic. If you lose your mnemonic, you will NOT be able to generate your withdrawal keys in the future.

When you get to this step of the process, upload the Deposit Data file you generated in Step 3. You will need 32 eth in your personal wallet in order to make the final deposit.

Connect your wallet, and click “Initiate The Transaction”

Note: Before confirming the transaction, MAKE SURE that the url of the deposit contract is https://launchpad.ethereum.org and MAKE SURE the address being sent to is:

0x00000000219ab540356cBB839Cbe05303d7705Fa

You can view the address on etherscan here. Make sure you see 32 eth Deposit Contracts in its history, and cross check with other websites that this address is the correct address. If you send your eth to the wrong address, you will NOT be able to retrieve it.

Voila! Your validator should now be in the queue, and in a few days will be operational.

Step 7: Monitoring, System Utilization, etc.

Set up chrony to ensure your validator doesn’t get clock skew
Follow instructions here

Setup email alerts on Validator

  1. Visit https://beaconcha.in/
  2. Sign up for an account
  3. Verify your email
  4. Search for your validator’s public address (found in the keystore file, and the deposit_data file)
  5. Add validators to your watchlist by clicking the bookmark symbol

Check CPU Utilization

Navigate to “Metrics” under “Monitoring” on the VM’s side panel

Select “Percentage CPU” under Metric

Make sure that it hovers below 80% utilization. If it’s going above 80%, consider upgrading your VM instance size.

Check Disk Utilization

Run these commands on /datadrive and /home to check the amount of disk space used. Make sure both are well below the size of the Data Disk and OS Disk.

sudo du -sh /datadrive
sudo du -sh /home
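Note that `du` totals what a directory tree uses, while `df` shows how much space is left on each filesystem, which is usually the number you actually care about. On the VM you would also pass /datadrive; the sketch below just checks the OS disk:

```shell
# Show size, used, and available space for the root filesystem in human-readable units
df -h /
```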

Check memory usage

free -m

Full Disclaimer

The material and information contained in this guide is for general information purposes only. You should not rely upon the material or information on the website as a basis for making any business, legal, or any other decisions.

While the author endeavors to keep information up to date and correct, the author makes no representations or warranties of any kind, expressed or implied about the completeness, accuracy, reliability, suitability, or availability with respect to the website or the information, products, services, or related graphics contained on the website for any purpose. Any reliance you place on such material is therefore strictly at your own risk.

