Guide: Setting up RPC failover with proxyd

This guide will go over how to set up proxyd on the same machine as your Orchestrator. Steps will differ if you’re setting up proxyd for remote access (you’ll need to configure your firewall).

OS: Linux
Difficulty level: Pretty freakin’ easy

Why use proxyd for your Livepeer node?
Arbitrum RPCs aren’t perfect and tend to experience downtime. Proxyd allows you to specify multiple RPCs and will automagically fail over to an alternative RPC if something goes wrong with your primary endpoint.

Big thanks to @jjassonn69 for coming up with this and @musicmank545 for the help getting everything set up.

Let’s begin!

Install Redis (database/cache):
sudo apt update
sudo apt install redis-server

Check to make sure it’s working:
sudo systemctl status redis-server
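You can also ping Redis directly (assuming the default install, which listens on localhost:6379):

redis-cli ping

It should answer with PONG.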

Working? Good. Moving on.

Clone repo:
sudo git clone https://github.com/ethereum-optimism/optimism.git

We’ll need go-lang for this:
sudo apt install golang-go
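Quick check that Go installed correctly:

go version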

Optional: You can move the proxyd directory out of the downloaded repo and remove the repo (we only need the proxyd directory):
sudo mv <path to repo>/optimism/proxyd <chosen directory>
sudo rm -R <path to repo>/optimism

Navigate to the directory you just moved proxyd to. For example, my proxyd files are located in /etc/proxyd.
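Continuing with that example path:

cd /etc/proxyd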

Run makefile:
sudo make proxyd
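Assuming the build succeeds, the compiled binary ends up at bin/proxyd inside this directory (the service file further down points at it). Quick check:

ls -l bin/proxyd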

Create proxyd config file:
sudo nano <your config file name>.toml

We’ll use the example config listed here (paste into your newly created config file):

# List of WS methods to whitelist.
ws_method_whitelist = [
  "eth_subscribe",
  "eth_call",
  "eth_chainId",
  "eth_getBlockByNumber",
  "eth_gasPrice",
  "eth_getBlockByHash",
  "eth_getLogs",
  "eth_getBalance",
  "eth_getCode",
  "eth_estimateGas",
  "eth_getTransactionCount",
  "eth_sendRawTransaction",
  "eth_getTransactionReceipt",
  "net_version"
]

# Enable WS on this backend group. There can only be one WS-enabled backend group.
ws_backend_group = "main"

[server]
# Host for the proxyd RPC server to listen on.
rpc_host = "local node ip"
# Port for the above.
rpc_port = 8181
# Host for the proxyd WS server to listen on.
ws_host = "local node ip"
# Port for the above
ws_port = 8282
# Maximum client body size, in bytes, that the server will accept.
max_body_size_bytes = 10485760

[redis]
# URL to a Redis instance.
url = "redis://localhost:6379"

[metrics]
# Whether or not to enable Prometheus metrics.
enabled = true
# Host for the Prometheus metrics endpoint to listen on.
host = "localhost"
# Port for the above.
port = 8383

[backend]
# How long proxyd should wait for a backend response before timing out.
response_timeout_seconds = 5
# Maximum response size, in bytes, that proxyd will accept from a backend.
max_response_size_bytes = 5242880
# Maximum number of times proxyd will try a backend before giving up.
max_retries = 5
# Number of seconds to wait before trying an unhealthy backend again.
out_of_service_seconds = 60

[backends]
# A map of backends by name.
[backends.infura]
# The URL to contact the backend at. Will be read from the environment
# if an environment variable prefixed with $ is provided.
rpc_url = "your local node RPC"
# The WS URL to contact the backend at. Will be read from the environment
# if an environment variable prefixed with $ is provided.
ws_url = "your local node WSS"
#username = ""
# An HTTP Basic password to authenticate with the backend. Will be read from
# the environment if an environment variable prefixed with $ is provided.
#password = ""
max_rps = 1000
max_ws_conns = 1
# Path to a custom root CA.
#ca_file = ""
# Path to a custom client cert file.
#client_cert_file = ""
# Path to a custom client key file.
#client_key_file = ""

[backends.alchemy]
rpc_url = "your alchemy RPC"
ws_url = "your alchemy WSS"
#username = ""
#password = ""
max_rps = 1000
max_ws_conns = 100

[backend_groups]
[backend_groups.main]
backends = ["infura","alchemy"]

# If the authentication group below is in the config,
# proxyd will only accept authenticated requests.
#[authentication]
# Mapping of auth key to alias. The alias is used to provide a human-
# readable name for the auth key in monitoring. The auth key will be
# read from the environment if an environment variable prefixed with $
# is provided. Note that you will need to quote the environment variable
# in order for it to be valid TOML, e.g. "$FOO_AUTH_KEY" = "foo_alias".
#secret = "test"

# Mapping of methods to backend groups.
[rpc_method_mappings]
eth_call = "main"
eth_chainId = "main"
eth_blockNumber = "main"
eth_getBlockByNumber = "main"
eth_gasPrice = "main"
eth_getBlockByHash = "main"
eth_getLogs = "main"
eth_getBalance = "main"
eth_getCode = "main"
eth_estimateGas = "main"
eth_getTransactionCount = "main"
eth_sendRawTransaction = "main"
eth_getTransactionReceipt = "main"
net_version = "main"

Let’s make some necessary edits:

rpc_host = "<ip where proxyd is running. For example: localhost>"
ws_host = "<this is the same as the rpc_host>"
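For example, with proxyd running on the same machine as your Orchestrator (as in this guide), these could simply be:

rpc_host = "localhost"
ws_host = "localhost"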

Scroll down to [backends].

We can see there are already two backends in the example config file, [backends.infura] and [backends.alchemy].
We can rename each backend to whatever we’d like (e.g. [backends.communitynode]), and we can add more backends as we see fit.

In each group we only need to specify a couple of values.
Change the rpc_url and the ws_url to point to your endpoint.
For example, an Alchemy rpc url looks like https://arb-mainnet.g.alchemy.com/v2/***

Note: If you’re using an RPC that doesn’t have a ws_url available, you can simply paste the rpc_url into the ws_url field.

If you change the backend names or add any new ones, make sure to adjust them in the [backend_groups] section (where it says backends = ["infura","alchemy"], adjust the names as necessary).
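As an illustration only (the name here is just an example and the URLs are placeholders), a renamed backend plus the matching group entry might look like this:

[backends.communitynode]
rpc_url = "your community node RPC"
ws_url = "your community node WSS"
max_rps = 1000
max_ws_conns = 100

[backend_groups]
[backend_groups.main]
backends = ["communitynode","alchemy"]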

Save the file.

Now we’ll create a service file to automate proxyd:

Create a service file in /etc/systemd/system:
sudo nano /etc/systemd/system/proxyd.service

Configure file:

[Unit]
Description=Proxyd

[Service]
User=root
WorkingDirectory=</path/to/proxyd directory>
Restart=always
RestartSec=5
ExecStart=</path/to/proxyd directory>/bin/proxyd </path/to/proxyd directory>/<your config file name>.toml
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=proxyd


[Install]
WantedBy=multi-user.target

Save the file.
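Since this is a brand new unit file, reload systemd so it picks it up, and (optionally) enable it so proxyd starts on boot:

sudo systemctl daemon-reload
sudo systemctl enable proxyd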

Start the service:
sudo systemctl start proxyd

Check to make sure it’s working:
sudo systemctl status proxyd
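Optionally, sanity-check the new endpoint with a quick JSON-RPC call before pointing Livepeer at it (adjust the host/port to whatever you set for rpc_host/rpc_port):

curl -X POST http://localhost:8181 -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'

If a backend answered, you should get back a result of 0xa4b1 (the Arbitrum One chain ID).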

Edit your Livepeer config and point the ethUrl flag to the proxyd RPC you created. For example:
ethUrl http://localhost:8181

If all went well your Orchestrator will start up without any errors and boom, you’re good to go!

Great guide, @Authority_Null!

Thanks for this guide! I was able to extend this as a fail-over mechanism between L2 and L1 nodes. The benefit is you can actually swap out L1 endpoints on your running arbitrum L2 instance without restarting arbitrum (which saves 2 hours), and can help with troubleshooting L1 endpoint issues.

You just need to add "eth_getTransactionByBlockHashAndIndex" to the whitelist, and eth_getTransactionByBlockHashAndIndex = "main" to the bottom of the config file.
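For reference, those two additions would look like this in the config file (remember the comma after the previous whitelist entry):

ws_method_whitelist = [
  # ...existing methods, each followed by a comma...
  "eth_getTransactionByBlockHashAndIndex"
]

[rpc_method_mappings]
# ...existing mappings...
eth_getTransactionByBlockHashAndIndex = "main"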

I used a timeout of 5 seconds and retry count of 2 to keep the interruption quick and found that it did not drop any streams. The Arbitrum node provides enough of a buffer to swap L1 endpoints with proxyd during runtime.
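In config terms, that corresponds to something like this in the [backend] section (assuming "retry count" maps to max_retries):

[backend]
response_timeout_seconds = 5
max_retries = 2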

Oh wow, very cool. How are you using proxyd for the Livepeer RPC and your L1 endpoint at the same time? Like how does proxyd know when to switch?

I have not yet tried to run proxyd on both L2-L1 and Livepeer-L2 at the same time.

I think you could do so by running a second service with a second configuration file, making sure there are no port conflicts. I will test this out later, but I did find that Livepeer would drop streams when it was failing over between L2 endpoints.
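A rough, untested sketch of that idea: a second config file (say proxyd-l1.toml, with its own rpc_port, ws_port and metrics port) plus a second unit file along these lines (paths and names are placeholders):

[Unit]
Description=Proxyd L1

[Service]
User=root
WorkingDirectory=/etc/proxyd
Restart=always
RestartSec=5
ExecStart=/etc/proxyd/bin/proxyd /etc/proxyd/proxyd-l1.toml
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=proxyd-l1

[Install]
WantedBy=multi-user.target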
