
Running a Polaris Ethereum Cosmos-SDK Chain on AWS

The recommended way to run fault-tolerant nodes that can scale to millions of requests per second is with an orchestration tool like Kubernetes. Another option is to run the node on an EC2 instance.

AWS offers Elastic Kubernetes Service (EKS), a managed service for running Kubernetes in the cloud. It lets us run lightweight pods in which we can deploy container images; on a failure, a replacement container automatically comes up. Each pod can also be attached to a volume that provides elastic disk storage. EKS also allows us to scale horizontally by adding more nodes/pods/containers to the network on demand.

Example EKS architecture

Currently, the application runs in a single container, so we deploy one container per pod. A pod is the smallest resource that Kubernetes can manage; it can hold one or more containers.
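
For illustration, the smallest possible manifest describes a single pod with one container; the image reference is a placeholder. In practice we will not create pods directly but manage them through a Deployment, shown later.

apiVersion: v1
kind: Pod
metadata:
  name: node
spec:
  containers:
    - name: node
      image: <container-registry>/<container-image>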

Before we can set up pods, we need to set up our EKS cluster -

Install AWS CLI and configure AWS Credentials

  1. Install the AWS CLI on your local machine by following the docs

  2. Configure CLI credentials by following the docs (see the example after this list)

  3. Verify credentials and installation

    aws sts get-caller-identity

    It should return the following -

    {
        "Account": "123456789012",
        "UserId": "AR#####:#####",
        "Arn": "arn:aws:sts::123456789012:assumed-role/role-name/role-session-name"
    }
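
One common way to complete step 2 is the interactive aws configure command, which prompts for your access key, secret key, default region, and output format (the values below are placeholders) -

    aws configure
    AWS Access Key ID [None]: <access-key-id>
    AWS Secret Access Key [None]: <secret-access-key>
    Default region name [None]: us-west-2
    Default output format [None]: json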

Install eksctl and kubectl

eksctl allows us to manage our EKS cluster, and kubectl allows us to manage Kubernetes resources.

  1. Install eksctl by following the docs

  2. Verify installation

    eksctl version

  3. Install kubectl by following the docs

  4. Verify installation

    kubectl version --client
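
Both commands print version information. The exact output depends on the versions you installed; it will look roughly like this (versions below are illustrative) -

    eksctl version
    0.150.0

    kubectl version --client
    Client Version: v1.27.1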

Set up EKS cluster

EKS is a managed Kubernetes cluster, and AWS will manage our nodes for us. Clusters can also have controllers/add-ons that help control and incorporate cloud-specific resources.

  1. We can create a sample cluster by using the following command -

    eksctl create cluster --name my-cluster --region region-code

    Note: you need to provide a cluster name and AWS region (e.g., us-west-2)

    This will create managed default AWS worker nodes for you to deploy your workloads. You can customize the worker nodes as well (see the sketch after this list). eksctl will also create a kubectl config file at ~/.kube/config

  2. Set up the AWS Load Balancer Controller by following the docs. This will help Kubernetes manage our AWS load balancers.
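
If you need more control over the worker nodes, eksctl can create the managed nodegroup with explicit options. A sketch, where the instance type and node counts are illustrative -

    eksctl create cluster \
      --name my-cluster \
      --region us-west-2 \
      --nodegroup-name standard-workers \
      --node-type m5.large \
      --nodes 2 \
      --nodes-min 1 \
      --nodes-max 3 \
      --managed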

View Kubernetes resources

  1. View your cluster nodes (which were deployed by eksctl)

    kubectl get nodes -o wide

    The response will be as follows -

    NAME                                           STATUS   ROLES    AGE   VERSION                INTERNAL-IP      EXTERNAL-IP      OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
    ip-192-168-3-224.us-west-2.compute.internal    Ready    <none>   45h   v1.24.10-eks-48e63af   192.168.3.224    192.168.3.224    Amazon Linux 2   5.10.165-143.735.amzn2.x86_64   containerd://1.6.6
    ip-192-168-36-235.us-west-2.compute.internal   Ready    <none>   45h   v1.24.10-eks-48e63af   192.168.36.235   192.168.36.235    Amazon Linux 2   5.10.165-143.735.amzn2.x86_64   containerd://1.6.6
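
You can also confirm that kubectl is pointing at the right cluster and that the control plane is reachable -

    kubectl cluster-info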

Create kubernetes resources

Kubernetes allows us to deploy a few types of workloads and services. Workloads control the pods, and Services control access to the pods. For our simple case, we will use a workload of type Deployment and a service of type LoadBalancer. Our container is deployed inside the pod. More information can be found in the docs.

  1. Set up a namespace

    Namespaces ensure that we can use the same cluster to deploy multiple types of applications. We will create a namespace testnet for our chain.

    kubectl create namespace testnet

  2. Create Deployment

For our deployment, we will create a deployment.yaml file that looks something like this -

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node
  namespace: testnet
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
    spec:
      containers:
        - name: node
          image: <container-registry>/<container-image>
          ports:
            - name: apphttp
              containerPort: 1317
            - name: tendermintgrpc
              containerPort: 9090
            - name: tendermintrpc
              containerPort: 26657
            - name: tendermintpp
              containerPort: 26656

We are creating a Deployment with 1 replica, and each pod runs 1 container. The container is created from the image specified in the image field.

For more information on deployments, refer to this doc.
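
Assuming you saved the manifest above as deployment.yaml (the file name is just a convention), apply it and watch the rollout -

    kubectl apply -f deployment.yaml
    kubectl rollout status deployment/node -n testnet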

  3. Create Service

For our service, since we want our node to be accessible from the public internet, we will create it as a LoadBalancer. This will create an AWS Network Load Balancer (NLB).

apiVersion: v1
kind: Service
metadata:
  name: node-nlb
  namespace: testnet
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - name: apphttp
      port: 80
      targetPort: 1317
      protocol: TCP
    - name: tendermintrpc
      port: 26657
      targetPort: 26657
      protocol: TCP
    - name: tendermintpp
      port: 26656
      targetPort: 26656
      protocol: TCP
    - name: tendermintgrpc
      port: 9090
      targetPort: 9090
      protocol: TCP
  type: LoadBalancer
  selector:
    app: node

We are creating a service of type LoadBalancer. This will create a public load balancer and route requests received on port 80 to port 1317, where our JSON-RPC server is running.
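
Assuming the manifest above is saved as service.yaml (again, the file name is just a convention), you can apply it and then read the NLB hostname from the service status once AWS has provisioned it -

    kubectl apply -f service.yaml
    kubectl get service node-nlb -n testnet -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'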

  4. Verify workload and service

    We should validate that the resources are set up as expected. You can verify the running pods as follows -

    kubectl get pods -n testnet -o wide

    The status of the pod should be Running.

    You can verify that the service is running as follows -

    kubectl get services -n testnet

    You should also verify in the AWS console that the Network Load Balancer has been provisioned.

  5. Test RPC endpoints

    You can test the public RPC endpoint using the NLB URL -

    curl -X POST <load_balancer_url>/eth/rpc -H 'Content-Type: application/json' -d '{ "jsonrpc":"2.0", "method":"eth_blockNumber", "params":[], "id":1}'
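
    A healthy node returns a standard JSON-RPC response; the block number below is illustrative -

    {"jsonrpc":"2.0","id":1,"result":"0x1b4"}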