Running Calico on EKS

Jeremy Cowan
4 min read · Feb 13, 2019


In an earlier post, I described how to run the flannel CNI plugin on an EKS cluster. In this post, I’m going to explain how to run Tigera’s Calico CNI plugin on EKS.

Why Calico

Unlike overlay networks, which can add overhead and complicate troubleshooting, Calico only uses a lightweight encapsulation mechanism (IP-in-IP) when routing packets between AZs; traffic within a subnet is routed natively. It also comes with its own policy engine, which you can use to create tenant and stage separation or to reduce the attack surface within microservice-based applications.
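To give you a sense of the policy engine, here’s a sketch of a Calico NetworkPolicy (the namespace, labels, and port are illustrative) that only admits TCP traffic to api pods from frontend pods; you could create it with calicoctl once Calico is installed:

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: demo
spec:
  selector: app == 'api'
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: app == 'frontend'
    destination:
      ports:
      - 8080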

Preparing your environment

As with flannel, the first step to getting Calico running is to create a node-less cluster. When the cluster is ready, delete the aws-node daemonset.

kubectl delete ds aws-node -n kube-system

Next, create a single node etcd “cluster”. Instructions for creating the EKS and etcd clusters are covered in the Running Flannel on EKS post.

Once your etcd node is running, SSH to the instance and pull the calico/ctl image from Docker Hub.

docker pull calico/ctl:v3.5.1

Create an alias for running the Docker image, replacing <etcd_ip> with the private IP address of your etcd server.

alias calicoctl='docker run -e ETCD_ENDPOINTS=http://<etcd_ip>:2379 -v /home/core:/calico calico/ctl:v3.5.1'
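Before going any further, it’s worth verifying that the alias works and that calicoctl can reach the datastore:

calicoctl version

If etcd is reachable, the output should include cluster information in addition to the client version.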

Create a configuration manifest for the IPPool.

cat > ~/ippool.yaml << EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ippool-1
spec:
  cidr: 192.168.0.0/16
  ipipMode: CrossSubnet
  natOutgoing: true
EOF

Then create the IPPool using calicoctl.

calicoctl create -f /calico/ippool.yaml
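You can confirm the pool was created with:

calicoctl get ippool -o wide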

There are a few things worth mentioning about the configuration file you created. The first is the ipipMode key. By setting its value to CrossSubnet, you’re telling Calico to use encapsulation only when traffic traverses a VPC subnet boundary. This setting is necessary in environments where worker nodes are deployed onto subnets in different availability zones.

The value of the cidr key specifies the CIDR range of your pod network. When setting this value, be sure that it doesn’t overlap with an existing CIDR in your VPC.

Finally, AWS only performs outbound NAT on traffic that has the source address of an EC2 instance. By setting the natOutgoing key to true, you’re telling Calico to NAT all outbound traffic from containers hosted on your worker nodes, which is necessary if your containers are going to access resources outside of AWS.

Installing Calico

Now that your network configuration is stored in etcd, you’re ready to install the Calico plugin. Begin by applying the necessary RBAC policies to the cluster.

kubectl apply -f \
https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/rbac.yaml

Then download the manifest for Calico to your local machine.

curl \
https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/calico.yaml \
-O

Make the following edit to the file:

  • In the ConfigMap named calico-config, set the value of etcd_endpoints under the data object to the IP address and port of your etcd server, e.g. http://10.96.232.136:2379.
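After the edit, the relevant portion of the ConfigMap should look something like this (using the example address above):

kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  etcd_endpoints: "http://10.96.232.136:2379"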

Save the changes to the manifest and then apply it to your cluster.

kubectl apply -f calico.yaml 

By default, AWS performs source/destination checks against the traffic in your VPC and will drop packets when addresses are not recognized. In order for Calico to work properly, you’ll need to disable these checks on each of your workers. Rather than disabling the checks manually on every instance, you can update the instance userdata in your node autoscaling group’s launch configuration.

Find the launch configuration for your worker nodes in the EC2 console and insert the following into instance userdata:

INSTANCEID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# ${AWS::Region} is substituted by CloudFormation; if you edit the userdata
# directly, replace it with your cluster's region, e.g. us-west-2.
aws ec2 modify-instance-attribute --instance-id $INSTANCEID --no-source-dest-check \
--region ${AWS::Region}

Since this script block calls the EC2 ModifyInstanceAttribute API, you’ll need to add the ec2:ModifyInstanceAttribute action to the IAM policy assigned to the worker nodes.
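If you manage the role with the AWS CLI, a minimal (and deliberately broad) inline policy could be attached like this; the role and policy names are placeholders, and you may want to scope the Resource down to your worker instances:

aws iam put-role-policy \
  --role-name <worker-node-role> \
  --policy-name modify-instance-attribute \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"ec2:ModifyInstanceAttribute","Resource":"*"}]}'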

Once you’ve completed this step, increase the maximum and desired counts for your node autoscaling group to add nodes to your cluster.
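If you prefer the CLI to the console, you can scale the group with something like the following (substitute your node group’s autoscaling group name and desired counts):

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name <node-asg-name> \
  --max-size 3 \
  --desired-capacity 3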

Congratulations! You are now running the Calico CNI on EKS!

The latest version of Calico no longer requires you to use calicoctl to configure the network. Instead, all of the configuration can be added to the Calico manifest file. When the Calico daemon starts, the configuration will be persisted to etcd. If you’d like to use this method to configure Calico, begin by downloading Calico v3.5.

curl \
https://docs.projectcalico.org/v3.5/getting-started/kubernetes/installation/hosted/calico.yaml \
-O

To configure the pod network, run the commands below. Be sure to replace the POD_CIDR value with the CIDR range of your pod network.

POD_CIDR="100.64.0.0/16"
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
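A quick sanity check that the substitution took effect:

grep "$POD_CIDR" calico.yaml

This should show the CALICO_IPV4POOL_CIDR value set to your pod network’s CIDR.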

Open the file for editing and make the following changes; a consolidated view of the resulting settings follows the list:

  • In the daemonset named calico-node, change the CALICO_IPV4POOL_IPIP environment variable's value from Always to CrossSubnet.
  • In the daemonset named calico-node, add an environment variable named CALICO_IPV4POOL_NAT_OUTGOING and set it to true, for example:
    - name: CALICO_IPV4POOL_NAT_OUTGOING
      value: "true"
  • In the ConfigMap named calico-config, replace the etcd_endpoints value with the private IP and port of your etcd server, for example:
    etcd_endpoints: "http://<etcd_ip>:2379"
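After these edits (together with the sed substitution above), the relevant entries in the calico-node container's env list should look roughly like this:

- name: CALICO_IPV4POOL_CIDR
  value: "100.64.0.0/16"
- name: CALICO_IPV4POOL_IPIP
  value: "CrossSubnet"
- name: CALICO_IPV4POOL_NAT_OUTGOING
  value: "true"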

Save your changes and apply the manifest to your cluster.

kubectl apply -f calico.yaml

Disclaimer

Although these instructions will guide you through installing and configuring Calico on EKS, they are not suitable for production use. For a production deployment you will want at least three etcd nodes for redundancy and high availability. You’ll also want to configure SSL/TLS and certificate authentication for etcd.
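For reference, a hardened calico-config would point at multiple etcd members over HTTPS and supply client certificates, along these lines (the endpoint addresses are illustrative; the paths match the /calico-secrets mount used by the stock manifest):

data:
  etcd_endpoints: "https://10.0.1.10:2379,https://10.0.2.10:2379,https://10.0.3.10:2379"
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"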

Jeremy Cowan is a Principal Container Specialist at AWS.