Provisioning an EKS cluster with CAPI

In an earlier post I described how to use KinD to run EKS-Distro on your local machine. In this post, I will explain how to use that cluster to bootstrap an EKS cluster in the AWS cloud using the Cluster API (CAPI), and share a few things I learned along the way.

Why you should care about the Cluster API

At this point, you may be thinking to yourself, “Why do I need another way to provision and manage the lifecycle of an EKS cluster? I can already use eksctl, Terraform, CloudFormation, Pulumi, the Cloud Development Kit (CDK), and so on.” It really boils down to consistency. The Cluster API provides a consistent, declarative way to deploy and manage Kubernetes clusters across a variety of different environments. This is largely possible because the Cluster API establishes a common set of schemas (CRDs) and a controller framework that are applicable across providers. Furthermore, EKS Anywhere will likely leverage CAPI to bootstrap clusters into VMware and bare metal environments. Having a consistent, repeatable way to deploy and manage clusters across these different environments will ultimately help simplify operations. For example, imagine using GitOps for cluster lifecycle management in addition to configuration.
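To make the declarative model concrete, here is a trimmed sketch of the kind of CAPA object we will generate later in this post: an AWSManagedControlPlane that the CAPA controllers reconcile into an actual EKS control plane. The name and field values below are illustrative placeholders, not a complete, working manifest:

# Illustrative sketch only; values are placeholders.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: AWSManagedControlPlane
metadata:
  name: managed-test-control-plane
spec:
  region: us-east-2     # AWS region for the EKS control plane
  sshKeyName: default   # EC2 key pair used for node SSH access
  version: v1.17.0      # Kubernetes version for the control plane

Because clusters are expressed as plain Kubernetes objects like this, you can store them in Git and apply them with the same tooling you use for any other manifest.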

Getting started

In October 2020, Weaveworks published a blog that walked through how to create an EKS cluster using the Cluster API Provider for AWS (CAPA). The steps have largely stayed the same, with a couple of minor exceptions that I will describe below. I am providing the instructions here simply for convenience. If you want additional information about each step, please read the Weaveworks blog.

  1. Create a KinD cluster. See my previous post on running EKS-D with KinD.
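If you don't need the EKS-D node image, a vanilla KinD cluster also works as a CAPI management cluster. For example (the cluster name here is arbitrary):

kind create cluster --name capi-mgmt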
  2. Create a file called eks.config with the following contents:
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1alpha1
kind: AWSIAMConfiguration
spec:
  bootstrapUser:
    enable: true
  eks:
    enable: true
    iamRoleCreation: true # Set to true if you plan to use the EKSEnableIAM feature flag to enable automatic creation of IAM roles
    defaultControlPlaneRole:
      disable: false # Set to false to enable creation of the default control plane role
  3. Export your AWS credentials as environment variables:

export AWS_REGION=us-east-2 # This is used to help encode your environment variables
export AWS_ACCESS_KEY_ID=<access-key-for-bootstrap-user>
export AWS_SECRET_ACCESS_KEY=<secret-access-key-for-bootstrap-user>
export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.

  4. Run clusterawsadm to create a CloudFormation stack that provisions the IAM resources CAPA needs:

clusterawsadm bootstrap iam create-cloudformation-stack --config eks.config

Among other things, the stack creates an IAM policy for the CAPA controllers similar to the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:AllocateAddress",
        "ec2:AssociateRouteTable",
        "ec2:AttachInternetGateway",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateInternetGateway",
        "ec2:CreateNatGateway",
        "ec2:CreateRoute",
        "ec2:CreateRouteTable",
        "ec2:CreateSecurityGroup",
        "ec2:CreateSubnet",
        "ec2:CreateTags",
        "ec2:CreateVpc",
        "ec2:ModifyVpcAttribute",
        "ec2:DeleteInternetGateway",
        "ec2:DeleteNatGateway",
        "ec2:DeleteRouteTable",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteSubnet",
        "ec2:DeleteTags",
        "ec2:DeleteVpc",
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeAddresses",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeInstances",
        "ec2:DescribeInternetGateways",
        "ec2:DescribeImages",
        "ec2:DescribeNatGateways",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeNetworkInterfaceAttribute",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs",
        "ec2:DescribeVpcAttribute",
        "ec2:DescribeVolumes",
        "ec2:DetachInternetGateway",
        "ec2:DisassociateRouteTable",
        "ec2:DisassociateAddress",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyNetworkInterfaceAttribute",
        "ec2:ModifySubnetAttribute",
        "ec2:ReleaseAddress",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "tag:GetResources",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DescribeTags",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:RemoveTags"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    },
    {
      "Condition": {
        "StringLike": {
          "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
        }
      },
      "Action": [
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": [
        "arn:*:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing"
      ],
      "Effect": "Allow"
    },
    {
      "Condition": {
        "StringLike": {
          "iam:AWSServiceName": "spot.amazonaws.com"
        }
      },
      "Action": [
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": [
        "arn:*:iam::*:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "iam:PassRole"
      ],
      "Resource": [
        "arn:*:iam::*:role/*.cluster-api-provider-aws.sigs.k8s.io"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "secretsmanager:CreateSecret",
        "secretsmanager:DeleteSecret",
        "secretsmanager:TagResource"
      ],
      "Resource": [
        "arn:*:secretsmanager:*:*:secret:aws.cluster.x-k8s.io/*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "ssm:GetParameter"
      ],
      "Resource": [
        "arn:aws:ssm:*:*:parameter/aws/service/eks/optimized-ami/*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "iam:GetRole",
        "iam:ListAttachedRolePolicies"
      ],
      "Resource": [
        "arn:aws:iam::*:role/*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "iam:GetPolicy"
      ],
      "Resource": [
        "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters",
        "eks:CreateCluster",
        "eks:TagResource",
        "eks:UpdateClusterVersion",
        "eks:DeleteCluster",
        "eks:UpdateClusterConfig",
        "eks:UntagResource"
      ],
      "Resource": [
        "arn:aws:eks:*:*:cluster/*"
      ],
      "Effect": "Allow"
    },
    {
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "eks.amazonaws.com"
        }
      },
      "Action": [
        "iam:PassRole"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    }
  ]
}
  5. You can view the resources the stack created with the following command:

aws cloudformation describe-stack-resources --stack-name cluster-api-provider-aws-sigs-k8s-io --region us-east-2 --output table

  6. Generate a base64-encoded credentials profile for clusterctl to pass to the CAPA controllers:
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
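The encoded value is simply an AWS credentials profile built from the variables you exported earlier. If you are curious, you can inspect it:

echo $AWS_B64ENCODED_CREDENTIALS | base64 --decode
# Prints an INI-style profile containing your access key, secret key,
# and (if set) session token.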

Enabling CAPA

Run the following commands to install the Cluster API Provider for AWS with EKS support:

export EXP_EKS=true
export EXP_EKS_IAM=true
export EXP_EKS_ADD_ROLES=true
clusterctl init -b kubeadm:v0.3.19 -c kubeadm:v0.3.19 --core cluster-api:v0.3.19 --infrastructure=aws
Once the initialization completes, verify that the providers were installed:

kubectl get providers -A

NAMESPACE                       NAME                    TYPE                     PROVIDER      VERSION   WATCH NAMESPACE
capa-eks-bootstrap-system       bootstrap-aws-eks       BootstrapProvider        aws-eks       v0.6.6
capa-eks-control-plane-system   control-plane-aws-eks   ControlPlaneProvider     aws-eks       v0.6.6
capa-system                     infrastructure-aws      InfrastructureProvider   aws           v0.6.6
capi-kubeadm-bootstrap-system   bootstrap-kubeadm       BootstrapProvider        kubeadm       v0.3.19
capi-system                     cluster-api             CoreProvider             cluster-api   v0.3.19
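You can also confirm that the provider controllers themselves are running:

kubectl get pods -A | grep cap
# Expect running pods in the capa-* and capi-* namespaces listed above.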
Next, save the following ClusterRole, which defines the RBAC permissions for the EKS control plane controller, to a file named control-plane-manager-role.yaml:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: capa-eks-control-plane-system-capa-eks-control-plane-manager-role
  labels:
    cluster.x-k8s.io/provider: control-plane-aws-eks
    clusterctl.cluster.x-k8s.io: ''
rules:
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - ''
    resources:
      - secrets
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - cluster.x-k8s.io
    resources:
      - clusters
      - clusters/status
  - verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    apiGroups:
      - controlplane.cluster.x-k8s.io
    resources:
      - awsmanagedcontrolplanes
  - verbs:
      - get
      - patch
      - update
    apiGroups:
      - controlplane.cluster.x-k8s.io
    resources:
      - awsmanagedcontrolplanes/status
  - verbs:
      - create
      - get
      - list
      - patch
      - watch
    apiGroups:
      - ''
    resources:
      - events
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - infrastructure.cluster.x-k8s.io
    resources:
      - awsclustercontrolleridentities
      - awsclusterroleidentities
      - awsclusterstaticidentities
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - infrastructure.cluster.x-k8s.io
    resources:
      - awsmachinepools
      - awsmachinepools/status
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - infrastructure.cluster.x-k8s.io
    resources:
      - awsmachines
      - awsmachines/status
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - infrastructure.cluster.x-k8s.io
    resources:
      - awsmanagedclusters
      - awsmanagedclusters/status
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - infrastructure.cluster.x-k8s.io
    resources:
      - awsmanagedmachinepools
      - awsmanagedmachinepools/status

Then apply it to the management cluster:
kubectl apply -f control-plane-manager-role.yaml
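To confirm the updated rules are in place, you can read the role back:

kubectl get clusterrole capa-eks-control-plane-system-capa-eks-control-plane-manager-role -o yaml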

Creating the EKS cluster

  1. Generate the YAML for the eks cluster flavor. First, set the following environment variables, adjusting the values for your environment:
export AWS_REGION=us-east-1
export AWS_SSH_KEY_NAME=default
export KUBERNETES_VERSION=v1.17.0
export WORKER_MACHINE_COUNT=1
export AWS_NODE_MACHINE_TYPE=t2.medium

  2. Then run clusterctl to generate the cluster manifest and apply it to the management cluster:

clusterctl config cluster managed-test --flavor eks > capi-eks.yaml
kubectl apply -f capi-eks.yaml
  3. Alternatively, if you want the cluster's worker nodes to run in an EKS managed node group, use the eks-managedmachinepool flavor instead:

clusterctl config cluster capi-eks-quickstart --flavor eks-managedmachinepool --kubernetes-version v1.17.3 --worker-machine-count=3 > capi-eks-quickstart.yaml
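Provisioning an EKS control plane can take several minutes. You can watch progress from the management cluster by querying the CAPI objects, for example:

kubectl get clusters
kubectl get awsmanagedcontrolplanes
kubectl get machines
# The cluster is usable once the control plane reports ready.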
  4. Retrieve the kubeconfig for the new cluster, which CAPI stores in a secret named <cluster-name>-user-kubeconfig:

kubectl --namespace=default get secret managed-test-user-kubeconfig \
-o jsonpath={.data.value} | base64 --decode \
> managed-test.kubeconfig
  5. Use the kubeconfig to verify that you can connect to the new cluster:

kubectl --kubeconfig managed-test.kubeconfig get pods -A

Conclusion

The Cluster API (CAPI) is yet another way to provision and manage the lifecycle of Kubernetes clusters across different environments. Its provider model allows different constituencies to independently add support for managed Kubernetes services such as EKS. As we've seen, CAPI leverages Kubernetes primitives, such as CRDs and reconciliation loops, to manage Kubernetes itself. With CAPI, you could conceivably use RBAC to restrict who can create clusters! And finally, given that EKS Anywhere (EKS-A) is likely to use CAPI in the future, understanding how CAPI works will help those who plan to adopt EKS-A down the road.
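As a sketch of that RBAC idea, a ClusterRole and ClusterRoleBinding like the following (the names and group are illustrative) would grant only members of a platform-team group the ability to create CAPI Cluster objects on the management cluster:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: capi-cluster-creator   # illustrative name
rules:
  - apiGroups:
      - cluster.x-k8s.io
    resources:
      - clusters
    verbs:
      - create
      - get
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: capi-cluster-creators  # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: capi-cluster-creator
subjects:
  - kind: Group
    name: platform-team        # illustrative group
    apiGroup: rbac.authorization.k8s.io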

Jeremy Cowan is a Principal Container Specialist at AWS