
Deploy ArgoCD on AWS

Introduction

This tutorial covers how to set up ArgoCD in a multicluster AWS environment. Since it took me quite some time to figure out how to get this working, I would like to share a working approach with you.

If you copy and paste every command in this tutorial, you should end up with a working setup. I’ll try to explain the mechanics a little bit along the way, but to truly understand everything I recommend reading the AWS and ArgoCD documentation.

The setup we’ll be creating is the following:

  • A management cluster that will host ArgoCD.

  • An AWS IAM role “role/ArgoCD” that ArgoCD will assume.

  • A testing cluster that we’ll deploy a guestbook application into.

  • An AWS IAM role “role/Deployer” that has permissions to deploy applications in your testing cluster.

EKS Clusters hosted by AWS do cost money, so please be careful to clean up any resources you end up creating. At the end of the tutorial I’ll provide a couple of commands that will clean everything up.

I’ve tested all the commands in this tutorial on macOS; they will probably also work on Linux. On Windows you will have to rework some of the templating commands.

Prerequisites

This tutorial assumes you have the aws, eksctl, kubectl and argocd command-line tools installed, and that your AWS credentials have permission to create EKS clusters and IAM roles.

Steps to take

Before we start I want to give you a brief overview of the steps we will follow and the estimated time they will take to complete.

Create management cluster

  • Provision the management cluster (15 minutes)

  • Create AWS IAM Role (5 minutes)

Setting up ArgoCD

  • Install ArgoCD (5 minutes)

  • Patch ArgoCD (5 minutes)

Creating testing cluster

  • Provision the testing cluster (15 minutes)

  • Create AWS IAM Role (5 minutes)

Deploy Guestbook application

  • Register testing cluster (2 minutes)

  • Register guestbook application (5 minutes)

Cleanup resources (15 minutes)

Create management cluster

Provision the management cluster

We’ll create two clusters in a single AWS account. The configuration will also work with cross-account and multi-cluster setups, by setting the proper trust relationships between the IAM roles.

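A minimal sketch using eksctl; the cluster name management and the region are assumptions you can change to taste:

```
# Assumes eksctl is installed and AWS credentials are configured.
# Cluster name and region are placeholders; pick your own.
export AWS_REGION=eu-west-1

eksctl create cluster \
  --name management \
  --region "${AWS_REGION}" \
  --with-oidc
```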

The above command creates the management cluster and everything it needs to function (VPC, security groups, EC2 nodegroup, etc.). As you can imagine, this command can take quite some time to complete (10 to 20 minutes, so go get yourself a cup of coffee), but don’t worry: the rest of this tutorial won’t take as long.

We’ve created the management cluster with an OIDC provider. The AWS IAM Authenticator packaged with ArgoCD will use this provider to acquire a token, with which it can assume the AWS IAM role “role/ArgoCD” that we’ll create next.

Creating AWS IAM role

Now let’s create an AWS IAM role that ArgoCD can use. AWS has detailed documentation on this subject at: https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html. We’ll use the part described under “To create your IAM role with the AWS CLI” and adjust it to our needs.

First set our AWS account ID and the OIDC_PROVIDER of the management cluster as environment variables:

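Assuming the cluster name used above, something like:

```
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)

# The OIDC provider is the cluster's issuer URL without the https:// prefix.
export OIDC_PROVIDER=$(aws eks describe-cluster --name management --region "${AWS_REGION}" \
  --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
```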

Create a trust.json file with the variables you set. In it we also reference the argocd namespace and allow all service accounts in that namespace (system:serviceaccount:argocd:*) to assume the AWS IAM role we’ll create in the next step.

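A sketch of the trust policy, following the AWS documentation linked above; the StringLike condition is what allows every service account in the argocd namespace:

```
cat > trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:argocd:*"
        }
      }
    }
  ]
}
EOF
```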

Make sure trust.json includes the proper AWS_ACCOUNT_ID and OIDC_PROVIDER. We will use trust.json while creating the AWS IAM Role:

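Creating the role then looks roughly like:

```
aws iam create-role \
  --role-name ArgoCD \
  --assume-role-policy-document file://trust.json \
  --description "Role that ArgoCD assumes through the management cluster's OIDC provider"
```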

Finally we’ll create an inline policy that gives the IAM role the ability to assume other roles. We need this so ArgoCD can assume the Deployer role we’ll create later.

(We could also use the ArgoCD role directly for that purpose, but I find having a separate role for deploying resources into the testing cluster more flexible and more secure.)

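A sketch of the inline policy; the policy name assume-role is an assumption, and you could narrow Resource to the ARN of the Deployer role once it exists:

```
cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "*"
    }
  ]
}
EOF

aws iam put-role-policy \
  --role-name ArgoCD \
  --policy-name assume-role \
  --policy-document file://policy.json
```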

You can use the AWS web console to inspect the role we just created. The main thing to note is that we created this role so that the Kubernetes service accounts that ArgoCD uses in the management cluster have an AWS IAM role to assume through the OIDC provider.

Setting up ArgoCD

Install ArgoCD

Create a namespace for ArgoCD in the management cluster and install ArgoCD into it:

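The standard install manifest published by the ArgoCD project works here:

```
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```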

After the install is completed you should see several pods running:

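You can check with kubectl; the exact pod list depends on the ArgoCD version:

```
kubectl get pods -n argocd

# Expect pods such as argocd-server, argocd-repo-server,
# argocd-application-controller, argocd-dex-server and argocd-redis
# to reach the Running state.
```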

Note that the name of the argocd-server pod is important: it is also the initial password of the admin user that ArgoCD creates automatically, so be sure to copy and save it. An easy command to get the name:

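One way to extract it, assuming the labels the install manifest applies:

```
kubectl get pods -n argocd \
  -l app.kubernetes.io/name=argocd-server \
  -o name | cut -d'/' -f 2
```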

For the purposes of this tutorial we won’t set up any ingress controllers; you are free to do that yourself. To use the admin interface we will set up a local port-forward instead.

Open up an extra terminal and run the command:

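Port-forwarding the argocd-server service works well for this:

```
kubectl port-forward svc/argocd-server -n argocd 8080:443
```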

Now you can navigate your browser to https://localhost:8080 and login with username admin and the password you copied earlier.

Patch ArgoCD

In order to instruct ArgoCD to use the role we defined earlier, we need to annotate the Kubernetes service accounts ArgoCD uses with the ARN of the role. The kubectl patch command provides an easy way to adjust a Kubernetes resource:

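A sketch of the patches, assuming the AWS_ACCOUNT_ID variable from earlier is still set:

```
kubectl patch serviceaccount argocd-server -n argocd \
  --patch "{\"metadata\": {\"annotations\": {\"eks.amazonaws.com/role-arn\": \"arn:aws:iam::${AWS_ACCOUNT_ID}:role/ArgoCD\"}}}"

kubectl patch serviceaccount argocd-application-controller -n argocd \
  --patch "{\"metadata\": {\"annotations\": {\"eks.amazonaws.com/role-arn\": \"arn:aws:iam::${AWS_ACCOUNT_ID}:role/ArgoCD\"}}}"
```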

It is important that the annotations show the correct ARN of the ArgoCD role; otherwise ArgoCD won’t know which AWS IAM role to assume. Check that the service accounts were updated correctly with:

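For example:

```
# The Annotations section of each should show the eks.amazonaws.com/role-arn we set.
kubectl describe serviceaccount argocd-server -n argocd
kubectl describe serviceaccount argocd-application-controller -n argocd
```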

Patch the deployment and the statefulset to set securityContext/fsGroup to 999 so the user in the Docker image can actually use the IAM Authenticator. You need this because the IAM Authenticator will try to mount a secret at /var/run/secrets/eks.amazonaws.com/serviceaccount/token. If the correct fsGroup (999 corresponds to the argocd user) isn’t set, this will fail.

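A sketch; argocd-server runs as a deployment and the application controller as a statefulset:

```
kubectl patch deployment argocd-server -n argocd \
  --patch '{"spec": {"template": {"spec": {"securityContext": {"fsGroup": 999}}}}}'

kubectl patch statefulset argocd-application-controller -n argocd \
  --patch '{"spec": {"template": {"spec": {"securityContext": {"fsGroup": 999}}}}}'
```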

After the patching of the deployment and the statefulset you should see the application-controller and argocd-server pods restart.

Take a look in the extra terminal you opened earlier. The port-forward should be broken because the pod restarted, so restore it:

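```
kubectl port-forward svc/argocd-server -n argocd 8080:443
```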

Creating testing cluster

Provision the testing cluster

Now that we’ve got a working management cluster with ArgoCD installed, we need to set up the testing cluster that ArgoCD will deploy to. We’ll create another cluster with everything it needs, so execute this command and go get yourself another cup of coffee:

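Again a minimal eksctl sketch, assuming the cluster name testing; note the absence of --with-oidc this time:

```
eksctl create cluster \
  --name testing \
  --region "${AWS_REGION}"
```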

The management cluster we created earlier was created with an OIDC provider; we won’t need that on the testing cluster. We do, however, need an AWS IAM role capable of deploying applications into this cluster.

Create AWS IAM Role

We want the ArgoCD role to be able to assume the Deployer role. That’s why we also create a trust relationship for the ArgoCD role. (In a multi-account setup you would change this trust relationship to reference the ArgoCD role in the account that holds the management cluster, and you would place the Deployer role in the same account as the testing cluster.)

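A sketch of the trust policy and role creation. The eksctl identity mapping at the end is what grants the Deployer role permissions inside the testing cluster; the username and group are assumptions, and system:masters gives full admin, which you may want to narrow:

```
cat > deployer-trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::${AWS_ACCOUNT_ID}:role/ArgoCD"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name Deployer \
  --assume-role-policy-document file://deployer-trust.json

# Map the Deployer role into the testing cluster's aws-auth ConfigMap
# so it actually has permissions inside the cluster.
eksctl create iamidentitymapping \
  --cluster testing \
  --region "${AWS_REGION}" \
  --arn "arn:aws:iam::${AWS_ACCOUNT_ID}:role/Deployer" \
  --username deployer \
  --group system:masters
```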

Make sure the trust relationship references the ArgoCD role in the correct account (AWS_ACCOUNT_ID). You can check it with the following:

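One way to inspect the trust policy:

```
aws iam get-role --role-name Deployer \
  --query "Role.AssumeRolePolicyDocument" --output json
```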

You should now see something like this (the account ID 123456789012 below is illustrative; yours will differ):

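```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/ArgoCD"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```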

Deploy Guestbook application

Register testing cluster

We need to register the testing cluster with ArgoCD to be able to create applications that deploy to it. We’ll do that by creating a secret with all the cluster-specific details and deploying that secret into the management cluster.

To be able to do this we need:

  • server: the HTTPS endpoint of the cluster’s Kubernetes API

  • caData: the corresponding public certificate of the Kubernetes API

  • roleArn: the role ArgoCD will have to assume to be able to deploy to this cluster.

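A sketch that templates the secret into a local file. The awsAuthConfig/roleARN field is how ArgoCD knows which role to assume for this cluster; the file, secret and variable names are assumptions:

```
export TESTING_ENDPOINT=$(aws eks describe-cluster --name testing --region "${AWS_REGION}" \
  --query "cluster.endpoint" --output text)
export TESTING_CA=$(aws eks describe-cluster --name testing --region "${AWS_REGION}" \
  --query "cluster.certificateAuthority.data" --output text)

# The argocd.argoproj.io/secret-type: cluster label is what makes
# ArgoCD pick this secret up as a cluster registration.
cat > cluster-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: testing-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: testing
  server: ${TESTING_ENDPOINT}
  config: |
    {
      "awsAuthConfig": {
        "clusterName": "testing",
        "roleARN": "arn:aws:iam::${AWS_ACCOUNT_ID}:role/Deployer"
      },
      "tlsClientConfig": {
        "caData": "${TESTING_CA}"
      }
    }
EOF
```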

Confirm that the file shows the proper cluster roleArn, certificate and endpoint.

Switch the kubectl context back to the management cluster:

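eksctl pointed kubectl at the testing cluster when it created it; the exact context name depends on your IAM user:

```
# eksctl names contexts like <user>@<cluster>.<region>.eksctl.io;
# list them and pick the management cluster's context.
kubectl config get-contexts
kubectl config use-context <your-management-cluster-context>
```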

Finally register the cluster with ArgoCD:

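The namespace is already set in the manifest, so a plain apply is enough:

```
kubectl apply -f cluster-secret.yaml
```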

You should be able to find it in the UI of ArgoCD now, with a status of ‘unknown’.

Register Guestbook application

We’ll use a publicly available guestbook application to try out our setup. We can add the application and its public repository with a couple of argocd CLI commands.

Be sure the ArgoCD admin interface is available by opening up your extra terminal and running:

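```
kubectl port-forward svc/argocd-server -n argocd 8080:443
```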

Next, log in to ArgoCD with the CLI and create the guestbook application:

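A sketch using the well-known ArgoCD example-apps repository for the guestbook; if you prefer a different source, adjust the repo and path:

```
# Log in; you'll be prompted for the admin password you copied earlier.
argocd login localhost:8080 --username admin --insecure

# Create the application, pointing it at the testing cluster's API endpoint.
argocd app create guestbook \
  --repo https://github.com/argoproj/argocd-example-apps.git \
  --path guestbook \
  --dest-server "${TESTING_ENDPOINT}" \
  --dest-namespace default
```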

If you visit the UI you should see a guestbook application that is out of sync. Sync it to let ArgoCD create the Kubernetes resources inside the testing cluster.

This is the basic working setup; you can fine-tune it in many ways. For example, the trust relationship of role/ArgoCD currently allows every service account inside the argocd namespace to assume the role. You could change that so that only the specific argocd-server and application-controller service accounts are trusted. That, however, is an exercise left for the reader.

Clean up

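A sketch of the teardown, mirroring the resources created above:

```
# Delete the clusters (this also removes their VPCs, nodegroups, etc).
eksctl delete cluster --name testing --region "${AWS_REGION}"
eksctl delete cluster --name management --region "${AWS_REGION}"

# The inline policy has to go before the role that holds it.
aws iam delete-role-policy --role-name ArgoCD --policy-name assume-role
aws iam delete-role --role-name ArgoCD
aws iam delete-role --role-name Deployer

# Remove the local files we generated.
rm -f trust.json policy.json deployer-trust.json cluster-secret.yaml
```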

Some resources may take a while to be fully deleted, even after the delete commands return; this is due to how AWS cleans up resources like EC2 instances and security groups. You may want to check that the EKS clusters are gone and that all EC2 nodes are in the terminated state.

Congratulations if you made it all the way to the end. I hope you found this tutorial useful.

Timothy Kanters - Modulo 2