ArgoCD
is a GitOps tool for performing deployments in a Kubernetes environment. It consists of three main components:
- API Server
- Repository Server
- Application Controller
Prerequisites
- You should have an EKS cluster configured
- The kubectl command-line tool should be available
- The AWS CLI should be available
To install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.2.1/manifests/install.yaml
The latest release version is available in the official documentation.
In order to communicate with the API server, we have to configure the ArgoCD CLI.
sudo curl --silent --location -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v2.2.1/argocd-linux-amd64
sudo chmod +x /usr/local/bin/argocd
Expose argocd-server
By default, argocd-server is not publicly exposed, so we have to create a service and an ingress to expose the endpoint. This requires deploying the AWS ALB ingress controller. Ref Link
Steps to follow
- Download an IAM policy for the AWS Load Balancer Controller. You can view the policy document on GitHub.
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.3.1/docs/install/iam_policy.json
- Create an IAM policy using the policy downloaded in the previous step.
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
- Create an IAM role and annotate the Kubernetes service account that’s named aws-load-balancer-controller in the kube-system namespace for the AWS Load Balancer Controller.
Using the AWS Management Console and kubectl
3a. Open the IAM console at https://console.aws.amazon.com/iam/.
3b. In the navigation panel, choose Roles, Create Role.
3c. In the Select type of trusted entity section, choose Web identity.
3d. In the Choose a web identity provider section:
For Identity provider, choose the URL for your cluster.
For Audience, choose sts.amazonaws.com.
Choose Next: Permissions.
3e. In the Attach Policy section, select the AWSLoadBalancerControllerIAMPolicy policy that you created in a previous step to use for your service account.
3f. Choose Next: Tags.
3g. Choose Next: Review.
3h. For Role Name, enter a name for your role, such as AmazonEKSLoadBalancerControllerRole, and then choose Create Role.
3i. After the role is created, choose the role in the console to open it for editing.
3j. Choose the Trust relationships tab, and then choose Edit trust relationship.
3k. Find the line that looks similar to the following line:
`"oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E:aud": "sts.amazonaws.com"`
Change the line to look like the following line. Replace EXAMPLED539D4633E53DE1B716D3041E with your cluster's OIDC provider ID and, if necessary, replace region-code with the Region code that your cluster is in. Change aud to sub and replace sts.amazonaws.com with the value in the following text.
`"oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E:sub":"system:serviceaccount:kube-system:aws-load-balancer-controller"`
3l. Choose Update Trust Policy to finish.
3m. Note the ARN of the role for use in a later step.
3n. Save the following contents to a file that's named aws-load-balancer-controller-service-account.yaml, replacing 111122223333 with your account ID.
=============================================
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKSLoadBalancerControllerRole
=============================================
Create the service account on your cluster.
3o. kubectl apply -f aws-load-balancer-controller-service-account.yaml
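For reference, after the edit in steps 3k and 3l, the role's trust policy should end up looking roughly like the following. This is a sketch based on the standard IRSA trust policy shape; the account ID, Region, and OIDC provider ID are placeholders to replace with your own values.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
        }
      }
    }
  ]
}
```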
- Add the following IAM policy to the IAM role created in a previous step. The policy allows the AWS Load Balancer Controller access to the resources that were created by the ALB Ingress Controller for Kubernetes.
Download the IAM policy. You can also view the policy.
curl -o iam_policy_v1_to_v2_additional.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.3.1/docs/install/iam_policy_v1_to_v2_additional.json
Create the IAM policy and note the ARN that is returned.
aws iam create-policy \
--policy-name AWSLoadBalancerControllerAdditionalIAMPolicy \
--policy-document file://iam_policy_v1_to_v2_additional.json
Attach the IAM policy to the IAM role that you created in a previous step.
aws iam attach-role-policy \
--role-name your-role-name \
--policy-arn arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerAdditionalIAMPolicy
Install the AWS Load Balancer Controller using Helm V3.
Add the eks-charts repository.
helm repo add eks https://aws.github.io/eks-charts
Update your local repo to make sure that you have the most recent charts.
helm repo update
Install the AWS Load Balancer Controller.
If you're deploying the controller to nodes with restricted access to the instance metadata service, or if you're deploying to Fargate, then add the following flags to the helm install command below:
--set region=region-code
--set vpcId=vpc-xxxxxxxx
If you’re deploying to any Region other than us-west-2, then add the following flag to the following command
--set image.repository=account.dkr.ecr.region-code.amazonaws.com/amazon/aws-load-balancer-controller
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=cluster-name \
--set region=region-code \
--set vpcId=vpc-xxxxxxxx \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
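The same --set flags can be kept in a values file instead, which is easier to track in Git. A sketch: the key names follow the eks/aws-load-balancer-controller chart, and cluster-name, region-code, and vpc-xxxxxxxx remain placeholders for your own values.

```yaml
# values.yaml -- pass with: helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system -f values.yaml
clusterName: cluster-name
region: region-code
vpcId: vpc-xxxxxxxx
serviceAccount:
  create: false   # the annotated service account was created in an earlier step
  name: aws-load-balancer-controller
```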
Verify that the controller is installed.
kubectl get deployment -n kube-system aws-load-balancer-controller
Output
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           84s
Now we have to point a domain at the load balancer. To create the service and ingress, we can follow the sample manifest files below.
---
apiVersion: v1
kind: Service
metadata:
  name: argogrpc
  namespace: argocd
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
  labels:
    app: argogrpc
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/certificate-arn: <ACM ARN>
    alb.ingress.kubernetes.io/inbound-cidrs: x.xx.xx.xx/32, xx.xx.xx.xx/32
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-2019-08
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets: <public subnets>
    alb.ingress.kubernetes.io/target-type: ip
    # Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
spec:
  rules:
  - host: "argocd.demo-angel.com"
    http:
      paths:
      - backend:
          service:
            name: argogrpc
            port:
              number: 443
        pathType: ImplementationSpecific
      - backend:
          service:
            name: argocd-server
            port:
              number: 443
        pathType: ImplementationSpecific
Get Admin credentials
The initial admin password is autogenerated and stored in the argocd-initial-admin-secret secret. You can retrieve it by running the following command:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
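This works because the secret stores the password base64-encoded in its .data.password field: the jsonpath expression extracts that field and base64 -d decodes it. A minimal local sketch of just the decoding step, where the encoded string is a made-up stand-in rather than a real ArgoCD password:

```shell
# Stand-in for what `kubectl ... -o jsonpath="{.data.password}"` would print:
encoded="c2VjcmV0UGFzc3dvcmQx"

# Decode it the same way the full pipeline does:
password=$(printf '%s' "$encoded" | base64 -d)
echo "$password"   # prints: secretPassword1
```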
You can use the credentials retrieved in the above step to log in as admin in the dashboard.
Or from the CLI:
kubectl config use-context arn:aws:eks:us-east-1:XXXXXXXX:cluster/eks-argocd
argocd login --port-forward --port-forward-namespace argocd
Note: By default, the cluster in which ArgoCD is deployed is added as a local cluster.
To add repositories:
- Create a dedicated SSH key pair for ArgoCD <-> GitHub authentication.
- Add the SSH public key under the GitHub deployment bot account.
- Go to ArgoCD Console > Settings > Repositories > Connect Repo using SSH > provide your repository link and the SSH private key to connect.
- Once added, it will be listed in the Repositories tab, where you can verify the Connection Status.
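Repositories can also be registered declaratively instead of through the console: ArgoCD 2.x picks up any Secret in the argocd namespace labeled argocd.argoproj.io/secret-type: repository. A sketch; the secret name, repository URL, and key material are placeholders.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-repo          # any name; the label below is what ArgoCD matches on
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:your-org/your-repo.git   # placeholder repository
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    (private key content)
    -----END OPENSSH PRIVATE KEY-----
```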
Note: We can also include the cluster autoscaler and metrics server in the EKS environment, for node scaling and metrics collection respectively.
Reference for Cluster AutoScaler
Reference for Metrics Server
That’s it