Advanced End-to-End DevSecOps Kubernetes Three-Tier Project using Terraform, AWS EKS, ArgoCD, Prometheus, Grafana, and Jenkins

In this End-to-End DevSecOps Kubernetes project, we crafted a robust Three-Tier architecture featuring a React.js frontend, Node.js backend, and MongoDB database tier, provisioned on AWS with Terraform. Jenkins served as the cornerstone of automation: fetching code from GitHub, running code analysis via SonarQube, and performing OWASP dependency checks. Trivy scans fortified our files and images, ensuring robust security. Docker images were then built and pushed to private AWS ECR repositories, culminating in deployment on Kubernetes orchestrated by ArgoCD. Throughout this journey, Grafana and Prometheus vigilantly monitored the entire process, ensuring reliability and performance at every step.

Project Overview: Our project unfolds across multiple phases, each meticulously designed to cultivate skills and understanding in DevSecOps practices:

  1. IAM User Setup: We begin by crafting an IAM user on AWS, endowed with tailored permissions to orchestrate deployment and management activities seamlessly.

  2. Infrastructure as Code (IaC): With Terraform and AWS CLI at our disposal, we erect the foundation of our architecture by provisioning the Jenkins server (an EC2 instance) on AWS.

  3. Jenkins Server Configuration: Next, we infuse our Jenkins server with the essential toolset, including Jenkins itself, Docker, Sonarqube, Terraform, Kubectl, AWS CLI, and Trivy, ensuring a potent platform for automation.

  4. EKS Cluster Deployment: Leveraging eksctl commands, we manifest an Amazon EKS cluster—a managed Kubernetes service on AWS—furnishing the bedrock for our containerized applications.

  5. Load Balancer Configuration: We then fashion an AWS Application Load Balancer (ALB) to facilitate traffic routing and load distribution within the EKS cluster, optimizing performance and reliability.

  6. Amazon ECR Repositories: Private repositories for frontend and backend Docker images are meticulously crafted on Amazon Elastic Container Registry (ECR), safeguarding our artifacts while enabling seamless deployment.

  7. ArgoCD Installation: ArgoCD, a stalwart of continuous delivery and GitOps practices, is integrated into our ecosystem, streamlining the deployment process with efficiency and agility.

  8. Sonarqube Integration: The integrity of our codebase is fortified with Sonarqube, furnishing invaluable insights into code quality and vulnerabilities, ensuring robustness in our DevSecOps pipeline.

  9. Jenkins Pipelines: Our DevOps journey reaches new heights as we forge Jenkins pipelines, orchestrating the end-to-end deployment process—from fetching code from GitHub to code analysis, dependency checks, image creation, and deployment on Kubernetes through ArgoCD.

  10. Monitoring Setup: A vigilant eye is cast over our environment with Helm, Prometheus, and Grafana, instilling a culture of proactive monitoring and rapid response to deviations.

  11. ArgoCD Application Deployment: ArgoCD emerges as our trusted ally, ushering in continuous deployment of the Three-Tier application—comprising database, backend, frontend, and ingress components—with precision and reliability.

  12. DNS Configuration: We extend accessibility to our application by configuring DNS settings, paving the way for seamless access via custom subdomains.

  13. Data Persistence: Data integrity is paramount as we implement persistent volume and persistent volume claims for database pods, ensuring resilience and continuity.

  14. Conclusion and Monitoring: As we draw the curtains on our project, we reflect on key achievements and embrace the power of Grafana for holistic monitoring, ensuring the sustained performance of our EKS cluster.

Prerequisites:

Before starting the project, ensure you have the following prerequisites:

  • An AWS account with the necessary permissions to create resources.

  • Terraform and AWS CLI installed on your local machine. (I used VS Code.)

    To install Terraform locally, create the script below, make it executable, and run it:

    vim script1.sh

    chmod +x script1.sh

    sh script1.sh

#!/bin/bash
# Installing Terraform
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt install terraform -y

AWS CLI: Configuring AWS Access Keys on Your Local Machine

Ensure you have the AWS CLI installed before proceeding.

Step-by-Step Instructions:

  1. Install AWS CLI:

    vim script2.sh

    chmod +x script2.sh

    sh script2.sh

     #!/bin/bash
     # Installing AWS CLI
     curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
     sudo apt install unzip -y
     unzip awscliv2.zip
     sudo ./aws/install
    
  2. Obtain AWS Access Keys: Have your AWS access key ID and secret access key ready. These can be generated from the AWS Management Console under IAM (Identity and Access Management), as explained in Step 1 below.

  3. Configure AWS CLI: Open your terminal or command prompt and run the following command to configure the AWS CLI with your access keys:

     aws configure
    

    Enter the following details when prompted:

    • AWS Access Key ID: Enter your access key ID.

    • AWS Secret Access Key: Enter your secret access key.

    • Default region name: Enter your preferred AWS region (e.g., us-east-1).

    • Default output format: Enter your preferred output format (e.g., json).

Step 1. Create an IAM (Identity and Access Management) user on AWS. Here's a concise guide to walk you through the process:

Open the IAM console: From the services menu, locate and select "IAM" under the Security, Identity, & Compliance category.

Under Permissions, grant administrative access by choosing "Attach policies directly" and attaching the AdministratorAccess policy.

Select the newly created user, go to Security credentials -> Create access key (never disclose your access key ID or secret access key).

Configure these access keys on your local machine (make sure the AWS CLI is installed first):

aws configure

Step 2. Create the Terraform files to provision the entire infrastructure on AWS.

GitHub link: https://github.com/Pardeep32/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project.git

Go to the Application Code in this repo and clone it to your local machine. Make sure to change the bucket name and region in the backend.tf file, and create an S3 bucket with the same name in your account.

Now deploy the Jenkins EC2 server on AWS using the Terraform files you cloned, with the following commands:

terraform init

terraform validate

terraform plan

terraform apply --auto-approve

The AWS EC2 instance is created in the ca-central-1 region.

Connect to the EC2 instance via SSH.
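For example, assuming an Ubuntu AMI and a key pair named jenkins-key (a placeholder; use your own key file and the instance's public IP):

# Ubuntu AMIs use the "ubuntu" login user
ssh -i jenkins-key.pem ubuntu@<jenkins-server-public-ip>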

Once connected, verify the versions of the tools that Terraform installed on the server during resource creation.
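A quick sanity check, assuming the toolset listed earlier was installed by the Terraform user-data script (adapt if your install differs):

# Confirm each tool responds with a version
sudo systemctl status jenkins --no-pager
docker --version
terraform --version
kubectl version --client
aws --version
trivy --version
eksctl version
java -version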

Step 3. Configure Jenkins

sudo systemctl status jenkins

To unlock Jenkins, copy the initial admin password and paste it into the Jenkins unlock page.
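On a standard package install, the initial password lives at the well-known path below:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword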

Install the suggested plugins.

Create your first admin user.

Step 4. We will deploy the EKS Cluster using eksctl commands

Go to Manage Jenkins -> Plugins -> Available plugins and install:

AWS steps

pipeline AWS steps

After installing the plugins, restart Jenkins.

Now, we have to set our AWS credentials on Jenkins

Go to Manage Jenkins -> Credentials -> Global -> Add Credentials.

Select AWS Credentials as the Kind, use the same ID as shown in the snippet below, enter your own AWS Access Key & Secret Access Key, and click on Create.

Now, we need to add GitHub credentials as well, because my repository is private. I'm doing this deliberately: in industry projects, your repository will usually be private.

So, add your GitHub username and a personal access token (you can't use your GitHub password directly; use a token instead).

Step 5. Create an EKS cluster using the commands below.

eksctl create cluster --name Three-Tierrrr-EKS-Cluster --region ca-central-1 --node-type t2.medium --nodes-min 2 --nodes-max 2
aws eks update-kubeconfig --region ca-central-1 --name Three-Tierrrr-EKS-Cluster

Once your cluster is created, you can validate that your nodes are ready with the command below:

kubectl get nodes

Step 6. Now, we will configure the Load Balancer on our EKS cluster, because our application will use an ingress controller.

Download the policy for the LoadBalancer prerequisite.

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json

Create the IAM policy using the below command

aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json

Create OIDC Provider

eksctl utils associate-iam-oidc-provider --region=ca-central-1 --cluster=Three-Tierrrr-EKS-Cluster --approve

Create a service account using the command below, replacing the account ID with your own:

eksctl create iamserviceaccount --cluster=Three-Tierrrr-EKS-Cluster --namespace=kube-system --name=aws-load-balancer-controller --role-name AmazonEKSLoadBalancerControllerRole --attach-policy-arn=arn:aws:iam::654654392783:policy/AWSLoadBalancerControllerIAMPolicy --approve --region=ca-central-1

Run the below command to deploy the AWS Load Balancer Controller

sudo snap install helm --classic
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=Three-Tierrrr-EKS-Cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

After a couple of minutes, run the command below to check whether the controller pods are running:

kubectl get deployment -n kube-system aws-load-balancer-controller

Step 7. We need to create Amazon ECR Private Repositories for both Tiers (Frontend & Backend)

Select the private repo -> threetierfrontend -> View push commands, and run those commands one by one in the Jenkins server's terminal; they follow the pattern sketched below.
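For reference, the push commands ECR displays follow this pattern (shown with the region and example account ID used earlier; substitute your own values):

# Authenticate Docker to the private ECR registry
aws ecr get-login-password --region ca-central-1 | docker login --username AWS --password-stdin 654654392783.dkr.ecr.ca-central-1.amazonaws.com

# Build, tag, and push the frontend image
docker build -t threetierfrontend .
docker tag threetierfrontend:latest 654654392783.dkr.ecr.ca-central-1.amazonaws.com/threetierfrontend:latest
docker push 654654392783.dkr.ecr.ca-central-1.amazonaws.com/threetierfrontend:latest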

Do the same for the backend repository (threetierbackend).

Step 8. Install & Configure ArgoCD

We will be deploying our application in a dedicated namespace. To do that, create the three-tier namespace on EKS:

kubectl create namespace three-tier

As you know, our two ECR repositories are private, so when Kubernetes tries to pull images from them, the pods will fail with an ImagePullBackOff error.

To get rid of this error, we will create a secret for our ECR registry with the command below, and then add this secret to the deployment file.

Note: the secret is built from the .docker/config.json file, which was created when you logged in to ECR in the earlier steps.

kubectl create secret generic ecr-registry-secret \
  --from-file=.dockerconfigjson=${HOME}/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson --namespace three-tier
kubectl get secrets -n three-tier

Now, we will install ArgoCD.

To do that, create a separate namespace for it and apply the ArgoCD installation manifest:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml

All pods must be running; to validate, run the command below:

kubectl get pods -n argocd

Now, expose the ArgoCD server as a LoadBalancer using the command below:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

You can validate whether the Load Balancer is created or not by going to the AWS Console.

To access ArgoCD, copy the LoadBalancer DNS name and open it in your browser.

You will get a certificate warning like the one in the snippet below.

Click on Advanced and proceed.

Now, we need to get the password for our ArgoCD server to perform the deployment.

To do that, we have one prerequisite: jq. Install it with the command below.

sudo apt install jq -y

export ARGOCD_SERVER=$(kubectl get svc argocd-server -n argocd -o json | jq -r '.status.loadBalancer.ingress[0].hostname')
export ARGO_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
echo $ARGO_PWD  # prints the ArgoCD admin password

Enter the username (admin) and the password in ArgoCD and click on SIGN IN.

Step 9. Now, we have to configure SonarQube for our DevSecOps pipeline.

To do that, copy your Jenkins server's public IP and open it in your browser on port 9000: http://<jenkins-server-public-ip>:9000

The default username and password are both admin.

Click on Log In.

Click on Administration -> Security -> Users.

Now generate a token by clicking the box under Tokens.

Click on Generate.

Copy this token and save it for later.

Now, we have to configure webhooks for quality checks in Sonarqube.

Click on Administration, then Configuration -> Webhooks -> Create. Add the name of the project and this URL:

http://<jenkins-server-public-ip>:8080/sonarqube-webhook/ (the URL in the screenshot is wrong; use this one instead)

Here, you can see the webhook.

Now, we have to create a project for the frontend code.

Click on Projects -> Manually.

Now choose the Locally option.

Select Other and Linux as OS.

After performing the above steps, you will get a command like the one in the snippet below.

Now, use that command in the Jenkins frontend pipeline, where the code quality analysis will be performed.
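The generated command has this general shape (the project key, host URL, and token below are placeholders; use the exact command SonarQube generated for you):

sonar-scanner \
  -Dsonar.projectKey=frontend \
  -Dsonar.host.url=http://<jenkins-server-public-ip>:9000 \
  -Dsonar.login=<your-sonar-token>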

Now, we have to create a project for the backend code.

Click on Create Project.

Go to Dashboard -> Manage Jenkins -> Credentials

Select the Kind as Secret text, paste your Sonar token in Secret, and keep everything else as it is.

Click on Create

Step 10. Now, we have to store the GitHub personal access token so the pipeline can push the deployment file, which it modifies with the new ECR image tag.

Add GitHub credentials

Select the Kind as Secret text, paste your GitHub personal access token (not your password) in Secret, and keep everything else as it is.

Click on Create

Note: If you haven't generated your token yet, generate it first and then paste it into Jenkins.

Now, according to our pipeline, we need to add the AWS Account ID to the Jenkins credentials, because it forms part of the ECR repo URI.

Select the Kind as Secret text, paste your AWS Account ID in Secret, and keep everything else as it is.

Click on Create

Now, we need to provide our ECR repository names: threetierfrontend for the frontend and threetierbackend for the backend.

Select the Kind as Secret text, paste the frontend repo name in Secret, and keep everything else as it is. Repeat for the backend repo name.

Click on Create

All the credentials should now look like the snippet below.

Step 11. Install the required plugins and configure the plugins to deploy our Three-Tier Application

Install the following plugins by going to Dashboard -> Manage Jenkins -> Plugins -> Available plugins:

Docker
Docker Commons
Docker Pipeline
Docker API
docker-build-step
Eclipse Temurin installer
NodeJS
OWASP Dependency-Check
SonarQube Scanner

Now, we have to configure the installed plugins.

Go to Dashboard -> Manage Jenkins -> Tools

We are configuring the JDK.

Search for JDK and provide the configuration as shown in the snippet below.

Sonarqube

Nodejs

Now, we will configure the OWASP Dependency check

Search for Dependency-Check and provide the configuration like the below snippet.

Now, we will configure Docker.

Search for docker and provide the configuration like the below snippet.

Now, we have to set the path for Sonarqube in Jenkins

Go to Dashboard -> Manage Jenkins -> System

Search for SonarQube installations.

Provide the name as it is, then in the Server URL field enter the SonarQube public IP (the same as the Jenkins server) with port 9000, select the sonar token that we added recently, and click on Apply & Save.

Now, we are ready to create our Jenkins Pipeline to deploy our Backend Code.

Go to Jenkins Dashboard

Click on New Item.

Step 12. This is the Jenkins file to deploy the Backend Code on EKS.

Copy and paste it into Jenkins:

https://github.com/Pardeep32/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project/blob/master/Jenkins-Pipeline-Code/Jenkinsfile-Backend

Now, we are ready to create our Jenkins Pipeline to deploy our Frontend Code.

Go to Jenkins Dashboard

Click on New Item.

Provide the name of your Pipeline and click on OK.

My pipeline stages were not displayed, so I installed the Blue Ocean plugin and viewed the output in Blue Ocean.

The pipeline automatically pushes the image to the private ECR repo.

The Jenkins backend pipeline also commits the updated image tag back to the GitHub repo.

When I ran build no. 8, the deployment file was automatically updated from build no. 7 to build no. 8.
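Conceptually, the pipeline's deployment-update stage does something like the sketch below. The manifest file name and tag pattern are illustrative, not the exact stage from the Jenkinsfile linked above; BUILD_NUMBER is the standard Jenkins environment variable:

# Illustrative sketch: bump the backend image tag to the current Jenkins build number
sed -i "s|\(threetierbackend:\)[0-9]\+|\1${BUILD_NUMBER}|" deployment.yaml
git add deployment.yaml
git commit -m "Update backend image tag to build ${BUILD_NUMBER}"
git push origin master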

Do the same for the frontend.

SonarQube output and the Dependency-Check trend:

Step 13. Create a new item for the frontend:

https://github.com/Pardeep32/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project/blob/master/Jenkins-Pipeline-Code/Jenkinsfile-Frontend

Step 14. Monitoring the EKS Cluster

a. Install Prometheus: Prometheus is an open-source monitoring and alerting toolkit designed for collecting and storing time-series data efficiently. It provides a powerful query language (PromQL) for flexible metric analysis, integrates seamlessly with Grafana for visualization, and includes built-in alerting capabilities. It's highly scalable, supports various service discovery mechanisms, and is ideal for monitoring applications, infrastructure, and containerized environments.

First of all, let's create a dedicated Linux user (sometimes called a system account) for Prometheus. Having individual users for each service serves two main purposes: it is a security measure that reduces the impact in case of an incident with the service, and it simplifies administration, as it becomes easier to track down which resources belong to which service.

To create a system user or system account, run the following commands one by one:

sudo useradd \
    --system \
    --no-create-home \
    --shell /bin/false prometheus

wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz
tar -xvf prometheus-2.47.1.linux-amd64.tar.gz
sudo mkdir -p /data /etc/prometheus
cd prometheus-2.47.1.linux-amd64/
sudo mv prometheus promtool /usr/local/bin/
sudo mv prometheus.yml /etc/prometheus/prometheus.yml
sudo chown -R prometheus:prometheus /etc/prometheus/ /data/
cd
rm -rf prometheus-2.47.1.linux-amd64.tar.gz
prometheus --version

We’re going to use Systemd, which is a system and service manager for Linux operating systems. For that, we need to create a Systemd unit configuration file.

sudo vim /etc/systemd/system/prometheus.service

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle
[Install]
WantedBy=multi-user.target

To start Prometheus automatically after a reboot, enable the service; then start it and check its status:

sudo systemctl enable prometheus
sudo systemctl start prometheus
sudo systemctl status prometheus

Open http://<ec2-instance-public-ip>:9090 in your browser (first add port 9090 to the EC2 instance's security group).

b. Now install Node Exporter on the EC2 instance:

Node Exporter is an essential tool for Prometheus, designed to collect hardware and OS-level metrics from Linux systems. It provides valuable insights into system performance, including CPU, memory, disk usage, and network statistics. By installing Node Exporter, you can monitor your infrastructure more effectively, ensuring better resource utilization and early detection of potential issues.

sudo useradd \
    --system \
    --no-create-home \
    --shell /bin/false node_exporter

wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz

sudo mv \
  node_exporter-1.6.1.linux-amd64/node_exporter \
  /usr/local/bin/

rm -rf node_exporter*
node_exporter --version
sudo vim /etc/systemd/system/node_exporter.service

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter \
    --collector.logind
[Install]
WantedBy=multi-user.target

sudo systemctl enable node_exporter
sudo systemctl start node_exporter
sudo systemctl status node_exporter

At this point, we have only a single target in our Prometheus. There are many different service discovery mechanisms built into Prometheus. For example, Prometheus can dynamically discover targets in AWS, GCP, and other clouds based on the labels. In the following tutorials, I’ll give you a few examples of deploying Prometheus in a cloud-specific environment. For this tutorial, let’s keep it simple and keep adding static targets. Also, I have a lesson on how to deploy and manage Prometheus in the Kubernetes cluster.

To create a static target, you need to add job_name with static_configs.

sudo vim /etc/prometheus/prometheus.yml

# In /etc/prometheus/prometheus.yml, add this job under scrape_configs (we will do the same for Jenkins later):
  - job_name: node_export
    static_configs:
      - targets: ["localhost:9100"]

By default, Node Exporter will be exposed on port 9100.

Since we enabled lifecycle management via API calls, we can reload the Prometheus config without restarting the service and causing downtime.

Before restarting, check that the config is valid:

promtool check config /etc/prometheus/prometheus.yml
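If the check passes, trigger a live reload through the lifecycle API we enabled earlier with --web.enable-lifecycle:

curl -X POST http://localhost:9090/-/reload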

c. Install Grafana on Ubuntu 22.04

To visualize metrics we can use Grafana. There are many different data sources that Grafana supports, one of them is Prometheus.

First, let’s make sure that all the dependencies are installed.

sudo apt-get install -y apt-transport-https software-properties-common
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get -y install grafana
sudo systemctl enable grafana-server
sudo systemctl start grafana-server

Now add Prometheus as a data source: click Add data source, select Prometheus, enter the Prometheus server URL, and save.

Go to Dashboards -> New -> Import.

Enter 1860 (the Node Exporter Full dashboard) in the "Find and import dashboards" field, click Load, and save the dashboard.

d. Install the Prometheus Plugin and Integrate it with the Prometheus server

Let's monitor the Jenkins system.

You need a Jenkins machine up and running.

Go to Manage Jenkins -> Plugins -> Available Plugins.

Search for Prometheus and install it.

Once that is done, you will see that the Prometheus metrics path is set to /prometheus in the system configuration.

Nothing to change; click on Apply and Save.

To create a static target for Jenkins too, add another job_name with static_configs. Go to the Prometheus server:

sudo vim /etc/prometheus/prometheus.yml
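A sketch of the Jenkins scrape job, assuming the plugin's default /prometheus metrics path (mentioned above) and the default Jenkins port 8080; replace the placeholder with your Jenkins server IP:

  - job_name: jenkins
    metrics_path: /prometheus
    static_configs:
      - targets: ["<jenkins-server-public-ip>:8080"]

Then validate and reload the config as shown earlier.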

Let's add a dashboard for a better view in Grafana.

Click on Dashboard -> + -> Import Dashboard.

Enter ID 9964, click on Load, and import the dashboard.

Step 15. We will deploy our Three-Tier Application using ArgoCD.

As our repository is private, we need to configure the private repository in ArgoCD.

Click on Settings and select Repositories.

Click on REPO and then CONNECT REPO USING HTTPS.

Now, provide the repository where your manifest files are present.

Provide the username and GitHub personal access token and click on CONNECT.

If your Connection Status is Successful, the repository is connected.

Now, we will create our first application, which will be the database.

Click on CREATE APPLICATION.

While your database application starts deploying, we will create an application for the backend.

Provide the details as shown in the snippet below and scroll down.

While your backend application starts deploying, we will create an application for the frontend.

Provide the details as shown in the snippet below and scroll down.

While your frontend application starts deploying, we will create an application for the ingress.

Provide the details as shown in the snippet below and scroll down.

Once your ingress application is deployed, it will create an Application Load Balancer.

You can check out the load balancer, named with the k8s-three prefix, in the AWS console.

Now, copy the ALB DNS name and go to your domain provider; in my case, Hostinger is the domain provider.

Go to DNS and add a CNAME record with the hostname backend, put your ALB DNS name in the Answer field, and click on Save.

Note: I have created a subdomain.

Now, open your subdomain in your browser after 2 to 3 minutes to see that your application is up and running.

Ingress in ArgoCD

backend

frontend

database

If you observe, we have configured a Persistent Volume & Persistent Volume Claim, so if the pods get deleted, the data won't be lost; it is stored on the host machine.

To validate it, delete both database pods, as sketched below.
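A sketch, assuming the database pods carry a label such as app=mongodb (verify yours with kubectl get pods -n three-tier --show-labels):

# Delete the database pods; the controller recreates them
kubectl delete pods -n three-tier -l app=mongodb

# Watch the replacements start; the PV-backed data survives
kubectl get pods -n three-tier -w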

The new pods will start automatically, and the data will still be there.

Finally, delete the cluster to avoid ongoing AWS charges.
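Using the cluster name and region from earlier:

eksctl delete cluster --name Three-Tierrrr-EKS-Cluster --region ca-central-1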

Thank you for reading my article. If you have any doubts, feel free to reach out to me.