End-to-End Deployment of Nginx on Kubernetes Cluster Using kubeadm

To deploy Nginx on a Kubernetes cluster, begin by provisioning two EC2 instances: one designated as the master node and the other as a worker node. After creating these instances, proceed to install the necessary tools and commands on both nodes. Once the setup is complete and all prerequisites are in place, you can move forward with deploying Nginx on the Kubernetes cluster.

On the master node (kubectl is typically configured on the control plane after kubeadm init):

mkdir projects

cd projects

mkdir nginx

cd nginx

kubectl create namespace nginx

kubectl get namespace
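Equivalently, the namespace can be declared in a manifest of its own (a minimal sketch; the filename namespace.yml is just an example) and created with kubectl apply -f namespace.yml:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
```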

Create a Pod manifest:

vim pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: nginx
spec:
  containers:
    - name: nginx-container
      image: nginx:latest
      ports:
        - containerPort: 80
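If you prefer not to use vim, the same manifest can be written non-interactively with a heredoc and sanity-checked locally before applying it (a sketch; note the space after the dash in "- containerPort: 80" — without it the YAML is invalid). No cluster is needed for the check:

```shell
# Write the Pod manifest with a heredoc instead of vim.
cat > pod.yml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: nginx
spec:
  containers:
    - name: nginx-container
      image: nginx:latest
      ports:
        - containerPort: 80
EOF

# Local sanity check: the required fields are present.
grep -q 'kind: Pod' pod.yml && grep -q 'containerPort: 80' pod.yml && echo "manifest written"
```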

kubectl apply -f pod.yml

kubectl get pods -n nginx

vim deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
        - name: nginx-container
          image: nginx:latest

kubectl apply -f deployment.yml

If you run kubectl get pods with no namespace flag, it reports "No resources found in default namespace", because we created everything in the nginx namespace. List the pods with:

kubectl get pods -n nginx
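One easy way to break a Deployment manifest is a spec.selector.matchLabels that does not match the template labels — the API server rejects such a file. The check below (a sketch; no cluster needed) writes the Deployment manifest above with a heredoc and confirms the two label values agree:

```shell
cat > deployment.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
        - name: nginx-container
          image: nginx:latest
EOF

# Extract the selector label and the pod template label, then compare them.
selector=$(grep -A1 'matchLabels:' deployment.yml | tail -n1 | awk '{print $2}')
template=$(grep -A3 'template:' deployment.yml | grep 'app:' | awk '{print $2}')
[ "$selector" = "$template" ] && echo "selector matches template labels: $selector"
```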

To show pod IPs and the node each pod runs on:

kubectl get pods -n nginx -o wide

kubectl get deployment -n nginx

kubectl describe deployment nginx-deployment -n nginx

There are two options for scaling a deployment:

1. Edit deployment.yml and set replicas to the desired count (e.g., replicas: 2), then re-apply the file.

2. Run: kubectl scale deployment nginx-deployment --replicas=5 -n nginx

On the worker node: since replicas=5, the deployment now runs five pods there.

vim service.yml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx
spec:
  selector:
    app: nginx-app   # Use the appropriate label to match your pod
  ports:
    - protocol: TCP
      port: 80       # Port in the service
      targetPort: 80 # Port in the pod
      nodePort: 30007 # Port exposed on the node for external users
  type: NodePort
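The three port fields in a NodePort Service play different roles and are easy to mix up: port is the Service's own port inside the cluster, targetPort is the container port traffic is forwarded to, and nodePort is the port opened on every node for external access. As a quick local illustration (a sketch; no cluster needed), write the Service manifest with a heredoc and list the port lines side by side:

```shell
cat > service.yml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx
spec:
  selector:
    app: nginx-app
  ports:
    - protocol: TCP
      port: 80        # the Service's own port inside the cluster
      targetPort: 80  # the container port traffic is forwarded to
      nodePort: 30007 # the port opened on every node for external access
  type: NodePort
EOF

# Show the three port fields together.
grep -E 'port:' service.yml
```

External clients then reach nginx at http://<node-public-ip>:30007 (placeholder for your instance's public IP), while in-cluster clients use the service name on port 80.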

kubectl apply -f service.yml

To enable external access to services running on your EC2 instances (both the master and worker nodes, which share the same security group), you need to add an inbound rule that allows traffic on port 30007. This will permit external systems to communicate with your cluster over port 30007.

In a Kubernetes cluster, it's common to use a range of ports for services known as "NodePort" services. The default NodePort port range is 30000-32767. This range allows you to expose services externally by mapping a port within this range to a service within your cluster.
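A nodePort outside that range is rejected by the API server. A quick local sanity check (plain shell, no cluster needed; the variable name is illustrative):

```shell
node_port=30007

# The default Kubernetes NodePort range is 30000-32767.
if [ "$node_port" -ge 30000 ] && [ "$node_port" -le 32767 ]; then
  echo "$node_port is a valid NodePort"
else
  echo "$node_port is outside the default NodePort range"
fi
```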

At this point, the deployment has been scaled to 2 replicas.

On the worker node, docker ps shows all the running containers:

docker kill container_id

On the master node: If a Docker container within a pod is terminated (e.g., it crashes or is manually killed), the Kubernetes Deployment controller will automatically detect that the pod is no longer running and initiate the creation of a new pod to replace it. This behavior is a fundamental feature of Deployments in Kubernetes.

The Deployment controller ensures that the actual state (the number of running pods) matches the desired state specified in the associated Deployment manifest (such as your deployment.yml). If the desired state is not met (e.g., the number of pods falls below the specified replica count), the controller takes corrective action to bring the cluster back to the desired state by creating new pods.
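The control loop described above can be illustrated with a toy shell sketch (purely illustrative — the real controller watches the API server; the counts here are made up):

```shell
desired=5   # replica count from the Deployment manifest
actual=3    # pretend two pods were just killed

# Reconcile: create replacements until the actual count matches the desired count.
while [ "$actual" -lt "$desired" ]; do
  actual=$((actual + 1))
  echo "created replacement pod ($actual/$desired)"
done
echo "desired state reached: $actual/$desired pods running"
```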

Thank you