Deploying Your First Application on Kubernetes

Kubernetes, often abbreviated as K8s, is a powerful container orchestration platform that has become the industry standard for deploying, managing, and scaling containerized applications. If you’re new to Kubernetes, getting started can feel daunting, but this step-by-step guide is here to demystify the process and make your first application deployment a breeze.

Introduction to Kubernetes

Before diving into the nitty-gritty of deploying your first application on Kubernetes, let’s take a moment to understand what Kubernetes is and why it has become such a crucial tool in the world of container orchestration.

Kubernetes, originally developed by Google, is an open-source platform designed to automate the deployment, scaling, and management of containerized applications. It provides a highly efficient and scalable solution for running containers in production environments. Kubernetes abstracts the underlying infrastructure, whether it’s on-premises, in the cloud, or a hybrid, making it possible to run applications consistently across different environments.

Kubernetes is all about orchestrating containers. Containers package applications and their dependencies into a single unit, ensuring that they run consistently in any environment. Kubernetes takes these containers and automates their deployment, scaling, and management. It handles tasks like load balancing, self-healing, and rolling updates, making it a robust choice for organizations of all sizes.

The key concepts in Kubernetes are pods, services, and deployments:

  • Pods: These are the smallest deployable units in Kubernetes. A pod can contain one or more containers that share the same network namespace (and therefore the same IP address) and can share storage volumes. Pods are used for running application components; a minimal Pod manifest is sketched just after this list.

  • Services: Services are used to expose pods to the network. They provide a stable network endpoint to access the pods. You can think of services as an abstraction layer that load-balances traffic to pods.

  • Deployments: Deployments allow you to declare an application’s desired state and manage its lifecycle. They ensure that the specified number of pod replicas are running and can perform rolling updates and rollbacks.
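
To make the Pod concept concrete, here is a minimal Pod manifest; the name, labels, and image below are illustrative examples only, and in practice you will usually let a Deployment create and manage Pods for you rather than writing them by hand:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # example name
  labels:
    app: hello           # labels let Deployments and Services select this Pod
spec:
  containers:
  - name: web
    image: nginx:1.25    # a Pod wraps one or more containers
    ports:
    - containerPort: 80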

With this basic understanding of Kubernetes, let’s proceed to deploying your first application on a Kubernetes cluster.

Prerequisites

Before we begin, you’ll need a few prerequisites:

  1. Kubernetes Cluster: You should have a Kubernetes cluster up and running. If you don’t already have one, you can set up a local cluster using Minikube or use a cloud-based solution like Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (EKS).

  2. kubectl Installed: Make sure you have the kubectl command-line tool installed and configured to connect to your Kubernetes cluster.
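
Optionally, before moving on, you can sanity-check that kubectl is installed and can reach your cluster with a few standard commands:

kubectl version --client   # verifies the kubectl binary is installed
kubectl cluster-info       # verifies kubectl can reach the cluster's control plane
kubectl get nodes          # lists the nodes in the cluster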

Now that you have the necessary background, let’s start the journey of deploying your first application on Kubernetes.

Step 1: Creating a Deployment

In Kubernetes, a Deployment is a resource that manages a set of identical pods, ensuring that a specified number of them are running at any given time. For our first application, we’ll create a simple NGINX web server deployment. This will allow us to run multiple instances of NGINX in our cluster, providing high availability and load distribution for our web service.

Understanding Deployments

A Deployment is a crucial Kubernetes resource for managing applications. It abstracts the underlying infrastructure and provides a declarative way to define how many replicas of a pod should be running. In our case, the Deployment asks Kubernetes to keep three NGINX replicas running at all times.

YAML Configuration (nginx-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest

This YAML file defines the configuration for our NGINX Deployment, including the desired number of replicas, the label selector, and the pod template.

Applying the Deployment:

To create the NGINX Deployment, use the kubectl apply command, passing in the YAML file as an argument.

kubectl apply -f nginx-deployment.yaml

This will send the configuration to your Kubernetes cluster, instructing it to create and manage the NGINX Deployment.

By creating a Deployment, you’re ensuring that the desired number of NGINX pods are always running, and Kubernetes takes care of distributing them across the cluster, providing fault tolerance and resilience.
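
If you’d like to watch the rollout as it completes, kubectl can wait until all three replicas are available; this step is optional:

kubectl rollout status deployment/nginx-deployment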

Step 2: Verifying the Deployment

After you’ve created the Deployment, you’ll want to check its status to make sure everything is running smoothly. You can do this with the kubectl get command.

Using kubectl get

The kubectl get command is a versatile tool for retrieving information about various Kubernetes resources. Running kubectl get deployments lists the Deployments in your cluster, and you should see nginx-deployment among them. The READY column should read 3/3, and the UP-TO-DATE and AVAILABLE columns should both show the number of replicas you specified in your deployment configuration (3 in our case).

kubectl get deployments

This command provides a quick overview of the state of your Deployment and the number of replicas that are currently running.

This kind of insight into your application’s state is one of the many benefits of using Kubernetes. You can easily check the health and status of your deployments and pods with a simple command.
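
For reference, the kubectl get deployments output for a healthy Deployment looks roughly like this (the AGE value will differ on your cluster):

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           1m

To inspect the individual pods behind the Deployment, you can also run kubectl get pods -l app=nginx, which filters pods by the app: nginx label.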

Step 3: Exposing Your Application

While your NGINX deployment is up and running, you need a way to access it from outside the cluster. In Kubernetes, you can use a Service to expose your application.

Understanding Services

A Service is a Kubernetes resource that provides network access to a set of pods. Services abstract the way pods are accessed by providing a stable IP address and DNS name for them. They also load-balance traffic to the pods they serve.

YAML Configuration (nginx-service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

This YAML file defines the configuration for our NGINX Service. The selector tells the Service which pods to route traffic to (any pod labeled app: nginx), the ports section maps the Service’s port 80 to the pods’ port 80, and type: LoadBalancer requests a stable, externally reachable IP address for the Service.
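
One caveat: type: LoadBalancer relies on your environment being able to provision an external load balancer, which managed offerings such as GKE and EKS do automatically. On a local Minikube cluster the EXTERNAL-IP will typically stay <pending>; in that case you can open the Service through Minikube instead:

minikube service nginx-service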

Applying the Service Configuration:

To create the Service, use kubectl apply again, this time with the nginx-service.yaml configuration:

kubectl apply -f nginx-service.yaml

This will send the configuration to your Kubernetes cluster, creating a Service that routes traffic to your NGINX deployment.

Creating a Service is an essential step in exposing your application to the outside world. It provides a stable endpoint for your application, abstracting the complexity of routing traffic to the correct pods.
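
If you want to peek behind that abstraction, you can list the pod IP addresses the Service is currently routing traffic to:

kubectl get endpoints nginx-service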

Step 4: Accessing Your Application

With your Service in place, you can access your NGINX web server. The first step is to find the external IP of the nginx-service.

Using kubectl get svc

The kubectl get svc command provides information about the Services in your cluster. When you run it for nginx-service, you’ll find the external IP under the EXTERNAL-IP column; on a cloud provider it can take a minute or two for the address to be assigned.

kubectl get svc nginx-service

This external IP is your gateway to accessing the NGINX web server from a web browser or any other HTTP client.
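
Once the EXTERNAL-IP is populated, you can test the server from the command line; replace <EXTERNAL-IP> with the address reported by the previous command:

curl http://<EXTERNAL-IP>/

If everything is working, the response is the default NGINX welcome page.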

By executing these commands, you can easily retrieve the information you need to access your application. Kubernetes abstracts the complexities of networking and load balancing, providing a simple and consistent way to expose your services to the world.

Congratulations!

You’ve successfully deployed your first application on a Kubernetes cluster. This accomplishment marks the beginning of your journey with Kubernetes.

As you continue to explore the world of Kubernetes, you’ll discover more advanced concepts and features, such as managing configuration, implementing rolling updates, and orchestrating complex microservices applications. Kubernetes offers a wide array of tools and capabilities to help you manage containerized applications effectively in various scenarios.

Cleaning Up

Once you’re done experimenting and no longer need the NGINX deployment and service, you can clean up your resources by deleting them.

Deleting Resources

To delete the Deployment and Service, run the following commands:

kubectl delete deployment nginx-deployment
kubectl delete service nginx-service

This will remove the NGINX deployment and the associated service from your cluster, freeing up resources for other projects.
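
Alternatively, since both resources were created from YAML files, you can delete them by pointing kubectl delete at the same manifests:

kubectl delete -f nginx-deployment.yaml -f nginx-service.yaml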

Conclusion

This step-by-step guide has introduced you to the basic process of deploying an application on a Kubernetes cluster. You’ve created a Deployment, exposed it using a Service, and accessed your application. With this knowledge, you’re on your way to becoming proficient in managing containerized applications in a Kubernetes environment.

Kubernetes offers many advanced features and configurations to explore as you become more comfortable with container orchestration. Whether you’re running a simple NGINX server or a complex microservices application, Kubernetes provides the tools and capabilities to manage your applications effectively.

Happy deploying!