Kubernetes – an Overview

Kubernetes is an open-source system that automates the deployment and management of containerized applications.

Containerization is an alternative to virtual machines in which an application is encapsulated in its own operating environment. Docker is an open-source container platform designed to achieve containerization.

Containerization helps package software for frequent deployments and 24x7 availability, enabling applications to be deployed quickly and easily, without downtime.
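As a minimal illustration (assuming Docker is installed, a Dockerfile exists in the current directory, the application listens on port 8080, and my-app is just a placeholder name), an application can be packaged into a container image and run locally with:

docker build -t my-app .
docker run -d -p 8080:8080 my-app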

Kubernetes helps manage containerized applications together with their tools, resources, and environment.

Basic Modules in Kubernetes –

1. Creating a Kubernetes cluster

A group of computers that are available and connected to work as a single unit forms a cluster. Kubernetes automates the distribution and scheduling of application containers across a cluster in an efficient way.

A Kubernetes cluster consists of two types of resources:

  • Master Node
  • Worker Nodes

The Master Node is responsible for managing the cluster, while the Worker Nodes handle container operations and communicate with the Master Node.

Communication between the Worker Nodes and the Master Node happens through the Kubernetes API. End users can also communicate with the cluster through this API.
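End users typically reach this API through the kubectl command-line tool. For example (assuming kubectl is installed and configured to point at the cluster), the following command asks the API where the control plane is running:

kubectl cluster-info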

To start with Kubernetes development, we use Minikube.

Minikube is a lightweight implementation of Kubernetes that creates a VM on our local machine and deploys a simple cluster with one node. The Minikube CLI provides basic bootstrapping operations for working with your cluster, including start, stop, status, and delete.

Start the cluster by using this Minikube command:

minikube start
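Once the cluster is up, it can be verified with the following commands (assuming kubectl is installed alongside Minikube), which should report a running cluster with a single node:

minikube status
kubectl get nodes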

2. Deploying an Application

Once we have the Kubernetes cluster running, we can deploy containerized applications on top of it using kubectl, which uses the Kubernetes API to interact with the cluster. After deployment, the Kubernetes master schedules the requested application instances onto individual Nodes in the cluster.
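As a sketch (the Deployment name hello-app is a placeholder, and nginx is used here simply as a readily available example image), an application can be deployed and checked with:

kubectl create deployment hello-app --image=nginx
kubectl get deployments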

3. Exploring

After a Deployment is created, Kubernetes creates a Pod to host the application instance.

A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers.

Those resources include:

  • Shared storage, as Volumes
  • Networking, as a unique cluster IP address
  • Information about how to run each container, such as the container image version or specific ports to use

Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion.

A Pod always runs on a Node. In case of a Node failure, identical Pods are scheduled on other available Nodes in the cluster.
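The usual way to explore the Pods created by a Deployment is with kubectl (a sketch; <pod-name> is whatever name kubectl get pods reports, and the last command assumes the container image ships a shell):

kubectl get pods
kubectl describe pods
kubectl logs <pod-name>
kubectl exec -it <pod-name> -- /bin/sh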

4. Exposure to End Users

Once the application is deployed and explored, we need to expose it to end users, and Services help with this.

Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:

  • ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
  • NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
  • LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
  • ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.
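For example (a sketch, reusing the placeholder hello-app Deployment and assuming its container listens on port 80), a NodePort Service can be created and listed with:

kubectl expose deployment hello-app --type=NodePort --port=80
kubectl get services

On Minikube, minikube service hello-app --url prints a URL that reaches the Service from the host machine.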

5. Scaling the App

Scaling is accomplished by changing the number of replicas in a Deployment. Scaling out a Deployment ensures that new Pods are created and scheduled to Nodes with available resources. Scaling increases the number of Pods to the new desired state.
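For example (reusing the placeholder hello-app Deployment), the Deployment can be scaled to four replicas and the resulting Pods listed with:

kubectl scale deployment hello-app --replicas=4
kubectl get pods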

6. Updating the App

Users expect applications to be available all the time, and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling updates allow a Deployment's update to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods are scheduled on Nodes with available resources.
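As a sketch (hello-app is still the placeholder Deployment; its container name defaults to nginx because it was created from the nginx image, and nginx:1.25 stands in for the new version), a rolling update can be triggered, watched, and rolled back with:

kubectl set image deployment/hello-app nginx=nginx:1.25
kubectl rollout status deployment/hello-app
kubectl rollout undo deployment/hello-app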

 

Thus, Kubernetes is gaining a lot of attention these days and is proving very useful in cloud environments.
