Introduction to Kubernetes
Application containerization has gained popularity over the past few years. As containerized applications became more common, managing large numbers of containers became increasingly difficult. Various systems and platforms were introduced to help with container orchestration, and Kubernetes is one of them.
Kubernetes (commonly known as K8s) is an open-source system that provides a platform for the automated deployment, scaling, and management of containerized applications. It is highly scalable and well suited to large, production-grade applications.
Kubernetes was designed at Google. The platform was open sourced in 2014 and is now managed by the Cloud Native Computing Foundation.
To get a better understanding of the content in this article, I would recommend getting familiar with containers first. Read about containers…
Now, let’s have a look at the architecture and some common terms that will help us understand Kubernetes better.
Architecture and terminology:
The container cluster (see the above diagram) is the foundation of Kubernetes. All applications and objects run on top of this cluster. The following components come together to form a cluster –
– Cluster Master:
A cluster master can be described as the decision maker and process manager for the cluster’s objects. Scheduling, handling authorization and authentication, scaling, managing configurations, and handling anomalies are some of the jobs of the cluster master.
The cluster master has several components, and each one helps fulfill these jobs. These components are kube-apiserver, etcd, kube-scheduler, kube-controller-manager, and cloud-controller-manager.
– Node (Worker Machine):
Worker machines, or nodes, are machines controlled by the cluster master and used to run containerized applications.
A node is part of the container cluster, and a single cluster master can control many nodes. Kubelet, kube-proxy, and a container runtime are the main components of a node. In a cloud environment, each node typically corresponds to a VM instance (for example, a Container Engine VM instance on Google Cloud). A node can host one or more pods. More about Node…
– Pod:
A pod is a group of one or more containers that share the same network namespace and volumes. It is the smallest deployable unit in Kubernetes. If two or more services are tightly coupled, they can be deployed within a single pod; otherwise, each component of an application can be deployed in a separate pod. For example, if an application uses a Tomcat server, a database server, and a proxy server, they can be kept loosely coupled by running each in its own pod. These pods can then interact with each other through services. Kubernetes manages pods rather than managing containers directly. Multiple containers within the same pod can communicate with each other over localhost. More about Pod…
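As a rough sketch, a pod that runs two containers side by side could be declared like this (the names and images here are illustrative, not taken from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod          # hypothetical pod name
  labels:
    app: web
spec:
  containers:
    - name: tomcat       # main application container
      image: tomcat:9
      ports:
        - containerPort: 8080
    - name: log-tailer   # sidecar sharing the pod's network and volumes
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]
```

Both containers share the pod’s IP address, so the sidecar could reach Tomcat at localhost:8080.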
– Namespace:
A namespace can be used to separate your resources or group them into smaller virtual clusters within a large cluster. Each resource can be assigned to a namespace; resource names only need to be unique within a namespace, and policies such as quotas and access control can be applied per namespace. More about Namespace…
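A minimal namespace definition looks like the following (the name `staging` is just an example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
# Other resources can then be placed in it by setting
# metadata.namespace: staging in their own manifests.
```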
– Replication Controller:
A replication controller keeps a specified number of pod replicas running at all times. A user specifies how many replicas of a pod should be running, and the replication controller ensures that this number is always maintained in the system. Whenever a pod encounters an error and gets evicted, a new copy of the pod is created using the pod template defined in the replication controller. This ensures high availability. More about Replication Controller…
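A minimal replication controller might look like this (names and image chosen for illustration); the `replicas` field is the user-defined count the controller maintains:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3            # keep three copies of the pod running
  selector:
    app: web             # pods matching this label are managed
  template:              # pod template used to create replacements
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
```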
– Deployment:
Deployments are somewhat similar to replication controllers and can be considered their successor. They inherit all the properties of a replication controller and add improvements on top, such as rolling updates and rollbacks. The features of Deployment are listed over here. More about Deployments…
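A Deployment equivalent to the replication-controller idea above could be sketched like this (again, names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
```

Changing the image in this spec and re-applying it triggers a rolling update rather than replacing all pods at once.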
– Service:
Services enable communication between pods. Pods cannot rely on IP addresses to reach each other, because the replication controller manages these pods and there is no guarantee that a recreated pod will be assigned the same IP. Services instead use labels to keep track of pods. When a request reaches a service, the service knows which pods match its label selector and forwards the traffic to one of them for further processing. More about services…
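A simple service selecting the pods labelled `app: web` from the earlier examples might look like this (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web        # traffic goes to any healthy pod with this label
  ports:
    - port: 80        # port the service exposes
      targetPort: 8080  # port the container listens on
```

Inside the cluster, other pods can then reach it by the stable DNS name `web-service` (or `web-service.default.svc.cluster.local` in full), regardless of which pod IPs currently back it.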
– Secret:
Security is a prominent aspect of production-level code. Keeping keys, credentials, and other sensitive information in application code is never a good practice. Kubernetes secrets can be used to store these sensitive pieces of information separately. Secrets are stored as key-value pairs with the values base64-encoded (note that this is encoding, not encryption), so the user does not have to type them on the CLI or put them in a configuration file. Another advantage of using secrets is that you don’t have to redeploy your application when a configuration value changes. We can mount these secrets as volumes and use them wherever required. More about Secrets…
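As an illustration, a secret holding hypothetical database credentials could be defined like this (the values shown are just the base64 encodings of "admin" and "password"):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=       # base64("admin")
  password: cGFzc3dvcmQ=   # base64("password")
```

A pod can then mount `db-credentials` as a volume (via a `secret` volume source in its spec), and each key appears as a file the application can read at runtime.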
Now that you are familiar with the terminology, the next step would be to create your first Kubernetes cluster and deploy an application. But for that, you will need an overview of kubectl.
Learn to create a cluster here.
Learn to deploy your application here.
Now let’s have a look at some of the features that Kubernetes provides.
Features of Kubernetes:
Kubernetes has several features, some of which are listed and explained below:
Containerized environment management:
Kubernetes provides containerized environment management. Since containers are independent of host configurations and isolation takes place at the kernel level, they can easily be ported from one machine to another without worrying about issues caused by environmental differences. Containerized application development is popular because containers are lightweight, efficient, and fast.
Orchestration:
In simple words, orchestration can be defined as the execution of a defined workflow: the process of moving from the current state to a desired state. Kubernetes not only provides storage orchestration but also manages a cluster of containers very efficiently.
Horizontal scaling:
You can create replicas of pods as load increases without having to worry about distributing the traffic yourself: services balance the load across pod replicas, and an Ingress can route and balance external HTTP traffic.
Self-healing:
Kubernetes monitors pods for failures and, through a replication controller, recreates a pod when one fails. It manages the scheduling of pods onto nodes and restarts containers that fail.
Container communication:
Communication between containers within the same pod takes place over localhost, whereas communication with a pod from outside is handled by services.
Service discovery and load balancing:
Kubernetes pods are created by replication controllers and scheduled onto a node with sufficient resources; a pod can be scheduled to any node. Kubernetes gives pods their own IP addresses and services their own DNS names. DNS-based service discovery is available as a cluster add-on.
These are some of the very useful features of Kubernetes that make it interesting and easy to use. However, in my view, its most useful quality is that it is very well documented, which makes it easier to understand and implement.
I know I have barely scratched the surface with these descriptions, but if you want to dig deeper, please browse through the docs here or visit the playground for interactive tutorials here. We will be adding more blogs to cover some crucial aspects of Kubernetes in detail. Let us know about your preferences in the comments section below.
About CauseCode: We are a technology company specializing in Healthtech related Web and Mobile application development. We collaborate with passionate companies looking to change health and wellness tech for good. If you are a startup, enterprise or generally interested in digital health, we would love to hear from you! Let's connect at firstname.lastname@example.org