Kubernetes: A Deployment Voyage

Coming to Newicon as an intern for three weeks I was offered several isolated projects to dive into – to my great pleasure almost all of them revolving around topics I had very little experience with. One of them: the container orchestrator, Kubernetes.

Why Kubernetes?

Deployment in its traditional form is not the most fun thing to do even if everything works out as planned. Setup costs time and money, and as requirements become more complex – scaling, microservices frameworks, load balancing – solutions become more difficult to manage predictably.

One of the approaches to managing this complexity is stretching the concept of project encapsulation to the deployment level, and modelling a project's components and the interactions between them in a declarative way. A container solution – Docker or rkt, for example – can encapsulate applications; the dependencies between these containers, and their relation to the outside world, can then be described in configuration.

The vision of a guaranteed-to-run and easy-to-manage deployment is something that Kubernetes promises to put into reality.

In the following, I will try to give a rundown of how this is achieved.

Containers

A container describes an application image in which the application and all its dependencies are packaged. Unlike a similar configuration running outside of a container, an application running in a container cannot see anything outside the package contents unless specified otherwise, creating the desired isolation with a minimum of resources.

As a result, applications no longer have to be deployed on a preconfigured system. Instead, a container – the application and its dependencies, which can be described in configuration and tested – is added to the system. It becomes possible to widen the concept of version control to the actual runtime environment, giving companies powerful tools to roll out software.

Container Orchestration

Enterprise-level applications are often composed of multiple components that interact with each other in order to achieve the desired result. If these components are containerised, container orchestrators come into play.

At first sight, it seems that a container orchestrator is simply supposed to group different containers into one unit that delivers the same functionality as a non-containerised deployment of an application stack. When you tell the container orchestrator to deploy the unit, it basically takes on the role of an all-in-one installer.

Container orchestrators are capable of doing a lot more than that though.

Kubernetes is one of the most prominent container orchestrators. In the following part I will look into how this container orchestrator works and what it can do for you.

Kubernetes Concepts

Kubernetes is traditionally set up as a cluster of various machines called ‘nodes’.

Functionally, you differentiate between worker-nodes, on which your containers run, and master-nodes, which hold the configuration of your cluster and apply it to the worker-nodes.

Containers managed by Kubernetes aren’t directly run on a worker-node but instead are wrapped in so-called pods. Pods provide the infrastructure to enable easy network access within the cluster and can hold multiple containers and associated resources like storage.
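To make this concrete, here is a minimal pod sketch – the names, images, and the shared volume are illustrative assumptions, not taken from a real project – with two containers and a piece of pod-local storage:

```yaml
# A hypothetical pod wrapping two containers and a shared volume
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar      # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.15       # main application container
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-writer
      image: busybox          # sidecar container writing into the shared volume
      command: ["sh", "-c", "echo 'hello from the sidecar' > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      emptyDir: {}            # scratch storage that lives as long as the pod does
```

Both containers share the pod's network namespace and the volume, which is exactly the kind of tight coupling a pod is meant to express.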

[Graphic: a deployment of various pods on a single worker-node, taken from the Kubernetes Basics tutorial]

Each pod possesses a unique IP within the cluster, at which it can be addressed quite easily.

The kubelet daemon running on each node of the cluster provides an interface for the Kubernetes API and ensures that the pods assigned to its node are running as configured.

Additionally, the daemon of the container runtime has to run on each node. Kubernetes currently supports Docker and rkt.

The configuration of a pod consists of the definitions of containers and volumes, as well as metadata related to the pod. This metadata consists of a name as an identifier and an arbitrary number of labels.

Labels are configured as key:value pairs and may be used to arrange pods into logical groups that can be identified through the label. For example, a pod containing a WordPress container and a different pod holding the associated database container may both be labelled app:wordpress.
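As a minimal sketch – the pod names, images, and credentials are illustrative assumptions – the two pods might be configured like this:

```yaml
# Hypothetical WordPress pod
apiVersion: v1
kind: Pod
metadata:
  name: wordpress-frontend
  labels:
    app: wordpress            # shared label grouping related pods
spec:
  containers:
    - name: wordpress
      image: wordpress:4.9
---
# Hypothetical database pod carrying the same label
apiVersion: v1
kind: Pod
metadata:
  name: wordpress-db
  labels:
    app: wordpress
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: change-me    # placeholder – use a Secret in practice
```

All pods carrying the label can then be addressed as one group, for example with kubectl get pods -l app=wordpress.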

Pods can be configured on their own, but in order to have Kubernetes manage your pods, you need to create a configuration file for a 'deployment'.

A deployment includes a template for the pod and its content, as well as its own metadata section in which you can define labels.

The advantage of configuring a deployment is that it allows you to manage all pods belonging to the deployment instead of managing each pod on its own.

This includes defining and scaling the number of pods that should be generated, as well as automatically rolling out any update to the pod definition itself.

Most notably though, defining a deployment will cause Kubernetes to make sure that the definition is met and reschedule pods on node-failure, granting high availability in clusters where pod-replicas are spread over nodes.
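Putting this together, a deployment manifest for the illustrative WordPress pod above might look like the following – again a sketch with assumed names, not a definitive setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-deployment  # illustrative name
spec:
  replicas: 3                 # Kubernetes keeps three replicas of the pod running
  selector:
    matchLabels:
      app: wordpress          # which pods this deployment manages
  template:                   # the pod template each replica is created from
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:4.9
          ports:
            - containerPort: 80
```

Scaling up or down is then just a matter of changing replicas and re-applying the file; Kubernetes converges the cluster towards whatever the definition says.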

The ability to scale the number of pods of a kind up and down creates a communication problem between related pods, as the IPs of individual pods will change over time.

In order to get around this problem, Kubernetes implements services that address pods by their labels, sidestepping any networking issues caused by pods being destroyed and recreated.
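A service for the illustrative WordPress pods from above could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service     # illustrative name
spec:
  selector:
    app: wordpress            # routes traffic to whichever pods carry this label
  ports:
    - protocol: TCP
      port: 80                # port the service exposes within the cluster
      targetPort: 80          # port the selected pods listen on
```

Other pods address the service by its stable name or cluster IP; which concrete pod answers is decided at request time, so pods can come and go freely behind it.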

Conclusion

It should be noted that this is a very brief overview of Kubernetes' core concepts. There are a lot of other things to take into consideration – defining and accessing storage across the cluster, for example, or even configuring the network so that Kubernetes runs at all.

Fortunately major cloud providers already provide Kubernetes clusters out of the box to let you focus on mapping your actual server landscape to Kubernetes.

This however still seems to be quite a task – for example, the assignment of sensible labels throughout the cluster should be planned carefully ahead of time.

Once this hurdle has been taken, though, Kubernetes promises to be a highly reliable and convenient environment for deploying software in the cloud.

Diving into this topic was quite exciting for me personally. Using the testing and development platform minikube, I was able to follow tutorials on a virtual machine, which gave me the opportunity to experiment with configurations – and there are plenty of options. I imagine it is difficult to keep an overview of all of these options, but it is something that definitely sets Kubernetes apart from other container orchestrators, like Docker Swarm. It might not be as complex if you have a running system to use as a reference.

Of course, the first step in moving to Kubernetes is containerising your existing applications. And that certainly promises to be a valuable exercise, whether you end up using a container orchestrator or not.

