Kubernetes: Containers as a service or orchestrating them yourself?

15.06.2020

20.06.2020 | by Henrik Hasenkamp

Kubernetes often serves as the basis for container management. This is due, among other things, to its fast integration and its ability to automate application deployment.

Where the focus in the past was on large monolithic offerings, today it is on solutions that can be implemented quickly and are easy to use. Services delivered from the cloud can meet these demands, including infrastructure, platform and software as a service (IaaS, PaaS, SaaS). The focus has also shifted to microservices. This approach to building software systems composes applications from a collection of small services. Thanks to well-defined interfaces, each service or module can be used and modified independently of the others. Companies can work in a more agile way, often with only a small team, and the software scales much better.

Making microservices executable with containers

The fact that microservices are developed independently of each other and of the overall system reduces complexity. Each module can be modified by a different team at the same time, without downtime. The entire development process becomes more agile and requires less coordination. Even the programming language can vary between microservices.

What is still missing, however, are the dependencies, such as system libraries and runtime environments, that the individual services need in order to run. This is where container technology comes into play. The application and all the system components it requires are bundled into one unit, the container. This simplifies deployment and allows the application to run on any platform that supports container technology.
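How an application and its dependencies end up in one deployable unit can be sketched with a container image definition. The following Dockerfile is a minimal, hypothetical example; the base image, file names and port are illustrative assumptions, not details from the article:

```dockerfile
# Hypothetical image definition for a small Node.js microservice.
# The base image brings the OS libraries and the Node.js runtime.
FROM node:14-alpine

WORKDIR /app

# Install the service's library dependencies first (better layer caching).
COPY package.json package-lock.json ./
RUN npm install --production

# Add the application code itself.
COPY . .

# The port the service listens on and its single entry point.
EXPOSE 8080
CMD ["node", "server.js"]
```

Built once with `docker build`, the resulting image runs unchanged on any host that provides a container runtime.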

Developers can therefore run their applications in different environments and move them back and forth as they wish. Because containers use the host operating system instead of bringing their own, they are significantly leaner and start faster than virtual machines.

Systems for container orchestration

Since a company's container ecosystem usually grows rapidly and the number of containers increases, orchestration becomes more important. In addition, the containers of most applications are distributed across different virtual and physical hosts, which can be located in the company's own data center or in a cloud environment. In these cases, an orchestration platform for automatic setup, scaling and operation is recommended.

One of the best-known orchestration solutions is the open source system Kubernetes. Originally published by Google, the technology has since been developed further by a large community. The management environment Kubernetes provides makes it possible to automate and simplify the deployment, operation, maintenance and scaling of containerized applications.

The so-called Kubernetes master is the component responsible for controlling the container clusters (see Figure 1). For efficient management, containers can be grouped into Pods and thus share resources such as an IP address or storage.
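What such a grouping looks like can be sketched with a Pod manifest. The following example is hypothetical (the names, images and mount paths are illustrative assumptions): two containers run in one Pod, reach each other via the Pod's shared IP address, and exchange files through a shared volume.

```yaml
# Hypothetical Pod manifest: two containers grouped in one Pod,
# sharing the Pod's IP address and a common volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar
spec:
  volumes:
    - name: shared-logs        # storage shared by both containers
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.19
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-collector      # sidecar reads what the web container writes
      image: busybox:1.32
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```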

The Pods are executed on physical or virtual machines called nodes. Automation works as follows: the master receives the administrator's commands and passes them on to the nodes, automatically deciding which node is suitable for the task at hand.
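This division of labor between administrator and master can be sketched with a Deployment manifest. The administrator only declares the desired state; the master's scheduler then picks suitable nodes for each replica. The manifest below is a hypothetical example (service name, image and resource figures are illustrative assumptions):

```yaml
# Hypothetical Deployment: the administrator declares three replicas,
# the master's scheduler decides on which nodes they run.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-service
  template:
    metadata:
      labels:
        app: demo-service
    spec:
      containers:
        - name: demo-service
          image: registry.example.com/demo-service:1.0
          resources:
            requests:
              cpu: "250m"      # the scheduler matches such requests
              memory: 128Mi    # against the free capacity of each node
```

Handed to the master with `kubectl apply -f deployment.yaml`, the desired state is maintained automatically: if a node fails, the affected replicas are rescheduled elsewhere.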

Service offerings for Kubernetes integration

As open source software, Kubernetes is freely available, which gives development teams the option of setting up their own installation. A large number of tutorials on the topic are available, among other places, on the official website, and the GitHub development platform also hosts numerous resources for creating your own Kubernetes installation.

However, not every company has a large development team, and not every development team has the necessary expertise. Especially when it comes to specific problems or custom development, inexperienced developers can quickly reach their limits.

Container-as-a-service offerings from cloud providers can help. The service provider takes responsibility for the scaling and availability of the Kubernetes infrastructure. However, the offerings vary greatly, and companies must choose according to their individual needs: they range from fully automated solutions that bundle coordinated tools into one package to reseller models in which the company itself is responsible for configuration and tool selection. In addition to large corporations such as Amazon, smaller providers also support container orchestration.

These include the Managed Kubernetes service of the IaaS and PaaS provider gridscale. Load balancer, persistent storage and scheduler are among the most important settings in a Kubernetes architecture, and in Managed Kubernetes they are preconfigured to match the workload. Depending on employee skills, openness to cloud offerings and the available budget, different approaches may be the best choice. Especially when resources are scarce, an aspect that keeps many companies from adopting containers, managed services can be a worthwhile investment and a simple solution.

The original article in German can be found here.
