16.06.2020 | by Henrik Hasenkamp
As an open source technology, Kubernetes has become the de facto standard for orchestrating microservices and containerized applications and for rapidly deploying cloud-native workloads. What can Kubernetes do, and what are Kubernetes-as-a-Service offerings good for?
The paradigms in software development have changed fundamentally. On the one hand, mature infrastructure-, platform- and software-as-a-service offerings (IaaS, PaaS, SaaS) ensure that the required architecture and the corresponding services are available from the cloud faster and more predictably. From the abstraction of highly complex IT infrastructures to intelligent services such as load balancing, databases and other high-traffic applications, companies can book almost anything as a service. On the other hand, hardly any applications are designed today as large monolithic releases; instead, microservice-oriented models are spreading. Smaller software modules provide API (Application Programming Interface) endpoints to make data and metrics available to other modules. This breaks the software down into many small parts that function independently and interact with other applications.
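As a hedged illustration of such a module, the sketch below uses only the Python standard library to expose a single read-only API endpoint. The route name, port and payload fields are assumptions chosen for the example, not taken from the article.

```python
# Minimal sketch of a microservice module exposing one API endpoint.
# Endpoint name, port and payload are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MetricsHandler(BaseHTTPRequestHandler):
    """Serves a single read-only endpoint that other modules can consume."""

    def do_GET(self):
        if self.path == "/metrics":
            # Hypothetical metrics payload another module might read.
            body = json.dumps({"status": "ok", "requests_served": 42}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet; no request logging

def make_server(port=8080):
    """Bind the handler to localhost; call .serve_forever() to run it."""
    return HTTPServer(("127.0.0.1", port), MetricsHandler)
```

In a real system, each such module would be developed and deployed independently, with other services calling its endpoint over the network.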
Everything you need in one container
While microservice-style software development reduces overall complexity - multiple teams can work simultaneously on different modules with fewer functions and deliver results more quickly - it also demands a new approach to how software is delivered. For these individual modules to be executed later, they must carry the appropriate parameters with them - such as the infrastructure required for execution and dependencies on other services. This paradigm, often referred to as "cloud-native", can be implemented through containerization. A container is a standard unit of software that packages, in addition to the code, its dependencies such as runtime, system tools, system libraries and settings. This way, the software can be deployed in an uncomplicated way. Unlike virtual machines, containers do not bring their own operating system at their virtualization level, so they are smaller and start faster.
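A Dockerfile is one common way to describe such a package. The sketch below is purely illustrative; the file names (`service.py`, `requirements.txt`), the base image and the port are assumptions for the example.

```dockerfile
# Hypothetical image definition for a small Python module (names are illustrative).
# The base image brings the runtime; nothing is taken from the host OS.
FROM python:3.12-slim
WORKDIR /app
# Dependencies are baked into the image, so every environment runs the same code.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Application code, settings and entry point travel with the container.
COPY service.py .
EXPOSE 8080
CMD ["python", "service.py"]
```

Because runtime, libraries and settings are all inside the image, the same container runs identically on a laptop, a test server or in the cloud.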
Making containers play together: orchestration
In practice, the number of containers usually grows rapidly. To manage, scale and deploy them, an orchestration platform soon becomes not only useful but necessary. With Kubernetes, a technology has established itself in recent years that solves some of the central challenges in container orchestration. The market offers numerous other platforms - each with advantages and disadvantages. Some are closely tied to a specific container technology, such as Docker Swarm, which was developed primarily for managing Docker containers, and are therefore subject to restrictions. Other tools, such as Nomad from software vendor HashiCorp, are better suited to small and medium-sized workloads.
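To make the orchestration idea concrete, a minimal Kubernetes Deployment manifest might look like the sketch below. The names, labels and image reference are illustrative assumptions; the key point is that Kubernetes continuously reconciles the cluster so that the requested number of replicas stays running.

```yaml
# Hypothetical Deployment manifest; all names and the image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                 # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: example-service
        image: registry.example.com/example-service:1.0   # assumed image location
        ports:
        - containerPort: 8080
```

If a container crashes or a node fails, the platform starts a replacement automatically; changing `replicas` scales the service up or down.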
Kubernetes supports several container technologies and owes its popularity to its open source approach. Originally developed by Google, the project was handed over to the Cloud Native Computing Foundation in 2015. Since then, a broad community has been working alongside well-known manufacturers on further development, new interfaces and services. For development teams, setting up their own Kubernetes installation is often the logical first step, and a practical way to gain initial experience with container orchestration. As open source software, Kubernetes is available to everyone, tutorials are available from numerous sources, and the developer platform GitHub hosts a large number of related projects. However, this path is not trivial and requires some know-how. At the latest when disruptions occur during operation, storage systems need to be scaled horizontally (scale-out), or continuous integration and delivery are to be automated further, in-house operation becomes a challenge.
An alternative is to use a container-as-a-service offering from a managed hosting or cloud provider. The advantage of this approach is that the service provider takes care of all infrastructure-related issues, including scaling and availability. The offerings of the various providers differ: some act more like resellers, where customers select and configure their own tools. Others offer highly automated complete packages that combine several coordinated tools. Depending on requirements, the portfolios range from simple container technologies to managed Kubernetes solutions. Cloud market leader Amazon, for example, relies on its own development, the Amazon Elastic Container Service (ECS).
Most providers give their customers a wide range of configuration options. From an economic perspective, too, managed complete offerings are usually a good choice - at least when billing is based solely on the infrastructure resources actually consumed, which eliminates separate acquisition and licensing costs. This is the rule with almost all cloud-native providers.
Which approach is ultimately more suitable or more economical depends on many factors: the skills of a company's own employees play just as much of a role as the available resources and the general attitude towards using services from the cloud. Especially when the first two are in short supply, those responsible should examine managed services for containers and their orchestration.
The original article in German can be found here.