Cloud infrastructures and microservice-oriented development paradigms have fundamentally changed software development in recent years. Instead of extensive monolithic releases, the emphasis is now on fast executability and easy distribution. Container technologies offered as a service bring the advantages of cloud computing to bear here. The offerings of cloud providers are manifold, which raises the question: do managed all-in-one offerings make sense, or are small tools sufficient to operate container-based applications?
Every application needs a runtime environment, system libraries and the like. Virtualization has already made deployment much easier: a virtual machine (VM) emulates the required physical IT environment, and the virtualization layer (hypervisor) forwards requests for CPU power, storage, network resources and so on to the underlying hardware. The advantage: an application can be started directly within the VM without the host itself having to meet the application's requirements. The disadvantage: the VM contains an entire operating system and is usually several gigabytes in size. Every application that needs different conditions also needs its own VM, and because an operating system has to boot each time, this approach consumes comparatively large hardware resources.
This is where container technology comes in. Applications are packaged together with their configurations and dependencies in a standard format - the container image. In contrast to a VM, a container contains only parts of an operating system and essentially uses the operating system of the host. For this reason, containers are usually much slimmer than VMs. Nevertheless, containers also need an abstraction layer, similar to the hypervisor of a VM, to access the host operating system. In Docker, the best-known and probably most widely used container technology, the Docker Engine takes on this role. It must be installed on the host on which the containerized application is to run, since it controls the starting and stopping of the containers. Because the application is encapsulated from the underlying infrastructure, it can run on any platform that supports the container technology. And because no operating system has to be booted first, containers also start much faster than VM-based applications.
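How an application and its dependencies end up in such a container image can be illustrated with a minimal Dockerfile sketch (the base image, file names and application here are hypothetical examples, not from the article):

```dockerfile
# Slim base image instead of a full operating system (hypothetical Python app)
FROM python:3-slim

# Copy the application and its declared dependencies into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# Command the container executes when it starts
CMD ["python", "app.py"]
```

Built once with `docker build -t myapp .`, the resulting image can be started with `docker run myapp` on any host that has a Docker Engine - regardless of which libraries are installed on the host itself.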
The Docker Engine, at least in its Community Edition, is readily available from the Docker homepage and installed in just a few steps - a good way to gain initial experience. In practice, however, the container ecosystem usually grows quickly: dependencies between containers arise and the demands on the infrastructure increase. Orchestration of the containers - i.e. the automatic setup, operation and scaling of container-based applications - then becomes important, especially if the applications consist of several containers and run on different hosts or clusters, i.e. if a large application has been broken down into microservices.
In the meantime, several orchestration tools with quite different approaches have become established. While Docker Swarm and HashiCorp's Nomad, for example, are particularly suitable for orchestrating small and medium-sized workloads on a manageable number of servers, Apache Mesos and Kubernetes have also proven themselves with larger workloads. Many experts currently regard Kubernetes in particular as the de facto standard. The open-source tool supports several container-based virtualization systems - an advantage over most specialized tools - and a large, active developer community is constantly improving it.
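The declarative approach behind such orchestration can be sketched with a minimal Kubernetes manifest (the names and image below are hypothetical): the user declares a desired state - here, three replicas of a container - and Kubernetes continuously reconciles the cluster toward that state.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # hypothetical application name
spec:
  replicas: 3                   # desired state: three identical containers
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: example/web-frontend:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

If a container or an entire host fails, the orchestrator restarts or reschedules the missing replicas automatically - the automatic setup, operation and scaling described above.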
CaaS (Container as a Service) offerings bundle all of this. These are offerings from cloud providers who provide container-based virtualization as a scalable service via their platforms. As usual in cloud computing, such services enable users to use container technology without having to provide infrastructure themselves. In concrete terms this means: the provider supplies the runtime environment including libraries and configurations, the underlying infrastructure resources and often also the orchestration tool. Communication between user and provider takes place via an API or a graphical user interface (GUI).
Those who do not want to operate their container platform themselves face countless offerings that are difficult to compare. There are still hardly any standards in container orchestration, and many providers have several solutions for the same problem. For example, some providers from the managed-hosting environment act as resellers of mostly American hyperscalers: the hyperscalers' container platforms are then sold not only by the hyperscalers themselves via their own cloud offerings, but also by other providers. This concerns Google Container Engine (GKE), Amazon EC2 Container Service (ECS) and Microsoft Azure Container Service (ACS).
At the same time, companies can put together their own platform with such resellers, or book each service individually as a cloud service from the tool manufacturers themselves. For example, an environment consisting of Docker as the container technology, Kubernetes as the orchestration tool and the Kubermatic web interface from the Hamburg-based company Loodse is a good basis for developing and managing container-based applications. Assembling individual components instead of choosing a managed complete package requires know-how and experience, however, and therefore warrants careful consideration.
Although this allows users to follow their own preferences and choose the most powerful tools, selecting, configuring and integrating them takes time. Specialized companies, by contrast, usually have more practical experience and can point out future stumbling blocks right from the start - but in practice such IT and development specialists are rather scarce. Those who book each tool as a separate service quickly find themselves facing a confusing multi-cloud environment that demands ever more management effort. The advantages of containerization - fast, cost-effective and uncomplicated provision of applications - are soon wiped out.
Native cloud providers (i.e. providers of so-called "cloud-native" services developed specifically for the public cloud) such as the aforementioned hyperscalers naturally also offer their CaaS solutions on their own platforms. With their managed complete solutions, however, they are by no means alone on the market. Cloud service providers specializing in the requirements of medium-sized companies also offer powerful managed services for containers. For example, they offer free, ready-to-use Kubernetes environments in which only the IaaS resources required for the container workloads are billed by the minute and based on usage.
What all native cloud providers have in common is that their solutions fit into their respective service worlds and the individual tools are well integrated. The providers take care of the continuous development and updating of the offering, operations, capacity management, platform monitoring and necessary security updates. They differ mainly in how turnkey a solution is: sometimes decisions remain to be made, and additional services such as an authentication service, a load balancer or persistent storage still have to be integrated.
Equally significant differences can be seen in the user interface. Most native cloud providers do without an easy-to-use GUI; instead, the user has numerous options for configuring the platform and controlling the workloads. In practice, however, finding the optimal configuration proves difficult, especially since individual support is naturally not a given with hyperscalers. Other providers exploit this gap: they give Kubernetes users, for example, a user-friendly, intuitive interface in which sensible defaults are already set, so that the most frequently chosen components such as load balancer, persistent storage and scheduler are pre-selected. This allows users to get started within minutes and to fully enjoy the benefits of a managed solution.
The original article in German can be found here.