How to: connect gridscale Kubernetes Cluster & PaaS

Date: 15.05.2020

Connecting PaaS and Kubernetes

We recently released our Managed Kubernetes solution, which gives our customers completely new opportunities for orchestrating container infrastructures. To get the most out of gridscale Kubernetes, it makes sense to connect it with our PaaS. In the following article, we will show you how to establish this connection easily and efficiently, and what advantages this approach has.

But why do you have to connect things manually in the otherwise fully automated gridscale universe? There are several reasons. First, a purely technical one: our PaaS services communicate exclusively via IPv6, while our Kubernetes solution so far only speaks IPv4. In addition, our Kubernetes offering is not yet feature complete (don't worry, we are working hard on it), so until then we need a temporary solution. As soon as this workaround is no longer necessary, we will of course notify you immediately.

So take note: this type of connection is only necessary until we have integrated the connection of the two platforms into our product.

Establishing a connection

Now for the concrete implementation. A little tip up front: you need a sidecar VM! At this point, we assume that a GSK cluster and a PaaS service have already been created.

The cluster contains a private network by default, and the PaaS is automatically connected to the default security zone. The newly created VM is connected to both networks and can then act as a proxy between them. For this purpose, the HAProxy software is installed on an Ubuntu VM.

If a pod in the cluster wants to access a PaaS service, the IP of the proxy is used as the connection IP, together with the standard port of the desired service.
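
In practice that just means pointing the application's connection settings at the proxy instead of the PaaS itself. A minimal sketch (deployment/myapp and the variable names are hypothetical; 10.244.31.1 is the proxy IP we configure later in this guide, 6379 the Redis standard port):

# Hypothetical: redirect an application to the proxy address
kubectl set env deployment/myapp REDIS_HOST=10.244.31.1 REDIS_PORT=6379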

Creating the VM

A new VM can now easily be created in the public panel. It can be configured minimally; depending on the workload, one CPU core and 2 GB RAM are sufficient. Ubuntu 18.04 is used as the storage template.

Once the VM has been created, it can be connected to the two networks simply by drag and drop. The name of the cluster's private network is based on the name of the cluster. The VM must be connected to both networks:

  • the private network of the GSK cluster and
  • the security zone.

If both networks are set up, the VM can be started.

Configuration of the network within the VM

The MAC address of the individual interfaces can be determined in the public panel. In this way, the interface names can be assigned to the two networks. Example:


interconnect-vm ~ [255] # ip -c a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 06:5d:92:0a:c2:01 brd ff:ff:ff:ff:ff:ff
    inet 45.12.48.205/24 brd 45.12.48.255 scope global dynamic ens16
       valid_lft 2203sec preferred_lft 2203sec
    inet6 2a06:2380:2:1::6c/128 scope global dynamic noprefixroute
       valid_lft 2442sec preferred_lft 2442sec
    inet6 fe80::45d:92ff:fe0a:c201/64 scope link
       valid_lft forever preferred_lft forever
3: ens17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 06:5d:92:0a:c2:02 brd ff:ff:ff:ff:ff:ff
    inet 10.244.31.1/19 brd 10.244.31.255 scope global ens17
       valid_lft forever preferred_lft forever
    inet6 fe80::45d:92ff:fe0a:c202/64 scope link
       valid_lft forever preferred_lft forever
4: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 06:5d:92:0a:c2:03 brd ff:ff:ff:ff:ff:ff
    inet6 fcfc::1:45d:92ff:fe0a:c203/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 86399sec preferred_lft 14399sec
    inet6 fe80::45d:92ff:fe0a:c203/64 scope link
       valid_lft forever preferred_lft forever
interconnect-vm ~ #

In addition to lo, there is the ens16 interface, which is connected to the Internet. ens17 and ens18 are the connections to the cluster private network (IPv4 only) and the security zone (IPv6 only).

The MAC address can be found directly after link/ether; this is compared with the information in the public panel to determine which interface is connected to the GSK cluster private network via IPv4. In this example it is ens17. It follows that ens18 is the interface configured for the security zone on the PaaS side.
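
If you prefer not to read through the full output, the assignment can also be checked directly with a one-liner (the MAC address below is the one from this example; use the address shown in your panel):

# Find the interface that carries a given MAC address
ip -o link | grep -i '06:5d:92:0a:c2:02'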

Set up IPv4

The IPv4 connection can now be set up as follows. We first configure the static IP address for the interface: 10.244.31.1/19.

For this we fill the file /etc/netplan/99_config.yaml with the following content:


network:
  version: 2
  renderer: networkd
  ethernets:
    ens17:
      addresses:
        - 10.244.31.1/19

The configuration is then applied with netplan apply. With a ping 10.244.0.1 we make sure that the connection works.
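
Put together, applying and verifying looks like this:

# Apply the netplan configuration
netplan apply
# Verify connectivity into the cluster private network
ping -c 3 10.244.0.1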

Set up IPv6

Thanks to SLAAC, IPv6 is configured automatically in the security zone, so a ping against the PaaS service works without any problems. We take the IP of the service from the details of the PaaS service.
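
A quick check from the VM (the address below is a placeholder; substitute the IPv6 address from the service details):

# <paas-ipv6> stands for the service IP taken from the panel
ping -6 -c 3 <paas-ipv6>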

Installation and configuration of HAProxy

HAProxy is installed using apt update && apt -y install haproxy. After installation, we move HAProxy's default configuration aside and replace it with a version adapted to our purpose:


cd /etc/haproxy/
mv haproxy.cfg haproxy.cfg.backup
cat >> /etc/haproxy/haproxy.cfg << CONFIGEND
global
    log /dev/log    local0
    log /dev/log    local1 notice
    user    haproxy
    group   haproxy
    daemon

listen ipv6redis
    bind    10.244.31.1:6379
    mode    tcp
    timeout connect     4000
    timeout client      180000
    timeout server      180000
    # <paas-ipv6> is a placeholder for the IPv6 address from the PaaS service details
    server  paasredis   <paas-ipv6>:6379
CONFIGEND
systemctl restart haproxy
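
Before relying on the setup, it is worth verifying that the configuration parses cleanly and that HAProxy is actually listening:

# Validate the configuration file
haproxy -c -f /etc/haproxy/haproxy.cfg
# Confirm HAProxy listens on the proxy IP
ss -tlnp | grep haproxy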

That was it! If a pod of the GSK cluster now accesses a PaaS service (via the proxy server's IP 10.244.31.1), HAProxy forwards the request to the PaaS service and sends the response back to the pod.
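
A quick way to test this end-to-end is a throwaway pod (a minimal sketch assuming a Redis PaaS behind the proxy; the pod name and image are illustrative):

# Start a temporary pod and ping Redis through the proxy; expected reply: PONG
kubectl run redis-test -it --rm --image=redis -- redis-cli -h 10.244.31.1 -p 6379 ping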

If additional services are to be connected, the listen block can simply be copied and adapted. For example, with the following config …


cat >> /etc/haproxy/haproxy.cfg << CONFIGEND

listen ipv6pgsql
    bind    10.244.31.1:5432
    mode    tcp
    timeout connect     4000
    timeout client      180000
    timeout server      180000
    # <paas-ipv6> is a placeholder for the IPv6 address from the PostgreSQL PaaS details
    server  paaspgsql   <paas-ipv6>:5432
CONFIGEND
# Reload HAProxy so the new block becomes active
systemctl reload haproxy

… connects a PostgreSQL PaaS to the cluster.

Conclusion

You have now established a working connection between our gridscale Kubernetes cluster and a PaaS. The setup shown in this example can of course be used not only for PostgreSQL, but for all other gridscale PaaS services as well.

But once more as a reminder: this process is a workaround and only necessary until our Managed Kubernetes is feature complete. We will keep you informed in a timely manner!
