Getting started with the gridscale Load Balancer
Everything you need to know about the gridscale Load Balancer
The gridscale Load Balancer is a highly available service managed by us that lets you use your infrastructure even more economically. A cloud load balancer takes care of efficiently distributing client requests across several application servers. In modern infrastructures, it is best practice to add more servers in order to react to new requirements such as high traffic. The advantages of a cloud load balancer lie above all in its simple operation. A cloud setup with a load balancer stands out through increased performance, flexibility, and reliability.
In this article, I will show you a first use case “Hello, Load Balancer” and then introduce you to all the individual features and configuration options of the gridscale Load Balancer in detail.
For a quick start, you can jump directly to the Load Balancer Configuration section via the table of contents or click on the respective configuration step to learn more about it.
I’ll start by answering two questions:
– Which features distinguish the gridscale Load Balancer?
– What are the application areas of a load balancer?
gridscale Load Balancer Feature Overview
Like our gridscale Panel, our load balancer stands out through its simplicity. You can define the forwarding rules to your backend servers with just a few mouse clicks, and connecting your target servers is just as easy.
Our load balancer works perfectly with the well-known SSL certificates from Let’s Encrypt. Simply set the DNS of your domain to the IP address of the load balancer and let Let’s Encrypt create a certificate automatically with one click. We will also automatically renew the certificate for you.
You can also connect external servers to your load balancer via IP address or hostname. This gives you maximum freedom in connecting your resources and lets you set up your infrastructure even more flexibly.
Weighting of backend servers
In the Expert Panel View, you can determine the weighting with which your load balancer distributes the load to the backend servers.
Load Balancer Use Cases
One of the use cases in which a load balancer is used very frequently is the distribution of HTTP requests to a group of application servers. Load balancing across multiple servers improves performance and minimizes the risk of failure. A classic scenario is, for example, the interception of traffic peaks (e.g. through the placement of TV advertising) without performance losses. Thanks to the simplicity of the cloud, IT can scale the infrastructure individually and adapt it to the level of demand.
But this is not the only use case for a load balancer. In the following, I will introduce you to further useful applications of a load balancer:
High availability of systems has become a fundamental factor in the cloud environment – and not only when it comes to critical systems.
Regular health checks on the connected cloud servers increase availability. Servers that cannot be reached are automatically removed from the server pool by the load balancer.
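To make the idea concrete, here is a small sketch of a reachability probe in the spirit of such a health check, using curl. The address is a placeholder, and the real checks are of course performed by the load balancer itself:

```shell
# Illustration only: a minimal reachability probe, similar in spirit
# to a load balancer health check.
check_backend() {
  # Prints "up" if the URL answers within 2 seconds, otherwise "down".
  if curl -fsS --max-time 2 -o /dev/null "$1"; then
    echo "up"
  else
    echo "down"
  fi
}

check_backend "http://192.0.2.10/"   # placeholder backend address
```

A backend that reports "down" would be taken out of rotation until it becomes reachable again.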
Blue-Green Deployment is an application development approach. New applications or changes to the software can thus be tested within the productive cloud environment. In this case, the load balancer is used to redirect the traffic to the new version of the development environment. The advantage is that the load balancer can simply route back to the old environment should unexpected problems occur.
Use-Case Hello, Load Balancer!
In the following application example, you create a load balancer that distributes HTTP requests to the underlying application servers.
1. Prepare the backend servers

What you need:
- 2 active cloud servers in your gridscale Panel
- an active nginx web server on both cloud servers
The example requires two cloud servers, which serve as the backend servers for the load balancer. I created the two cloud servers with a few clicks from a gridscale template with Ubuntu 18.04 LTS. Both cloud servers also need an installed nginx web server.
I used nginx_server1 and nginx_server2 as hostnames.
After you have created the cloud servers, start by bringing them up to date:
apt update -y && apt upgrade -y
Then install nginx on your cloud servers with the following command:
apt install nginx
Afterwards, a different HTML file must be stored in the web root directory of each server so that the two servers can be distinguished in the later demonstration.
Change to your web root directory (on Ubuntu, nginx serves /var/www/html by default):

cd /var/www/html
In this directory, create a new file index.html with an editor of your choice (nano, vim, etc.), write the following content, and save it:
<!DOCTYPE html>
<html>
  <head>
    <title>Hello, Load Balancer!</title>
  </head>
  <body>
    <h1>Server 1: Hello, Load Balancer!</h1>
  </body>
</html>
On your second host, write “Server 2: …” instead of “Server 1: …” into the HTML file.
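If you prefer to create the file non-interactively, a heredoc does the same job. The path assumes the default nginx web root on Ubuntu; the fallback to the current directory is only there so you can try the snippet outside the server:

```shell
# Write the demo page without opening an editor.
WEBROOT=/var/www/html                     # default nginx web root on Ubuntu
[ -d "$WEBROOT" ] && [ -w "$WEBROOT" ] || WEBROOT=.   # fallback for local testing
cat > "$WEBROOT/index.html" <<'EOF'
<!DOCTYPE html>
<html>
  <head>
    <title>Hello, Load Balancer!</title>
  </head>
  <body>
    <h1>Server 1: Hello, Load Balancer!</h1>
  </body>
</html>
EOF
echo "wrote $WEBROOT/index.html"
```

On the second server, change the heading to “Server 2: …” before running it.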
Optionally, you can delete nginx’s default welcome page from the web root (on Ubuntu, the package installs it as index.nginx-debian.html):

rm /var/www/html/index.nginx-debian.html
That’s it. All server preparations for the demonstration are done, and you can start configuring the load balancer.
2. Open the Load Balancer menu
You can access the Load Balancer menu – like the other functions – via the side menu within your gridscale panel.
In the Load Balancer area click on “Add Load Balancer”.
This will open a configuration menu for your new load balancer.
3. Load Balancer configuration
In the configuration menu, you define the settings for your load balancer. First, enter a unique name. Then select an IPv6 and an IPv4 address for the load balancer.
Next, in section (1) under Forwarding Rules, check HTTP.
For simplicity’s sake, we stick with HTTP in our use case and do not store an SSL certificate.
In section (2) under Target Server, select the cloud servers to connect to your load balancer by checking the box next to your previously prepared hosts.
Under “Advanced Settings”, select “Round Robin” as the algorithm. With Round Robin, the load is distributed evenly across the connected servers in turn. In our use case, this means that incoming requests are handled alternately by server 1 and server 2.
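The round-robin order can be pictured with a tiny loop. The server names are stand-ins for your two backends; this only mimics the scheduling idea, not the load balancer itself:

```shell
# Illustration of round-robin scheduling: requests are handed to the
# backends strictly in turn. Server names are placeholders.
BACKENDS=("nginx_server1" "nginx_server2")
for i in 0 1 2 3; do
  echo "request $((i + 1)) -> ${BACKENDS[i % 2]}"
done
```

Each new request simply goes to the next backend in the list, wrapping around at the end.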
This completes the configuration for our use case “Hello, Load Balancer”. Click “Create Load Balancer” to create your load balancer.
This is the finished load balancer in the gridscale panel:
4. Testing the new Load Balancer
Ready for the Load Balancer Test?
In the last step, you can test whether your new load balancer distributes the incoming requests to your two backend servers.
Open the IP address of the load balancer in your browser. Then open another browser window, this time in incognito or private mode, and call the IP address there as well.
If everything went smoothly, you will see that the first browser session is forwarded to server 1 and the second to server 2.
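You can run the same test from the command line with curl. LB_IP is a placeholder for your load balancer’s address; with Round Robin, the two headlines should alternate:

```shell
# Fetch the page headline a few times; with round robin, the output
# should alternate between server 1 and server 2.
probe_lb() {
  # Prints the <h1> line of the page at the given URL; empty if unreachable.
  curl -fsS --max-time 2 "$1" | grep -o '<h1>.*</h1>'
}

LB_IP=203.0.113.5   # placeholder: substitute your load balancer's IP
for i in 1 2 3 4; do
  probe_lb "http://$LB_IP/"
done
```

Since each curl call opens a new connection, every request is scheduled separately by the load balancer.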
Load Balancer Configuration
In this section I explain everything about the configuration of the gridscale Load Balancer.
First, we go step by step through the configuration in the easy Panel View. Afterwards, I will cover the special features and additional functions that are available in the Expert Panel.
You can access the Load Balancer menu from the side menu within your gridscale panel. Click on “Add Load Balancer” to open the configuration menu.
In the configuration menu, you define the settings for your load balancer.
To create a load balancer, you first assign a unique name. In the next section, you can assign an IPv4 and an IPv6 address to the load balancer. An IPv6 address is required in order to create the load balancer; an IPv4 address is optional.
In the Forwarding Rules section, you can choose between HTTP and HTTPS. You can also choose from other protocols under “Custom” and define your own port rules.
If you have pointed a domain to the load balancer’s IP address via a DNS record, you can automatically create a Let’s Encrypt certificate. We also make sure that it is renewed regularly. Unfortunately, it is not possible to add your own certificates.
Adding Backend Servers
In the Target Server section, select the cloud servers to connect to your load balancer.
All servers from your gridscale Panel are available here. In addition, by clicking “Set your own IP addresses”, you can individually connect your servers to the gridscale Load Balancer via IP address and hostname.
Under Advanced settings, you define which algorithm the load balancer should use for the distribution of connections. You can choose between Least Connection and Round Robin. You can also define HTTP to HTTPS forwarding.
Expert Panel Specials
The configuration in Expert Panel View gives you full control over the configuration of your load balancer.
Additional features available only in the Expert Panel View are the weighting of server connections and labels.
Under Weighting, you can set a value between 0 and 255 for each of your servers and thus configure an individual load distribution. At 0, the target server is deactivated and receives no traffic.
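As an illustration of what weighting means, here is a small sketch with hypothetical weights of 2 and 1: roughly two thirds of the requests go to the heavier server. This only mimics the idea of weighted scheduling, not gridscale’s exact implementation:

```shell
# Illustration only: weighted scheduling with hypothetical weights 2 and 1.
# A server with weight 0 would simply not appear in the schedule.
SCHEDULE=()
add_server() {   # add_server <name> <weight>
  local i
  for ((i = 0; i < $2; i++)); do SCHEDULE+=("$1"); done
}
add_server "nginx_server1" 2
add_server "nginx_server2" 1

for ((r = 0; r < 6; r++)); do
  echo "request $((r + 1)) -> ${SCHEDULE[r % ${#SCHEDULE[@]}]}"
done
```

With weights 2 and 1, the schedule contains nginx_server1 twice and nginx_server2 once, so the first server receives twice as many requests.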
You can use labels to uniquely identify your load balancer.
Tell us how you liked this Getting started!
We welcome feedback of any kind: praise, criticism, suggestions, tips.