gPXE – a clever alternative


PXE at gridscale?

A few days ago I was asked whether gridscale supports PXE. The answer is yes. What many providers prohibit is allowed at gridscale. For example, the browser console can be used at any point during the boot process – you can even enter the BIOS 🙂

A gimmick, I know. But the question about PXE was anything but a gimmick. The goal is a fully automated environment in which all components are configured from a central location.

After talking a little about the use case, I decided against standard legacy PXE because I wanted something “cooler”. For my single-board computer I use a ROM that, on every reboot, fetches the latest OS image from my web server and boots from it. I have already applied this principle to gridscale – and if it works on gridscale’s platform, why shouldn’t it work in your cloud as well?

What’s waiting for you

The ingredients: A DHCP server. A web server. An internal network. A few cloud servers without hard drives. And a little open source software.

The result is a simple, scalable and, above all, automatically configured cloud server setup. It is ideally suited for starting extremely fast, doing its work and then shutting down again – and it scales faster than anything you have seen before.

Installing a DHCP server

Create a private network for your servers and set up a DHCP machine on this network. My DHCP server also has a link to the Internet (for practical reasons – it is not strictly required).

I mostly use ISC DHCP, a standard piece of software that I have never questioned much. If you know a better DHCP server, I’d be glad to hear about it.

So:

 apt-get install isc-dhcp-server

The configuration for the DHCP server lives in ‘/etc/dhcp/dhcpd.conf’. There you now configure the DHCP server to listen on the internal network interface (eth1) and to hand out IP addresses to the clients. To do this, you can declare the following subnet:

subnet 172.16.10.0 netmask 255.255.255.0 {
    range 172.16.10.50 172.16.10.100;
    option broadcast-address 172.16.10.255;
}
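
If the diskless clients also need a default gateway and DNS, the subnet declaration can be extended with the usual options. The values below are an assumption for illustration: they assume the DHCP machine at 172.16.10.1 also acts as gateway and resolver:

```
subnet 172.16.10.0 netmask 255.255.255.0 {
    range 172.16.10.50 172.16.10.100;
    option broadcast-address 172.16.10.255;
    option routers 172.16.10.1;
    option domain-name-servers 172.16.10.1;
}
```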

For the DHCP server to start, it must know which network interface to listen on. You make this setting in the /etc/default/isc-dhcp-server file – at the end, simply enter the eth1 interface.
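
On current Debian/Ubuntu packages of isc-dhcp-server, the relevant line in /etc/default/isc-dhcp-server looks like this (older releases use a single INTERFACES variable instead of INTERFACESv4):

```
# /etc/default/isc-dhcp-server
INTERFACESv4="eth1"
```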

Now the interface eth1 has to be configured. To do this, edit the /etc/network/interfaces file and add the following:

auto eth1
iface eth1 inet static
    address 172.16.10.1
    netmask 255.255.255.0
    broadcast 172.16.10.255

Bring the eth1 interface up (e.g. via /etc/init.d/networking restart) and restart the DHCP server (with ‘/etc/init.d/isc-dhcp-server restart’).

Delivery of boot images

In order to push a boot image, you need a service that delivers it. Traditionally, one used a TFTP server (e.g. tftpd-hpa) together with plain PXE on the client side. For my taste, that was always a bit fiddly.

gPXE has the great advantage (similar to newer UEFI BIOSes) that the boot image no longer has to be served via TFTP. Instead, higher-level protocols such as FTP and HTTP are available. With a web server and a suitable “asset management”, it then becomes possible to fully remote-control the infrastructure. That is not to say this is impossible with the legacy method – on the contrary, we have already built something like this on a grand scale. But it is, and remains, simply fiddly 🙂
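
One way to keep everything at a central location is to let the embedded loader script only fetch a second script from the web server via gPXE’s chain command, so all boot logic can be changed server-side. The URL here is purely illustrative:

```
#!gpxe
dhcp any
chain http://172.16.10.1/boot.gpxe
```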

Short excursion – a practical application

For large and distributed infrastructures like gridscale, such a setup is incredibly handy. We provide signed boot images in our infrastructure. There are different images for different applications: management, monitoring, burn-in, production, development, staging and customer-specific environments. Depending on the role assigned to a server, the matching image is deployed.

A targeted update of a compute node, for example, takes only two steps:

  • “Evacuate” the host: it takes between three and ten seconds for all virtual instances within gridscale to move to other hosts
  • Reboot the host: the system then automatically comes up with the latest boot image

In most cases, however, this is not even necessary 🙂 To save energy, we switch off our compute nodes when they are not needed. When reactivated, a compute node comes up with the most recent boot image anyway.

Install Webserver

The easiest way is to deliver an image via HTTP, so that you have a working setup and can keep testing. ‘apt-get install nginx’ does everything that is needed for the moment. By default, nginx runs on port 80 and has its DocRoot in ‘/var/www/html/’.

What you need for testing are two files: a ramdisk (memdisk) that the system boots from, and an ISO you want to load. Drop both into the DocRoot of the web server.

As an example ISO, download the netinstall image from Debian. If you do not want to build your own ramdisk, you can start with a ready-made one.
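
The memdisk binary mentioned above is part of Syslinux; on Debian it typically ships in the syslinux-common package (package name and path may differ on your release):

```
apt-get install syslinux-common
cp /usr/lib/syslinux/memdisk /var/www/html/
```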

Build gPXE Loader and start Cloud Server

If you do not want to use prebuilt images, I can recommend rom-o-matic. With the ROM builder you can embed a small script directly into the loader, so that your system can be automated even further.

A very simple variant looks like this:

#!gpxe
dhcp any
kernel http://172.16.10.1/memdisk
initrd http://172.16.10.1/debian.iso
boot

From the hard-coded IP you can quickly see that everything here is very static – it is mainly for illustration. Download your ROM and put it into the DocRoot of your web server; in my case it sits directly at ‘/gpxe.iso’. Now for a small gridscale feature: we need the gpxe.iso as a CD drive for your cloud server. Simply use gridscale’s Smart-ISO function and enter the URL on your web server from which the created image can be downloaded.

Smart ISO gPXE

If you get an error during the download, check the size of the ISO file: if it is smaller than 1 MB, we reject it. If in doubt, simply append 10 MB from /dev/zero to the ISO.
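
As a minimal sketch of that padding step (using a dummy file here so the commands can run anywhere – point them at your real gpxe.iso instead):

```shell
# Stand-in for the real gpxe.iso built with rom-o-matic.
printf 'ISO' > gpxe.iso
# Append 10 MB of zeros so the image clears the 1 MB minimum-size check.
dd if=/dev/zero bs=1M count=10 >> gpxe.iso 2>/dev/null
wc -c < gpxe.iso   # → 10485763 (3 bytes + 10 MiB of padding)
```

Padding with zeros does not affect the bootable content; the ISO filesystem at the start of the image stays intact.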

gPXE Cloud Server

You can see the configuration in the panel in the screenshot: the cloud server “gpxe 2” is connected only to the internal network “pxe internal network”, has no storage attached, and the previously created ISO image “gpxe.iso” is inserted into the server’s virtual CD drive.

If you now start the server, you can open the console directly and watch the boot process. The first thing you see is the boot menu of the attached ISO image. After a moment, the cloud server starts from this ISO image and loads the content you specified. The ISO image inserted into the server’s CD-ROM drive contains gPXE, extended with the four lines of script code from above. It stores the following instructions:

  • Get an IP address via DHCP (here, any stands for all available interfaces – there is room for optimization)
  • Download a kernel from http://172.16.10.1/memdisk (the ramdisk)
  • Download an initrd from http://172.16.10.1/debian.iso (the Debian Netinstall ISO)
  • Boot the cloud server

If everything works, your console should now look like this:

Console gPXE Cloud Server

After successful loading you will be rewarded with the following picture 🙂

Congratulations – you have a working cloud setup that can serve as a basis for scaling highly automated workloads.

A side note that may be important for you: our infrastructure notices when you shut down a cloud server. So, unlike with other vendors, you are not forced to submit an API call to turn a cloud server off. As soon as you send us a PowerOff via ACPI, we terminate your process and emit the corresponding billing stop event. Why do I mention this? Imagine the following workflow in your infrastructure:

  • Your application creates a “compute job” to be assigned to a worker node
  • Your application starts a diskless worker node with gPXE via our API
  • After the boot process, the worker node registers with the control center and retrieves the compute job
  • After a successful calculation, the worker node returns the result
  • The worker node shuts down again, so that no further costs are incurred

Such a setup then uses the concentrated power of gridscale and at the same time works at maximum cost-efficiency.

