Georedundancy: Sensible, too expensive or mandated?

24.08.2019 | by Henrik Hasenkamp

Georedundant IT infrastructures are meant to absorb outages such as those caused by natural disasters. What does this mean in practice?

For georedundancy, two or more fully functional data centers are operated at different locations. The data is mirrored between them and can therefore be used simultaneously. The purpose of such protection is, on the one hand, to keep operations running without interruption even if individual components or an entire data center fail. On the other hand, it is about reliably containing external events such as a fire in one of the data centers. That requires spatial distance. But what exactly is behind the georedundancy concept? After all, the German Federal Office for Information Security (BSI) only recently tightened its recommendations, and with them the technical requirements, considerably.

What exactly georedundant means

The redundant design of data centers plays an important role beyond high availability alone. A second, identically equipped data center is enough to allow maintenance work to be carried out, power failures to be cushioned and defective hardware to be replaced. For this purpose, service providers often set up several separate compartments within their large facilities, each functioning as a data center in its own right and structurally separated by fire walls and the like. If a data center is to be highly available or even maximally available, further criteria play a role: the BSI, for example, names "places of particular danger" such as nuclear facilities, airports, railways and roads, as well as rivers or forests, from which a data center should keep certain minimum distances.

And even if a green-field site could be found far away from any such infrastructure and natural hazards, georedundancy demands even more: a wide spatial separation of the locations. The concept of georedundancy justifiably assumes that the greater the distance between two sites, the lower the probability that a single event will paralyze both data centers. The classic examples of such feared events are natural disasters. Heavy rainfall and a subsequent rise in a nearby river, for instance, can cause considerable damage to a data center. If the redundant data center is right next door, there is a good chance that it will be affected as well.
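A rough illustration of this reasoning (the figures are freely invented for illustration, not taken from the BSI): if disasters hit the two sites independently rather than together, the probability that both fail in the same year drops dramatically.

```python
# Back-of-the-envelope comparison: correlated vs. independent site failures.
# The 1% annual disaster probability is an assumed, illustrative figure.

p_site = 0.01  # assumed annual probability of a regional disaster at one site

# Sites close together: one event can take out both, so the pair fails
# roughly as often as a single site does.
p_pair_correlated = p_site

# Sites far enough apart that disasters hit them independently:
p_pair_independent = p_site * p_site

print(f"Close sites, correlated failures:   {p_pair_correlated:.4%} per year")
print(f"Distant sites, independent failures: {p_pair_independent:.4%} per year")
# -> 1.0000% vs. 0.0100%: under these assumptions, distance buys roughly
#    two orders of magnitude in availability of the pair.
```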

More distance has consequences

To guard against precisely such scenarios, the redundant data center should be located at a sufficient distance. The BSI has recently redefined what that means. Previously, 10 to 15 kilometers were considered sufficient; now it recommends a distance of at least 100, better 200 kilometers. According to the BSI, this not only provides better protection against regionally limited incidents, but also cushions damage from large-scale natural disasters such as floods or earthquakes. The greater the distance, the better. But as the distance grows, so do the technical demands on data transmission. The BSI is aware of this and instructs companies to explain in detail, in a security concept, which of the two aspects is given priority and for what reason.
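How much latency does distance add? Signal propagation in optical fiber sets a hard lower bound that can be estimated in a few lines. The speed value below is a common rule of thumb, and real cable routes are longer than the straight-line distance, so these are lower bounds.

```python
# Rough propagation-delay estimate for the distances the BSI names.
# Assumption: light in optical fiber travels at about 200,000 km/s
# (vacuum speed divided by a refractive index of roughly 1.5).

SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~200,000 km/s = 200 km per millisecond

for distance_km in (10, 100, 200):
    one_way_ms = distance_km / SPEED_IN_FIBER_KM_PER_MS
    round_trip_ms = 2 * one_way_ms
    print(f"{distance_km:>4} km: one-way >= {one_way_ms:.2f} ms, "
          f"round trip >= {round_trip_ms:.2f} ms")
# Every synchronously replicated write pays at least the round-trip time,
# on top of switching and protocol overhead.
```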

On the one hand, the longer the lines, the greater the probability that interference or damage will occur somewhere along them, which can reduce the transmission rate. On the other hand, latency increases with distance, even if only in the millisecond range. That is already too long for synchronous mirroring processes or distributed applications that need to communicate with each other. Companies therefore have to weigh, for example, whether asynchronous replication, with several minutes of data loss in the event of damage, or a data center located closer by poses the greater risk.
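A minimal sketch of this trade-off (the class and its methods are invented for illustration, not a real replication product): synchronous mirroring pays the round-trip time on every single write, while asynchronous mirroring answers immediately but leaves a queue of unreplicated writes that would be lost if the primary site failed.

```python
import time

REMOTE_RTT_S = 0.002  # assumed 2 ms round trip to a site ~200 km away

class Primary:
    """Toy model of a primary database replicating to a remote site."""

    def __init__(self, synchronous: bool):
        self.synchronous = synchronous
        self.replication_queue = []  # writes not yet on the remote site

    def write(self, record: str) -> None:
        if self.synchronous:
            time.sleep(REMOTE_RTT_S)  # wait for the remote acknowledgement
        else:
            self.replication_queue.append(record)  # shipped later, in batches

    def records_at_risk(self) -> int:
        # Potential data loss if the primary site is destroyed right now:
        # zero when synchronous, the whole backlog when asynchronous.
        return len(self.replication_queue)

for mode in (True, False):
    db = Primary(synchronous=mode)
    start = time.perf_counter()
    for i in range(100):
        db.write(f"record-{i}")
    elapsed = time.perf_counter() - start
    print(f"synchronous={mode}: 100 writes took {elapsed:.3f} s, "
          f"{db.records_at_risk()} records at risk")
```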

Adaptation of the infrastructure concept

The question of georedundancy therefore influences the entire data center operating concept. Long transaction paths and the associated latencies also determine whether a second location is run as a pure standby data center or whether the load is distributed across both data centers simultaneously. Suddenly it matters on which server a virtual machine runs and with which other applications it has to communicate. The less data that needs to be synchronized across sites, the better. Especially with increasing modularization and the use of microservices, the placement of applications and their associated data should be planned carefully.
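A hypothetical example of such placement planning: if the traffic between pairs of services is known, the data that has to cross the long-distance link can be computed for any given distribution of services across the two sites. All service names and figures here are invented for illustration.

```python
# Assumed daily traffic between service pairs (GB/day), invented figures.
traffic_gb_per_day = {
    ("web", "api"): 50,
    ("api", "db"): 200,   # chatty pair: the API and its database
    ("api", "cache"): 120,
    ("batch", "db"): 30,
}

def cross_site_traffic(placement: dict) -> int:
    """Sum the traffic between services placed in different data centers."""
    return sum(gb for (a, b), gb in traffic_gb_per_day.items()
               if placement[a] != placement[b])

naive = {"web": "DC1", "api": "DC1", "db": "DC2", "cache": "DC2", "batch": "DC2"}
tuned = {"web": "DC1", "api": "DC2", "db": "DC2", "cache": "DC2", "batch": "DC1"}

print("naive placement:", cross_site_traffic(naive), "GB/day over the WAN")
print("tuned placement:", cross_site_traffic(tuned), "GB/day over the WAN")
# Co-locating the chatty api/db/cache trio cuts the inter-site traffic
# from 320 to 80 GB/day in this invented example.
```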

If maximum availability is the goal, the redundant data center must itself be protected, for example with additional redundant infrastructure; otherwise, in the event of damage at one location, the IT would temporarily run without redundancy. And if national borders have to be crossed to achieve the recommended distance, data protection requirements may also have to be taken into account.

Only obligatory for some

Operators of critical infrastructures, such as waterworks or power utilities, are bound by the BSI's specifications on georedundancy. For other authorities and for companies with high or very high availability requirements, the BSI recommends a redundant data center as an emergency precaution. That redundant infrastructure design makes sense is undisputed. How much "geo" it needs to involve, however, depends on how business-critical an IT failure would be for the company. With costs in mind, it is clear that not every scenario can be hedged against. In the vast majority of cases, that is not necessary either: applications can, for example, be designed from the outset to tolerate downtime and latency without major problems.
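A minimal sketch of what such a design can look like (the pattern is generic, the function names are invented): remote calls get retries with exponential backoff, and the application degrades gracefully when the other site remains unreachable.

```python
import random
import time

def call_with_retries(operation, attempts: int = 5, base_delay_s: float = 0.1):
    """Call `operation`, retrying with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            # Backoff with jitter spreads out retry storms after an outage.
            time.sleep(base_delay_s * (2 ** attempt) * random.uniform(0.5, 1.5))

def flaky_remote_call():
    # Stand-in for a request to a distant, occasionally unreachable site.
    if random.random() < 0.6:
        raise ConnectionError("site unreachable")
    return "ok"

try:
    print(call_with_retries(flaky_remote_call))
except ConnectionError:
    print("all retries exhausted; fall back or degrade gracefully")
```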

The original article in German can be found here.
