To deliver cutting-edge technology, we have specialized in building our architecture around deep-learning algorithms. By collating vast amounts of data and constantly comparing target values with actuals, we can detect anomalies and prevent failures before they occur.
Find out more from our CEO Henrik.
We translate the high complexity of a data center (arising from locations, power supply, cooling, storage systems, computing capacity, network connectivity and so on) into a user-friendly interface through which a user can comfortably and intuitively configure and launch any cloud infrastructure. In our architecture, we not only operate distributed, highly secure data storage, but also handle the extraordinarily complex operation of databases, load balancers, key-value stores and container virtualization. All of this runs fully automated, with an uncompromising promise: 100 percent availability.
This promise cannot be kept with conventional, reactive approaches to IT operations. Reactive operation means that a failure must first occur before it can be rectified as quickly as possible. To keep our promise, we must therefore be able to act preventively.
In recent years we have specialized in interpreting certain signals in the IT landscape and intervening preventively when a disruption is imminent. To do this, we evaluate a large number of hardware and software sensors with central algorithms in real time. We mainly use data that is already generated during operation (ambient temperatures, voltage levels, latencies, power consumption, even the behavior of software components) and that allows us to identify anomalies at an early stage.
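To make the principle concrete, here is a minimal sketch (not our production pipeline): each sensor stream keeps a rolling statistical baseline, and a reading is flagged when it deviates strongly from recent history. The window size, threshold and example metric are illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    """Flags readings that deviate strongly from recent history.

    A rolling z-score over one sensor stream; window size and
    threshold are illustrative, not production values.
    """

    def __init__(self, window: int = 120, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent readings
        self.threshold = threshold           # z-score cutoff

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous, then record it."""
        anomalous = False
        if len(self.history) >= 30:  # need enough history to judge
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.history.append(value)
        return anomalous

# One detector per stream, e.g. an ambient-temperature sensor.
detector = RollingAnomalyDetector()
for reading in [21.0, 21.2, 20.9, 21.1] * 10 + [35.5]:  # sudden spike
    if detector.observe(reading):
        print(f"anomaly: ambient temperature {reading} °C")
```

In practice one such baseline would run per sensor and per metric, feeding a central decision layer rather than printing directly.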
By consistently focusing our technology on data-based decisions and continuously improving our self-learning algorithms, we can now correctly interpret more than nine out of ten events even in a highly complex cloud infrastructure.
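As a hedged illustration of what such data-based event interpretation can look like (not our actual algorithm), a classifier trained on labelled past events can score new sensor snapshots. The feature names, labels, tiny dataset and model choice below are all assumptions.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: [temperature_delta, voltage_deviation, latency_ms, error_rate]
# Features, labels and data are illustrative assumptions.
past_events = [
    [0.1, 0.00, 12.0, 0.00],  # harmless fluctuation
    [4.2, 0.03, 95.0, 0.02],  # cooling issue that preceded a failure
    [0.3, 0.12, 14.0, 0.00],  # power-supply irregularity
    [0.2, 0.01, 11.0, 0.01],  # harmless fluctuation
]
labels = ["benign", "cooling_fault", "power_fault", "benign"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_events, labels)

# A new sensor snapshot is scored before anyone has to react to a failure.
print(model.predict([[3.8, 0.02, 88.0, 0.03]]))  # e.g. ['cooling_fault']
```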
In addition to the higher availability of our services, there are further advantages for us as operators. For example, we were able to introduce what is known as dynamic capacity management, which helps us use our resources much more efficiently. CPU nodes are shut down when no load is expected and restarted in time when a dynamic workload from one or more customers is anticipated. Software updates are now installed almost autonomously. Any restart of individual components that may be necessary is planned and executed algorithmically. Because the infrastructure used by our customers can be migrated to other CPU nodes during operation, such an update is no longer even noticeable to the customer and thus becomes a routine activity.
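The scheduling idea behind dynamic capacity management can be sketched in a few lines. Node capacity, headroom factor and forecast values below are illustrative assumptions, not our production parameters.

```python
import math

NODE_CAPACITY_VCPUS = 64  # vCPUs per node (illustrative)
HEADROOM = 1.2            # keep 20 % spare capacity (illustrative)

def nodes_needed(predicted_vcpu_load: float) -> int:
    """How many nodes should run to cover the forecast plus headroom."""
    return max(1, math.ceil(predicted_vcpu_load * HEADROOM / NODE_CAPACITY_VCPUS))

# Hypothetical hourly forecasts: scale down overnight, power nodes back
# up in time for the morning load instead of keeping everything running.
for hour, forecast in [(2, 150.0), (6, 400.0), (9, 1100.0)]:
    print(f"{hour:02d}:00  forecast {forecast:6.1f} vCPUs -> run {nodes_needed(forecast)} nodes")
```

The interesting part in production is the forecast itself, which is where the self-learning algorithms described above come in; the scale-up/scale-down decision is then as simple as the function above.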
Create a development clone of your production service within seconds. (coming soon)
Sign up and create your first cloud server in seconds