Back From The Clouds: Trends, Reasons, And Backgrounds
It is now apparent: in many companies, the cloud journey is on the “return flight” from the cloud back down to the company’s own data center. This so-called cloud repatriation has many good reasons; the turbulent economic climate is just one of them.
After analysts sang the supposed virtues of cloud migration ad nauseam year after year, many organizations gradually realized that running their own data center was not all that bad. For one thing, costs stayed under control on-premises, unlike in the cloud. For another, performance was better, and far more predictable than in the cloud.
The unlimited visibility into your own on-premises IT also created planning certainty. And when it comes to cyber security, the in-house data center has always had a head start. So is it time to get out of the clouds and back into your own four walls after all?
A Rushed Move To The Cloud – And Now What?
Eighty-eight percent of companies in a recent survey by Veeam, a provider of data protection and backup software, said they had moved at least some of the workloads they hosted in the cloud back to the on-premises data center (see: “Cloud Protection Trends Report 2023”).
This result is by no means an outlier. According to a study by Virtanen, provider of the hybrid cloud management platform of the same name, 72 percent of those companies that had moved applications to the cloud have brought at least one of them back to their own on-site data center (“State of Hybrid Cloud: February 2021”). Virtanen surveyed 350 IT decision-makers for the study published in 2021.
A full 95 percent of the participants had migrated to the public cloud at the time of the survey and were thus able to report first-hand. A study by Supermicro came to similar conclusions. According to this, 71 percent of the decision-makers surveyed wanted to move at least part of their workloads, which currently run in public clouds, back to private IT environments in the following two years. Only 13 percent said they could run all their workloads in the cloud.
The Reasons Differ
According to Virtanen market analysts, the most important reason for this change of direction, cited by 41 percent of those affected, was that they had migrated applications to the cloud that should have remained in the on-premises data center. Almost as many, 36 percent, struggled with provisioning workloads in the public cloud.
Twenty-nine percent wanted to avoid putting up with the drop in performance in the cloud. It would have been worth investigating in how many of these cases the applications in question were simply not cloud-capable from a technical point of view. Many organizations seem to venture into the cloud with their workloads without even considering whether this is technically feasible.
Even More Reasons
For around a fifth (20 percent), the hidden costs of cloud deployment were the driving factor behind the move back to their own data center. In a recent internal Dell Technologies survey of IT decision-makers in the Dell ecosystem, 96 percent of the 139 organizations polled named the drive for greater cost-efficiency as the key reason for moving workloads or applications back from the cloud to their own data center.
Forty percent of participants in the Dell study cited security and compliance as the top reasons for repatriating workloads. Some respondents expressed concerns about the geographic location of their cloud services, while others cited better security for data outside the cloud environment. In the May 2022 Virtanen study, three-quarters of respondents (75 percent) acknowledged shortcomings in cloud governance that made managing their cloud infrastructures difficult.
The Fascination Of The Cloud
However, cloud repatriation does not mean the end of cloud fascination. More than three-quarters of respondents view their public cloud deployments as a strategic investment and want to continue their initiatives as they are or accelerate them. At the time of the survey, around half intended to increase the number of their cloud instances by the end of 2022.
However, the latter is not synonymous with provisioning more computing capacity: as part of the “cloudification” of applications (e.g., through so-called refactoring), existing capacity is generally redistributed across a larger number of smaller instances. Veeam’s numbers indicate that over the next two years, there should be a balance between on-premises IT and hyperscale cloud or MSP deployments.
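The redistribution effect can be illustrated with a small back-of-the-envelope sketch (the vCPU figures are assumed for illustration, not from any of the cited studies): splitting a fixed compute budget into smaller instances multiplies the instance count without adding any capacity.

```python
# Illustrative sketch with hypothetical numbers: refactoring an application
# from a few large cloud instances into many smaller ones increases the
# instance count while the total compute capacity stays the same.

def instances_needed(total_vcpus: int, vcpus_per_instance: int) -> int:
    """Number of instances needed to host a fixed vCPU budget (rounded up)."""
    return -(-total_vcpus // vcpus_per_instance)  # ceiling division

TOTAL = 128  # fixed capacity budget in vCPUs (assumed figure)

before = instances_needed(TOTAL, 32)  # monolith on large instances -> 4
after = instances_needed(TOTAL, 4)    # refactored into small instances -> 32

print(before, after)  # 8x more instances, identical total capacity
```

This is why a growing instance count in the surveys cannot simply be read as growing cloud capacity.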
Custom Made Hardware
To explore the deeper reasons why companies are investing in dedicated, self-managed IT infrastructures and what tangible benefits they are realizing, IDC interviewed companies such as Intel, Twitter, and Preferred Networks, a provider of deep learning software. These companies have one thing in common: they have partnered with Supermicro to custom-optimize computing hardware for the needs of their core workloads, allowing them to retain complete control over their infrastructure.
Intel is not exactly a typical data center operator, quite the opposite. The chip giant operates its own infrastructure with the aim, among other things, of understanding the users of its products first-hand. The company runs a total of 16 data center locations comprising 56 data center modules. They house more than 380,000 servers with over 3.6 million cores, 787 petabytes of storage capacity, and more than 725,000 network connections.
For Example, At Intel
About 95 percent of the servers within this massive infrastructure are dedicated to chip design in high-performance computing. Another 3 percent handle traditional enterprise and office workloads, while the remaining 2 percent is devoted to manufacturing computing, which includes manufacturing and assembly test facilities.
Over the past two years (2021 and 2022), Intel has observed accelerating growth in its infrastructure needs. Previously, the number of cores grew by around 21 percent per year; over the past two years, growth has accelerated to 38 percent. Measured in EDA MIPS (Electronic Design Automation Million Instructions Per Second), computing demand previously grew at 31 percent year-on-year and has likewise accelerated, to 43 percent, over the past two years.
Since 2003, Intel has reduced the number of its data center modules from 152 to 56. Inefficient old data centers have been closed, and modern, high-density, highly energy-efficient facilities and hyperscale sites have been built. Even so, the energy consumption of Intel’s data centers has more than doubled over that period.
With up to 43 kilowatts (kW) of power consumption per rack, Intel boasts the highest power density in the industry. The chip giant had to develop many hardware and other infrastructure components in-house and learned a few lessons.
Cherry Picking Upgrades
When hardware is replaced every four to five years, traditional data center operators replace “everything including the racks,” an unnamed Intel official told IDC. However, there is no reason to do this, because “if you look closely at a server, there are many things such as power supply units, fan units, drives and many other components” that do not change technologically and could therefore be replaced less frequently. That is the logic behind disaggregation.
Intel has been working with Supermicro on disaggregated servers since 2016, and this disaggregated architecture now accounts for most server deployments in Intel’s data centers. Intel develops a so-called CPU complex, a hardware unit designed to be replaced, i.e., upgraded, more frequently than the other hardware components. Most of the performance benefits can be traced to these point-in-time CPU module upgrades.
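The economics of this approach can be sketched in a few lines. The component names and lifetimes below are illustrative assumptions, not Intel figures; the point is that only the short-lived CPU complex rides the fast refresh cycle, while chassis-level parts stay in service much longer than in a monolithic replace-everything model.

```python
# Illustrative model of disaggregated refresh cycles. Component lifetimes
# are assumptions for the sketch, not figures from Intel or IDC.

REFRESH_YEARS = {
    "cpu_complex": 2,    # upgraded frequently to capture performance gains
    "power_supply": 8,   # changes little technologically
    "fans": 8,
    "drives": 5,
}

def replacements(horizon_years: int) -> dict:
    """Swaps per component over a planning horizon, given its refresh cycle."""
    return {part: horizon_years // life for part, life in REFRESH_YEARS.items()}

# Monolithic model: the whole server, every part included, every ~4 years.
monolithic = {part: 10 // 4 for part in REFRESH_YEARS}
disaggregated = replacements(10)

print(monolithic)     # every component swapped twice in 10 years
print(disaggregated)  # CPU complex swapped often, chassis parts rarely
```

Under these assumed lifetimes, disaggregation trades more frequent CPU-complex upgrades for far fewer replacements of power supplies, fans, and drives, which is exactly the waste the Intel official described avoiding.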
Read Also: Safely On The Move In The Hybrid Cloud