Why Hyperscale Modular Data Centers Improve Efficiency

This article is part of a VentureBeat special issue series, Smart Sustainability.

As the world transitions from Web 2.0 to Web3 – which is taking shape for rollout later this decade – the data centers that will provide new and expanded services are undergoing major upgrades to handle everything users will need. They will provide more bandwidth than we have ever seen before, yet draw less power from the wall.

How is that possible? Modularity. Different parts of a data center can now be replaced far more quickly and efficiently than in previous years. Data bottlenecks are also much rarer than they once were, thanks to more efficient networking pipelines; better, lighter software; more solid-state storage; newer, faster and cooler-running processors; and a couple dozen other improvements.

All of these components can now be swapped in or out of a data center whenever they are idle. Previously, data center hardware upgrades or enhancements took weeks or months to complete. The result is that the best and fastest components available can be running in a data center at all times.

New super data centers and telecom interconnects are also replacing entire first-generation facilities at an increasing rate. Some model data centers stand out as prescient examples of scalable design, lower energy consumption, reduced carbon footprint and carefully planned sustainability built on natural energy sources. Data center builders can learn a lot from these installations, which show how to provide great computing power while respecting the environment.

Much more power and bandwidth will be needed for Web3

We will need much more power and bandwidth to run Web3 and metaverse-like applications that demand much larger power envelopes, including cryptocurrency, high-end gaming, big data analytics and machine learning, 3D video and imaging, and augmented reality applications.

AWS, Google, Alibaba, IBM, Microsoft, Dell EMC, Apple, Facebook, VMware, Oracle, AT&T, Verizon and other industry leaders are building new large-scale modular data centers around the world that will provide essential capacity for the computing demands of the future. They all follow new federal and state energy consumption guidelines, publish carbon footprint metrics, and incorporate natural energy sources (primarily hydroelectric, wind and solar). They all boast exemplary PUE (power usage effectiveness) ratings.

PUE is a measure – or score – used to determine the energy efficiency of a data center; it is calculated by dividing the total amount of energy entering the facility by the energy used to operate the IT equipment inside it, so a perfect score would be 1.0. For example, Facebook’s data center in Prineville, Oregon has run an exemplary PUE of 1.078, and Google’s many data centers average less than 1.20 across its entire global fleet. Generally, a PUE below 1.50 is considered high end.
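The PUE arithmetic above fits in a few lines. A minimal sketch – the facility energy figures below are illustrative, chosen only so the ratio matches the Prineville score cited in this article:

```python
def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power usage effectiveness: total energy entering the facility
    divided by the energy consumed by the IT equipment alone.
    A perfect (practically unattainable) score is 1.0 -- every
    kilowatt-hour drawn from the grid reaches the servers."""
    if it_equipment_energy_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_energy_kwh / it_equipment_energy_kwh

# Hypothetical facility: 10.78 GWh drawn from the grid, 10.0 GWh consumed by IT gear.
score = pue(10_780_000, 10_000_000)
print(round(score, 3))  # 1.078 -- the Prineville figure cited above
```

The remaining 0.078 represents everything that is not computing: cooling, power conversion losses, lighting and so on.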

A conventional data center can take about two years to bring online, from conceptualization to functional deployment. By contrast, implementing a modular data center is much faster, often taking 50-75% less time – and, as CFOs like to note, that equates to a lot of capital saved.
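That savings range translates into concrete timelines. A quick sketch, assuming the roughly 24-month conventional baseline given above:

```python
conventional_months = 24  # ~2 years from concept to deployment, per the article

# Modular builds are cited as taking 50-75% less time.
for savings in (0.50, 0.75):
    modular_months = conventional_months * (1 - savings)
    print(f"{savings:.0%} faster -> about {modular_months:.0f} months")
# 50% faster -> about 12 months; 75% faster -> about 6 months
```

In other words, a modular facility can plausibly go live in six months to a year, rather than two years.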

Facebook’s Exemplary Prineville Modular Data Center Campus

Being able to install a data center in a shorter time frame is a major competitive advantage.

This is precisely what Meta is doing. In Prineville, Oregon – a small town on the western edge of the state’s high desert, 80 miles south of the Columbia River – 11 huge buildings sit on a single sandy campus, comprising a whopping 4.6 million square feet of space in total. Each building is the size of two large Walmarts, and they look awfully out of place in an area known more for hunting and ranching than anything else. All 11 data centers were built within 10 years.

Each data center has a single job, such as running the main Facebook application, the company’s corporate sites, WhatsApp, Instagram, applications for Quest VR headsets, or other services; several hold stored images. Some of the data centers contain up to 15,000 servers, and most of these slide-in units are custom designed and built by Facebook itself. Several staff members do one thing day in and day out: search for red lights on server racks, then pull the failed units and replace them with new ones.

Modest Prineville was the chosen location for Facebook’s first and largest large-scale data center development, and it continues to operate efficiently 24/7 as required by Meta.

The Prineville Data Center is backed by 100% renewable energy, including two solar projects located in Oregon. The facility, one of the most energy efficient in the world, features an innovative cooling system created for the unique climatic characteristics of central Oregon.

These facilities are designed to take advantage of the prevailing southerly wind: incoming air is cooled as it passes through large water-covered screens, directed into the central server room, and then blown out of the building through vents on the other side. Little or no air conditioning is needed, even when the desert environment tops 100 degrees.

These precise design features, along with the use of alternative power sources throughout the campus, are what distinguish a modern modular data center from the first-generation facilities built 10 to 30 years ago – which still account for approximately 90% of all data centers in operation. So there is a long way to go to modernize much of cloud and enterprise computing, all of which is housed in nondescript data centers.

How can a modular data center promote sustainability?

Modular data centers provide flexibility by allowing enterprise customers who rent colocation space for their servers to start with small installations and grow as needed. They can use whatever hardware their use cases require: standard servers, storage and networking, or hyperconverged hardware that combines multiple functions in a single device. The latter has been a huge trend for over a decade; in general, hyperconverged infrastructure (HCI) models have delivered more energy-efficient performance than separate server/storage/network setups because all functions sit in one unit drawing on a single power source.

Speed of deployment, supply chain disruptions, and limited availability of skilled IT workers are three frequently cited reasons for companies to adopt modular data center solutions. Owners of colocation facilities are also influenced by four specific industry trends: edge computing, expanding remote workforces, shrinking CapEx and OpEx, and increasing sustainability and respect for the environment.

Gartner Research predicts that by 2025, 75% of enterprise data will be processed at the edge, with many of these new data centers handling the continuous influx of data from cloud applications. For colocation facilities, this means now is the time to establish a presence in emerging edge markets using modular data center components.

By 2025, 85% of infrastructure strategies will incorporate on-premises, colocation, cloud and edge delivery options in modular data centers, up from 20% in 2020, according to Gartner.

More computing processed, less energy used

Industry thought leaders estimate that by the end of the decade, approximately 75% of the world’s data centers will derive more than half of their power from natural renewable sources such as wind, solar and hydroelectricity. Since that figure is only around 10% today, the IT industry still has a long way to go.

However, data center efficiency is steadily improving, largely thanks to modular data centers whose components can be quickly and easily replaced when they underperform. Industry experts currently estimate that storing and moving data in and out of data centers uses about 1% of the world’s electricity. That share has hardly changed since 2010, even though the number of internet users has doubled and global internet traffic has grown 15-fold, according to the International Energy Agency.
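A back-of-the-envelope reading of those IEA figures: if data centers’ share of world electricity has stayed roughly flat since 2010 while traffic grew 15-fold, then the energy cost per unit of traffic fell by roughly a factor of 15. A minimal sketch, in which only the ratios come from the article and the absolute shares are the cited ~1% estimate:

```python
# Data centers' approximate share of world electricity use (per the article).
energy_share_2010 = 0.01   # ~1% in 2010
energy_share_now  = 0.01   # roughly unchanged today
traffic_multiplier = 15    # global internet traffic grew ~15x over the same period

# Relative energy cost per unit of traffic, normalized to 2010 = 1.0
relative_energy_per_traffic = (energy_share_now / energy_share_2010) / traffic_multiplier
print(f"Energy per unit of traffic is ~{relative_energy_per_traffic:.0%} of the 2010 level")
# → roughly 7% of the 2010 level, i.e. about a 15x efficiency gain
```

The absolute electricity totals did grow along with overall world generation, so this is an approximation of trend, not a precise accounting.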

The data center industry’s goal is that the use of coal, natural gas and petroleum products to power these large IT vendors will be largely a thing of the past by the start of the next decade. And the industry is well on its way to achieving this goal.

Ramon J. Espinoza