What is a data center? How data centers work and how they are changing in size and scope

A data center is the physical facility that provides the computing power to run applications, the storage to process data, and the networking that connects employees to the resources they need to do their jobs.

Experts predict that the on-premises data center will be replaced by cloud-based alternatives, but many organizations have concluded that they will still have applications that must live on-premises. Rather than die, the data center evolves.

It is becoming increasingly distributed, with edge data centers springing up to process IoT data. It is being modernized to run more efficiently through technologies such as virtualization and containers. It adds cloud-like features such as self-service. And the on-premises data center integrates with cloud resources in a hybrid model.

Once reserved for large enterprises that could afford the space, resources, and personnel to maintain them, today’s data centers come in many forms, including co-located, hosted, cloud, and edge. In all of these scenarios, the data center is a locked, noisy, and cold space that keeps your application servers and storage devices running safely around the clock.

What are the components of a data center?

All data centers share a similar underlying infrastructure that enables reliable and consistent performance. Basic components include:

Power: Data centers must provide clean, reliable power to keep equipment running 24 hours a day. A data center will have multiple power circuits for redundancy and high availability, with backup provided by uninterruptible power supplies (UPS) and diesel generators.

Cooling: Electronics generate heat which, if not mitigated, can damage equipment. Data centers are designed to remove heat and supply cool air so that equipment does not overheat. This complex balancing act of air pressure and fluid dynamics relies on alternating cold aisles, where chilled air is pumped in, with hot aisles that collect the exhaust heat.

Network: Within the data center, devices are interconnected so they can communicate with each other, while network service providers supply connectivity to the outside world, making business applications accessible from anywhere.

Security: A dedicated data center provides a layer of physical security far beyond what can be achieved when computer hardware is stored in a wiring closet or other location not specifically designed for security from the start. In a purpose-built data center, equipment is secured behind locked doors and housed in cabinets, with protocols ensuring that only authorized personnel can access it.

What are the types of data centers?

On-premises: This is the traditional data center, built on the organization's own property with all the necessary infrastructure. An on-premises data center requires a significant investment in real estate and resources, but it is suitable for applications that cannot migrate to the cloud for security, compliance, or other reasons.

Colocation: A colo is a third-party-owned data center that provides physical infrastructure and management for a fee. You pay for the physical space, the power you consume, and the network connectivity within the facility. Physical security is provided by locked data center racks or locked cage areas, and access to the facility requires credentials and biometric verification to confirm authorization. There are two options in the colo model: you can retain full control of your equipment, or you can opt for a hosted arrangement in which the third-party provider takes responsibility for managing the physical servers and storage.

IaaS: Cloud providers such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure offer infrastructure as a service (IaaS), allowing customers to remotely access dedicated slices of shared servers and storage through a web-based interface for creating and managing virtual infrastructure. Cloud services are billed based on resource consumption, and you can grow or shrink your infrastructure dynamically. The service provider manages all of the equipment, security, power, and cooling; as a customer, you never physically access it. (A short illustrative sketch of this consumption model appears after the list of data center types below.)

Hybrid: In a hybrid model, resources are hosted in multiple places but interact as if they were in the same place. A high-speed network link between sites enables fast data movement. A hybrid setup is well suited to keeping latency- or security-sensitive applications close to you while using cloud-based resources as an extension of your infrastructure. It also allows for the rapid deployment and decommissioning of temporary capacity, eliminating the need to over-provision purchases to handle peaks in activity.

Edge: Edge data centers typically house equipment that needs to be closer to the end user, such as caching storage devices that hold latency-sensitive copies of data for performance reasons. It is also common to place backup systems in an edge data center, giving operators easier access to remove and replace backup media (such as tape) for shipment to offsite storage facilities.
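To make the IaaS consumption model described above more concrete, here is a minimal sketch using the AWS SDK for Python (boto3) as one example. The region, AMI ID, and instance type are placeholder assumptions, and AWS credentials are assumed to be configured in the environment.

```python
# A minimal sketch of the IaaS pay-for-what-you-use model, using boto3 (the AWS
# SDK for Python) as one example. The AMI ID, region, and instance type below
# are placeholder assumptions; credentials are assumed to be configured already.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# "Grow" the infrastructure: provision a single virtual server on demand.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# "Shrink" it again when the capacity is no longer needed; billing stops
# accruing once the instance is terminated.
ec2.terminate_instances(InstanceIds=[instance_id])
```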

What are the four data center tiers?

Data centers are built around Service Level Agreements (SLAs) that account for the potential risk of service disruption over a calendar year. To reduce downtime, a data center deploys more redundant resources for greater reliability (for example, four independent power circuits in the facility instead of two). Uptime is expressed as a percentage, often described in "nines," referring to the number of 9s in the uptime figure, as in "four nines," or 99.99%.

Data centers are classified into four tiers:

  • Tier 1: No more than 29 hours of potential downtime in a calendar year (99.671% availability).
  • Tier 2: No more than 22 hours (99.741%).
  • Tier 3: No more than 1.6 hours (99.982%).
  • Tier 4: No more than 26.3 minutes (99.995%).

As you can see, there is a big difference between Tier 1 and Tier 4 classifications, and as you’d expect, there can be huge cost differences between tiers.
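For reference, the short calculation below (a sketch, assuming a 365-day year) shows how each tier's availability percentage translates into maximum downtime per calendar year.

```python
# A short, illustrative calculation (assuming a 365-day year) that converts
# each tier's availability percentage into maximum downtime per calendar year.
# The tier names and percentages come from the list above.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

tiers = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, availability in tiers.items():
    downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{tier}: {availability}% uptime -> about {downtime_hours:.1f} hours "
          f"({downtime_hours * 60:.1f} minutes) of downtime per year")
```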

What is hyperconverged infrastructure?

The traditional data center is built on a three-tier infrastructure with discrete blocks of compute, storage, and network resources allocated to support specific applications. In a hyperconverged infrastructure (HCI), all three tiers are combined into a single building block called a node. Multiple nodes can be grouped together to form a pool of resources that can be managed through a software layer.

Part of the appeal of HCI is that it combines storage, computing, and networking into a single system to reduce complexity and streamline deployments in data centers, remote branch offices, and edge locations.
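As a rough illustration of the pooling idea (not any vendor's actual HCI software), the sketch below models each node as a bundle of compute, storage, and network capacity and aggregates several nodes into a single logical pool; the node sizes are assumed values.

```python
# A minimal sketch of the HCI idea: each node bundles compute, storage, and
# networking, and a software layer aggregates the nodes into one resource pool.
# The fields and figures here are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Node:
    cpu_cores: int
    storage_tb: float
    network_gbps: int

def pool_capacity(nodes):
    """Aggregate individual nodes into a single logical resource pool."""
    return {
        "cpu_cores": sum(n.cpu_cores for n in nodes),
        "storage_tb": sum(n.storage_tb for n in nodes),
        "network_gbps": sum(n.network_gbps for n in nodes),
    }

# Three identical building blocks grouped into one cluster.
cluster = [Node(cpu_cores=64, storage_tb=24.0, network_gbps=25) for _ in range(3)]
print(pool_capacity(cluster))  # {'cpu_cores': 192, 'storage_tb': 72.0, 'network_gbps': 75}
```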

What is Data Center Modernization?

Historically, the data center was viewed as a separate set of equipment serving specific applications. As each application required more resources, new equipment had to be purchased, downtime was needed to deploy it, and the demands on physical space, power, and cooling kept growing.

With the development of virtualization technologies, that perspective has changed. Today, the data center is viewed holistically as a pool of resources that can be logically partitioned and, as a bonus, used more efficiently to serve multiple applications. As with cloud services, application infrastructures containing servers, storage, and networks can be configured on the fly from a single screen. Better use of hardware makes for more efficient, greener data centers that need less cooling and power.
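As a simple illustration of logically partitioning a pooled resource (using assumed, hypothetical resource figures rather than any real hypervisor API), the sketch below carves one physical host's capacity into virtual machines for several applications and reports the remaining headroom.

```python
# A minimal sketch of logical partitioning: a physical host's capacity is
# divided among virtual machines for several applications, and the remainder
# stays available for future workloads. All figures are assumed.

HOST = {"cpu_cores": 48, "ram_gb": 512}

# Virtual machines requested by different applications (illustrative sizes).
vms = [
    {"name": "web", "cpu_cores": 8, "ram_gb": 32},
    {"name": "db", "cpu_cores": 16, "ram_gb": 128},
    {"name": "analytics", "cpu_cores": 12, "ram_gb": 96},
]

used_cpu = sum(vm["cpu_cores"] for vm in vms)
used_ram = sum(vm["ram_gb"] for vm in vms)

print(f"Allocated: {used_cpu}/{HOST['cpu_cores']} cores, {used_ram}/{HOST['ram_gb']} GB RAM")
print(f"Headroom: {HOST['cpu_cores'] - used_cpu} cores, {HOST['ram_gb'] - used_ram} GB RAM")
```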

What is the role of AI in the Data Center?

Artificial intelligence (AI) enables algorithms to take on the traditional role of data center infrastructure management (DCIM): monitoring power distribution, cooling efficiency, server workloads, and cyber threats in real time, and automatically making adjustments to improve efficiency. AI can move workloads to underutilized resources, detect potential component failures, and balance resources across the pool, all with little human interaction.
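The sketch below is a purely illustrative stand-in for that kind of automation (not any actual DCIM or AI product): it reads per-node utilization telemetry and suggests moving workloads from overloaded nodes to underutilized ones. The node names, figures, and thresholds are hypothetical.

```python
# A minimal, illustrative sketch of automated workload rebalancing: monitor
# per-node utilization and suggest moving workloads from overloaded nodes to
# underutilized ones. Node names, thresholds, and figures are hypothetical.

HIGH_WATERMARK = 0.80   # rebalance nodes running above 80% utilization
LOW_WATERMARK = 0.40    # prefer targets running below 40% utilization

# Utilization telemetry as a fraction of capacity (would come from sensors/agents).
utilization = {"node-a": 0.92, "node-b": 0.35, "node-c": 0.55}

def rebalance(util):
    """Yield (source, target) pairs suggesting where workloads should move."""
    overloaded = [n for n, u in util.items() if u > HIGH_WATERMARK]
    underused = sorted((n for n, u in util.items() if u < LOW_WATERMARK),
                       key=util.get)
    for src, dst in zip(overloaded, underused):
        yield src, dst

for src, dst in rebalance(utilization):
    print(f"Suggest migrating workload from {src} to {dst}")
```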

Future of the data center

The data center is far from obsolete. CBRE, one of the largest commercial real estate services and investment firms, says the North American data center market grew capacity by 17% in 2021, largely driven by hyperscalers like AWS and Azure, as well as social media giant Meta.

Businesses are generating more data every day, whether it's business process data, customer data, IoT data, OT data, data from patient-monitoring devices, or something else. And they want to perform analytics on that data, whether at the edge, on-premises, in the cloud, or in a hybrid model. Companies may not be physically building new centralized data centers, but they are upgrading their existing facilities and expanding their data center footprint to edge locations.

Going forward, demand for autonomous vehicles, blockchain, virtual reality, and metaverse technologies will only drive further growth in data centers.


Copyright © 2022 IDG Communications, Inc.

Ramon J. Espinoza