A data center is a physical facility that private companies and governments use to store and share applications and data. Most businesses depend on the data center’s reliability and security to run their IT operations optimally.
However, not all data centers are the same: their design is based on a network of computing and storage resources that enables the delivery of shared applications and data.
The earliest data centers, built in the 1940s to house early computers such as ENIAC, required a great deal of power and a carefully controlled environment to function properly. They were expensive and mostly used for military purposes.
The number of data centers across the world increased drastically during the 1990s. In order to establish a presence on the internet, several organizations built large facilities called Internet Data Centers, which offered advanced capabilities such as crossover backup.
Today’s data centers are far different: technology has shifted from conventional physical servers to virtual networks that support applications and workloads across pools of physical infrastructure.
The current infrastructure allows applications and data to exist and link across multiple data centers and private/public clouds. When a website/app is hosted in the cloud, it uses data center resources from the cloud provider.
Crucial Components Of A Data Center
A data center comprises numerous key components, such as different types of servers, storage systems, switches, routers, and application delivery controllers. When installed and implemented properly, they provide computing resources, storage infrastructure, and network infrastructure that drive applications.
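As a rough mental model (not any vendor’s schema), the short Python sketch below treats a data center as a collection of racks holding those component types; every class name, field, and count is invented purely for illustration.

```python
# A minimal mental model: a data center as a collection of racks, each
# holding the component types listed above. Names and counts are invented.
from dataclasses import dataclass, field

@dataclass
class Rack:
    servers: int = 0                    # compute resources
    storage_arrays: int = 0             # storage infrastructure
    switches: int = 0                   # network infrastructure
    routers: int = 0
    app_delivery_controllers: int = 0

@dataclass
class DataCenter:
    name: str
    racks: list = field(default_factory=list)

    def total(self, component: str) -> int:
        """Sum one component type across every rack."""
        return sum(getattr(rack, component) for rack in self.racks)

dc = DataCenter("example-dc",
                racks=[Rack(servers=42, storage_arrays=4, switches=2),
                       Rack(servers=40, routers=1, app_delivery_controllers=1)])
print(dc.total("servers"))   # -> 82
```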
Basic Standards For Data Center Infrastructure
ANSI/TIA-942-A is a widely adopted standard for data center infrastructure. It aligns with the four tiers defined by the Uptime Institute standard (the sketch after this list converts each tier’s availability into yearly downtime):
Tier I. Basic Capacity: includes an uninterruptible power supply (UPS) to handle power sags, spikes, and short outages. It protects against disruption from human error but not against unexpected outages or equipment failures.
Tier II. Redundant Capacity: provides better protection against disruptions and more room for maintenance. Redundant power and cooling components can be removed for maintenance without shutting down IT operations.
Tier III. Concurrently Maintainable: adds redundant distribution paths to serve the critical environment. Any component can be shut down and removed without impacting IT operations.
Tier IV. Fault-Tolerant: allows any production capacity to be insulated from almost all types of failure. The system isn’t affected by disruption from planned or unplanned events. With zero single points of failure, this level of data center guarantees the highest uptime (99.995%).
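To make the tier differences more tangible, here is a small Python sketch that converts availability percentages into allowed downtime per year. Only the 99.995% Tier IV figure comes from the text above; the other percentages are commonly published estimates and should be treated as assumptions.

```python
# Convert availability targets into allowed downtime per year.
# Only the Tier IV figure (99.995%) is cited in the article; the other
# values are widely published estimates, used here as assumptions.
MINUTES_PER_YEAR = 365 * 24 * 60

tier_uptime = {
    "Tier I":   0.99671,
    "Tier II":  0.99741,
    "Tier III": 0.99982,
    "Tier IV":  0.99995,
}

for tier, uptime in tier_uptime.items():
    downtime_min = (1 - uptime) * MINUTES_PER_YEAR
    print(f"{tier}: {uptime:.3%} uptime ≈ {downtime_min:.0f} min downtime/year")
```

Running the loop shows why the tiers matter in practice: Tier I allows roughly 29 hours of downtime per year, while Tier IV allows only about 26 minutes.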
To better understand this technology, we need to explore different types of data centers and their purposes. Each has its own benefits, limitations, and requirements.
5. Colocation Data Center
China Unicom, one of the largest retail colocation data center operators
Example: China Unicom’s Global Center data center in Hong Kong
Who uses it: Midrange to large enterprises
Also known as simply colo, a colocation data center is a large facility that rents out rack space to businesses for their servers and other network devices. It is one of the most popular services used by organizations that may not have sufficient resources to maintain their own data center but still want to enjoy all the benefits.
A colocation facility provides space, power, cooling, and physical security for the servers. It also connects the hosted networking equipment to a variety of telecommunication and network service providers.
Companies with large geographic footprints can have their hardware located in multiple places. For example, a single organization may have servers in five or six different colocation data centers.
There are several benefits of using a colocation data center:
- Scalable: When your company’s reach is expanding, you can quickly add new servers and other crucial equipment to the facility.
- Location preference: You can choose a data center location that is closest to your customers.
- Predictable and lower costs: It costs much less to lease space in a colocation data center than to build your own facility. You can sign quarterly or annual contracts according to your budget.
- Reliable: Since colocation data centers are designed with a high specification for redundancy, they are extremely reliable.
- Peace of mind: The technical staff takes care of tedious tasks such as managing power, installing equipment, running cables, and other technical processes. This means you don’t need to worry about maintaining the servers yourself.
People often confuse a colocation data center with a colocation server rack and use the two terms interchangeably. However, they are two different arrangements.
In a colocation data center arrangement, the provider rents the entire facility to a single company, whereas in a colocation rack arrangement, the provider rents out rack space within the same data center to multiple organizations.
4. Managed Data Center
Example: Fully-managed IBM Cloud Services
Who uses it: Midrange to large enterprises
This type of data center model is deployed, managed, and monitored by a 3rd-party service provider. It offers all necessary features through a managed service platform.
The data center can be either completely or partially managed. In the former, all technical details and back-end data are handled and administered by the data center provider, whereas the latter gives businesses a certain level of administrative control over the data center implementation and services.
In general, the service provider performs the maintenance of all network components and services, upgrades operating systems and other system-level programs, and restores data/services in case of disrupting events.
These managed services can be sourced from a fixed data center hosting site, a colocation facility, or a cloud-based data center.
For example, IBM’s data center offers a wide range of managed services directly to its clients, such as managed security services, managed network services, and managed mobility and information services.
3. Enterprise Data Center
Google data center in Eemshaven, Netherlands | Image credit: Google
Example: Facebook’s Forest City Data Center in North Carolina
Who uses it: Large enterprises
The enterprise data center is a private facility designed for the sole purpose of supporting a single company. It can be located on-premises or at an off-site location, whichever is more convenient for the company.
For instance, if you are running a website from Canada and your target audience is students in the United States, you would prefer to build a data center in the US to reduce page-load times.
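As a rough back-of-envelope illustration of why distance matters, the sketch below estimates the theoretical minimum round-trip time from geographic distance, assuming signals travel through fiber at roughly 200,000 km/s (about two-thirds the speed of light) and ignoring routing, queuing, and server processing delays. The city distances are approximate and purely illustrative.

```python
# Back-of-envelope round-trip latency from geographic distance.
# Assumes ~200,000 km/s propagation in fiber and ignores routing hops,
# queuing, and server processing time.
FIBER_KM_PER_MS = 200  # roughly 200 km per millisecond, one way

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Approximate straight-line distances (illustrative values)
print(f"Toronto -> New York (~550 km): {min_rtt_ms(550):.1f} ms RTT")
print(f"Toronto -> Los Angeles (~3,500 km): {min_rtt_ms(3500):.1f} ms RTT")
print(f"Toronto -> Sydney (~15,500 km): {min_rtt_ms(15500):.1f} ms RTT")
```

Even under these idealized assumptions, an ocean-spanning round trip costs well over 100 ms before the server does any work, which is why providers place facilities near their audience.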
An enterprise facility is defined more by its ownership and purpose than its size and capacity. It is well suited for organizations with unique network requirements or those that make enough revenue to take advantage of economies of scale.
The enterprise data center usually contains multiple data centers, each with the purpose of sustaining key functions. These sub-data centers can be further classified into three groups:
- Internet data center: supports the devices and servers essential for web applications.
- Extranet: supports business-to-business transactions within the enterprise data center network. Typically, these services are accessed over private WAN links or secure VPN connections.
- Intranet: holds the applications and data within the enterprise data center. This data is used for research and development, manufacturing, marketing, and other core business functions.
The major benefit of this type of data center is that it is easy for companies to track crucial parameters (such as bandwidth and power usage) and keep their software (such as monitoring tools) updated. This makes it simpler to estimate upcoming needs and scale appropriately (see the capacity-planning sketch below).
However, it all comes with a cost: Developing enterprise data center facilities requires large capital investments, labor, maintenance of equipment, and ongoing expenditures of time.
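Returning to the monitoring benefit mentioned above, the following toy sketch shows the kind of capacity planning that tracking such parameters makes possible; the bandwidth figures, linear growth model, and 80% threshold are all invented for illustration.

```python
# Toy capacity-planning sketch: project next quarter's peak bandwidth from
# recent monthly peaks using simple linear growth. All numbers are made up;
# real planning tools are far more sophisticated.
monthly_peak_gbps = [42, 45, 49, 53, 58]   # hypothetical measurements

# Average month-over-month growth
growth = sum(b - a for a, b in zip(monthly_peak_gbps, monthly_peak_gbps[1:])) / (
    len(monthly_peak_gbps) - 1
)

projected = monthly_peak_gbps[-1] + 3 * growth  # three months ahead
capacity_gbps = 80                              # assumed provisioned capacity

print(f"Projected peak in 3 months: {projected:.0f} Gbps")
if projected > 0.8 * capacity_gbps:
    print("Plan to add capacity before utilization crosses 80%.")
```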
Read: What Is A Proxy Server And Its Usage?
2. Cloud Data Center
Example: Cloud services offered by Google, IBM, Amazon (AWS), Microsoft (Azure), etc.
Who uses it: Most organizations of any size
With a cloud data center, the actual hardware is run and managed by the cloud company, often with the help of a 3rd party managed services provider. It allows clients to run websites/applications and manage data within a virtual infrastructure running on cloud servers.
The data is fragmented and duplicated across numerous locations as soon as it is uploaded to cloud servers. In case of any unexpected events, the cloud provider makes sure that there is a backup of your backup as well.
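Conceptually, that fragment-and-duplicate behavior looks something like the toy Python sketch below; the region names, chunk size, and functions are all hypothetical and do not reflect any real provider’s API.

```python
# Conceptual sketch of chunking an uploaded object and replicating the
# chunks across regions. Region names and functions are hypothetical.
from collections import defaultdict

REGIONS = ["us-east", "eu-west", "ap-south"]   # invented region names
CHUNK_SIZE = 4                                  # tiny, for demonstration only

region_store = defaultdict(dict)  # region -> {(object_name, chunk_index): bytes}

def upload(name: str, data: bytes, copies: int = 2) -> None:
    """Split data into chunks and store each chunk in `copies` regions."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for idx, chunk in enumerate(chunks):
        for region in REGIONS[:copies]:
            region_store[region][(name, idx)] = chunk

def restore(name: str) -> bytes:
    """Rebuild the object from whichever regions still hold its chunks."""
    pieces = {}
    for store in region_store.values():
        for (obj, idx), chunk in store.items():
            if obj == name:
                pieces[idx] = chunk
    return b"".join(pieces[i] for i in sorted(pieces))

upload("report.txt", b"quarterly sales figures")
del region_store["us-east"]          # simulate a regional outage
print(restore("report.txt"))         # the data survives via the second copy
```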
A few cloud service companies offer customized clouds, giving each client exclusive access to its own cloud environment, known as a private cloud. Public cloud providers, on the other hand, make resources available to the general public over the internet. Amazon Web Services and Microsoft Azure are among the most popular public cloud players.
Cloud services have several advantages over on-premises data centers. With the cloud, a company pays only for the hardware resources it actually uses and doesn’t need to worry about regular server updates, security, cooling costs, and so on. These costs are bundled into the monthly subscription fee.
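A toy comparison makes the usage-based pricing point concrete; the hourly rate, VM count, and hours below are invented figures, not real cloud prices.

```python
# Toy comparison of pay-as-you-go cloud spend versus always-on capacity.
# All prices and usage figures are invented for illustration.
HOURLY_RATE_PER_VM = 0.10          # hypothetical $/hour per virtual machine
HOURS_PER_MONTH = 730

def cloud_monthly_cost(vm_hours: float) -> float:
    """Pay only for the hours actually consumed."""
    return vm_hours * HOURLY_RATE_PER_VM

# e.g. 20 VMs that only run during business hours (~9 h/day, 22 days/month)
used_hours = 20 * 9 * 22
print(f"Cloud (usage-based):  ${cloud_monthly_cost(used_hours):,.2f}/month")

# Keeping the same 20 machines running around the clock costs far more,
# before even counting the power, cooling, and staffing that on-premises
# hardware would add on top.
always_on_hours = 20 * HOURS_PER_MONTH
print(f"Equivalent always-on: ${cloud_monthly_cost(always_on_hours):,.2f}/month")
```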
Read: 8 Different Types Of Data Breaches With Examples
1. Edge Data Center
Still in the developing phase
Edge data centers are smaller facilities located close to the population they serve. Characterized by their size and connectivity, these data centers allow organizations to deliver content and services to local users with minimal latency.
They play a crucial role in edge computing architectures, which bring data storage and computation closer to where they are needed. Edge data centers are expected to support IoT (Internet of Things) devices and autonomous vehicles by providing additional processing capacity and improving the customer experience.
While a large portion of data collected by IoT and autonomous vehicles will be processed locally, some of it will be transmitted back to a data center. Edge facilities are connected to various other data centers or a larger central data center.
Compared to enterprise data centers running at full capacity, multiple smaller edge data centers can distribute heavy traffic loads more efficiently. They can cache high-demand content, minimizing the delay between a user’s request and the server’s response. Furthermore, distributing traffic loads also increases network reliability.
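The content-caching idea can be illustrated with a minimal sketch of an edge cache that serves repeat requests locally and falls back to a central data center on a miss; the cache size, content names, and fetch function are all made up for illustration.

```python
# Minimal edge-cache sketch: serve popular content from a small local
# cache (low latency) and fall back to the central data center on a miss.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity: int = 3):
        self.capacity = capacity
        self._items = OrderedDict()   # key -> content, kept in LRU order

    def get(self, key: str, fetch_from_origin) -> str:
        if key in self._items:
            self._items.move_to_end(key)          # mark as recently used
            return self._items[key]               # fast local hit
        content = fetch_from_origin(key)          # slow trip to central DC
        self._items[key] = content
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)       # evict least recently used
        return content

def origin_fetch(key: str) -> str:
    return f"content for {key} (fetched from central data center)"

cache = EdgeCache(capacity=2)
for request in ["video-1", "video-1", "video-2", "video-3", "video-1"]:
    cache.get(request, origin_fetch)
print(list(cache._items))   # the two most recently requested items stay cached
```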
For organizations trying to enhance regional network performance or penetrate a local market, these facilities are invaluable.
Read: What Is A Server? 15 Different Types of Server And Their Uses
Market
At present, nearly 70% of enterprise workloads run in corporate data centers, colocation data centers host about 20%, and approximately 9% run in the cloud.
According to Global Market Insights, the edge data center market size stood at $5.5 billion in 2019. It is expected to grow at a CAGR of 23% between 2020 and 2026. The ever-growing demand for high computational resources is encouraging service providers to host data centers at edge locations for low latency and reliable connections.