A guide to today’s data centre ecosystem

10 March 2026 | Strategy, Transaction Support

Alessandro Ravagnolo | Richard Morgan | Daniel Ponte Fernández

Article | Data centres


Data centres play a central role in the digital economy, supporting the systems that process, store and distribute information. As internet traffic, cloud adoption and artificial intelligence (AI) have grown, the industry has developed a range of data-centre facility types that meet increasingly specialised requirements.


This article explains the four main types of data centres. Each category serves different customers and workloads. Understanding these differences helps to clarify how the broader ecosystem has evolved over time and why this matters as investors, data centre operators and other players in the value chain navigate the next phase of digital infrastructure growth.

Figure 1: A snapshot of the four main types of data centres

Interconnection

  • Use case: manage and exchange traffic to support the internet
  • When they started developing: 1990s
  • End users/customers: telecoms operators, content delivery networks (CDNs), cloud service providers
  • Number of customers per facility: 20–100
  • Geographical distribution: strategic network crossroads
  • Typical size of facilities: 2–20MW
  • Contract length: 2–5 years

Hyperscale cloud

  • Use case: support cloud services
  • When they started developing: 2000s, mainstream from the 2010s
  • End users/customers: Tier 1 and Tier 2 cloud service providers
  • Number of customers per facility: 1–3
  • Geographical distribution: mostly within the vicinity of large urban areas and established cloud availability zones; now spreading to Tier 2 and Tier 3 cities in larger markets
  • Typical size of facilities: >100MW
  • Contract length: 8–15 years

Enterprise co-location

  • Use case: host enterprises’ IT infrastructure in specialised and efficient facilities
  • When they started developing: 2000s
  • End users/customers: retail: SMEs and large enterprises; wholesale: managed service providers
  • Number of customers per facility: retail: 20–100; wholesale: 1–5
  • Geographical distribution: well-connected outskirts of business and dense urban areas; business districts
  • Typical size of facilities: 2–20MW
  • Contract length: retail: 1–5 years; wholesale: 3–10 years

AI

  • Use case: training of large language models (LLMs); inference
  • When they started developing: 2020s
  • End users/customers: hyperscalers, neoclouds, enterprises using AI tools
  • Number of customers per facility: 1–3
  • Geographical distribution: energy-rich rural areas; cloud regions for inference
  • Typical size of facilities: 100–1000MW
  • Contract length: 5–10 years for neocloud players; longer for traditional cloud service providers
Source: Analysys Mason

Interconnection data centres

Interconnection data centres enable the efficient exchange of traffic between networks and digital service providers. They originated as neutral facilities designed to support the routing of internet traffic when telecoms operators and the academic research community first needed dedicated facilities for peering. Over time, these sites became key points where networks could interconnect, either through internet exchange points (IXPs) or through private peering agreements between carriers.

Historically, major interconnection hubs developed where long-haul networks and high-volume users converged. Europe’s primary hubs are based around major financial centres – Frankfurt, London, Amsterdam, Paris and Dublin (FLAP-D). In the USA, Northern Virginia (Ashburn) stands as the leading example, serving as the core East Coast interconnection hub. Over time, however, demand has expanded far beyond these core cities. As internet traffic volumes rise and distribution becomes more local, new interconnection points are emerging in secondary (Tier 2) markets, often positioned at the end of submarine cables. This decentralisation reduces latency, improves resilience and lowers transport costs.

Cloud providers, CDNs and other digital platforms now rely heavily on interconnection sites to exchange traffic efficiently. The value of these facilities increases as more networks and service providers co-locate within them, creating powerful network effects. As these ecosystems scale, they attract even more interest because the range of potential interconnection partners expands exponentially. A facility can operate successfully with only a few dozen customers if those customers generate enough traffic and the location is hard to replicate. However, the largest hubs, such as those in the FLAP-D markets, now host more than 100 carriers, cloud providers, CDNs and digital platforms.

Interconnection data centres vary in size from single-digit megawatts to the low tens of megawatts, because their limiting factor is customer density, not total power. These facilities earn steady income from renting space and power, but their strongest profitability comes from high-margin interconnection services such as direct customer-to-customer links (cross-connects), dedicated connections to cloud platforms (cloud onramps), and IXP ports, which allow customers to exchange traffic efficiently within the facility.

Operator-neutral facilities, owned by independent data centre providers, play a key role in this segment: being both carrier-neutral and cloud-neutral, they offer equal access to all networks and cloud platforms.

Hyperscale cloud data centres

Hyperscale cloud data centres support large-scale computing workloads and underpin the rapid expansion of cloud services globally. Over the past 10–15 years, major cloud platforms – particularly AWS, Microsoft Azure and Google Cloud – have driven substantial investment in dedicated hyperscale infrastructure.

These facilities are designed for significant economies of scale. Early hyperscale data centres had capacities measured in tens of megawatts. Today, however, a single cloud region may include several availability zones with a combined planned capacity in the hundreds of megawatts. Hyperscalers often build and own these facilities, but a meaningful share of capacity is delivered through specialist data centre platforms under long-term wholesale agreements. Even when outsourced, hyperscale deployments are often single-tenant and customised to the cloud operator’s specifications for power density, cooling and security.

Cloud providers initially secured capacity near major interconnection hubs to ensure access to diverse networks. However, the combination of growing demand, constraints on land and grid availability, and the need for regional resilience has pushed new cloud regions into Tier 2 and Tier 3 markets. As a result, hyperscale development is becoming more geographically distributed.

From an investor perspective, hyperscale facilities are capital intensive and typically supported by long-term commitments from a small number of very large customers. This provides strong visibility of future occupation, but also results in customer concentration risk and exposure to the strategic decisions of a limited tenant base.

Looking ahead, incremental demand will not come solely from the large US-based hyperscalers. Regional cloud providers are likely to contribute more meaningfully to future capacity needs. These regional players are unlikely to reach the scale of AWS, Microsoft Azure or Google Cloud, but they are expected to complement them and capture a distinct share of demand, particularly in markets where data sovereignty, local compliance or specialised workloads play a decisive role.

Enterprise co-location data centres

Enterprise co-location data centres support organisations that want to outsource the operation of their IT infrastructure while retaining control over their hardware, data and applications. Instead of building and operating their own on-premises data centres, enterprises place their equipment in third-party facilities that offer resilient power, advanced cooling, robust physical security and access to diverse connectivity.

Beyond cost efficiency, enterprise co-location addresses broader requirements, including regulatory compliance, data sovereignty, resilience, cyber security and performance. As enterprise IT structures become more complex, and hybrid cloud strategies become the norm, enterprises increasingly value the combination of control, flexibility and connectivity that co-location provides.

These facilities are geographically widespread to ensure proximity to enterprise customers. Site sizes typically range from a few megawatts to 10–20MW, depending on local demand.

Enterprise co-location generally follows two commercial models: 

  • Retail co-location, serving many customers with smaller deployments. This model generates higher revenue per MW but involves greater operational and commercial complexity.
  • Wholesale co-location, serving fewer customers with larger footprints, often under longer-term agreements. This model offers stable revenue but lower unit pricing.

From an investor perspective, revenue is driven primarily by co-location services, supplemented by connectivity and ancillary services. Power is often structured as a pass-through component, which can include a margin. 

AI data centres

AI is reshaping data centre design and accelerating demand for specialised infrastructure. Major technology companies – including Google, Microsoft, Amazon, Meta and Oracle – are scaling up AI computing capacity within their cloud environments. In parallel, AI-native companies such as OpenAI, Anthropic and xAI are contracting substantial graphics processing unit (GPU)-based resources to train and develop their own proprietary models. In some cases, hyperscalers also act as strategic investors in these enterprises. At the same time, a new class of neocloud providers has emerged, specialised in high-performance GPU infrastructure optimised for AI training and advanced workloads (GPU as a service). Companies such as CoreWeave, Lambda and Crusoe provide customers with on-demand access to high-performance GPUs without the need to own or manage dedicated hardware.

AI workloads fall into two broad categories:

  • Training, which involves developing LLMs and other foundation models using massive datasets. Training requires extremely high-performance GPUs, very dense power configurations and advanced cooling systems. These workloads are highly energy-intensive, run over extended periods and are relatively insensitive to end-user latency.
  • Inference, which involves running trained models to generate outputs in real time or near real time. Inference workloads are typically more sensitive to latency, particularly for user-facing applications, and require high availability and continuous operation.

Training clusters are generally less latency-sensitive and therefore tend to be located in large, centralised facilities where favourable conditions exist, with abundant land and access to large-scale power, including renewable energy sources. This is driving the development of very large, purpose-built campuses, often hundreds of megawatts or more, designed around high-density racks and the technical specifications of a single anchor customer. 

Inference infrastructure follows a different logic. Because many AI services are sensitive to latency, particularly user-facing applications, they are deployed close to existing cloud regions or closer to the end user. As AI-enabled services gain traction, aggregate inference capacity is expected to surpass training capacity in many markets.

Looking ahead, interest is growing in smaller, domain-specific models trained on proprietary data. These workloads may be deployed closer to enterprises – or potentially even on premises – to alleviate concerns over sensitive data. This could renew interest in edge data centres, which sit close to where data is generated, for specific use cases that benefit from ultra-low latency or localised processing. 

Understanding how data centres differ is essential for stakeholders driving the next phase of digital infrastructure growth

The data centre landscape cannot be defined by a single model. Interconnection sites, hyperscale cloud campuses, enterprise co-location facilities and AI-optimised data centres all play distinct roles in supporting modern digital services. Understanding these differences is essential for investors, operators and enterprises navigating the next phase of digital infrastructure growth.

Analysys Mason brings commercial, technical and financial expertise across all four data centre types, supporting more than 400 clients and over 1200 transactions since 2020. Our independence and deep sector knowledge make us a trusted partner for investors assessing opportunities in the data centre market. 

Authors

Alessandro Ravagnolo

Partner, expert in transaction services

Richard Morgan

Partner, expert in transaction support

Daniel Ponte Fernández

Principal, expert in transaction services