Telecoms operators and other enterprises will increasingly opt for private cloud AI infrastructure
Telecoms operators and other enterprises have a range of options for deploying AI workloads using either public or private infrastructure. The least privacy-conscious enterprises can use large language models (LLMs) hosted by generative AI (GenAI) model developers such as OpenAI and DeepSeek. Privacy concerns arise here because these developers may store user prompts, which could then be used to train future LLMs or be put to other uses outside the enterprise's control. Enterprises can instead deploy AI models on public cloud infrastructure, but data privacy and security concerns remain with this infrastructure type. The most privacy-conscious enterprises may therefore wish to deploy AI models on private cloud AI infrastructure.
This article is based on Analysys Mason’s Private cloud AI infrastructure: requirements and strategies for telecoms operators and other enterprises.
Author

Joseph Attwood
Analyst