Building an End-to-End AI Network to Enable Comprehensive AI Capabilities Across All Scenarios

During the 7th Future Network Development Conference, Mr. Peng Song, Senior Vice President and President of ICT Strategy and Marketing at Huawei, delivered a keynote speech titled “Building an End-to-End AI Network to Enable Comprehensive AI Capabilities.” He emphasized that network innovation in the era of artificial intelligence will focus on two major goals: “Network for AI” and “AI for Network,” creating an end-to-end network for cloud, network, edge, and endpoint across all scenarios.

Network innovation in the AI era comprises two main objectives. “Network for AI” involves creating a network that supports AI services, enabling AI large models to cover scenarios from training to inference, from dedicated to general-purpose, and spanning the entire spectrum of endpoint, edge, and cloud AI. “AI for Network” uses AI to empower networks, making network devices smarter, networks highly autonomous, and operations more efficient.

By 2030, global connections are expected to reach 200 billion, data center traffic is projected to grow 100-fold within a decade, IPv6 address penetration is projected to reach 90%, and AI computing power is expected to increase 500-fold. Meeting these demands requires a three-dimensional, ultra-wide, intelligence-native network with deterministic latency guarantees, covering all scenarios: cloud, network, edge, and endpoint. This encompasses data center networks, wide area networks, and the networks reaching edge and endpoint locations.

Future Cloud Data Centers: Evolving Computing Architectures to Support the AI Large Model Era’s Tenfold Increase in Computing Power Demand

Over the next decade, innovation in data center computing architecture will revolve around general-purpose computing, heterogeneous computing, ubiquitous computing, peer-to-peer computing, and storage-compute integration. Data center compute buses and networks will converge at the link layer, from the chip level up to the data center (DC) level, providing high-bandwidth, low-latency interconnects.

Future Data Center Networks: Innovative Net-Storage-Compute Fusion Architecture to Unleash Data Center Cluster Computing Potential

To overcome challenges in scalability, performance, stable operation, cost, and communication efficiency, future data center networks must be deeply integrated with computing and storage, creating diverse computing clusters.

Future Wide Area Networks: Three-Dimensional Ultra-Wide and Application-Aware Networks for Distributed Training Without Compromising Performance

Innovations in wide area networks will revolve around IP+optical from four directions: ultra-large-capacity all-optical networks, optical-electrical synergy without interruption, application-aware experience assurance, and intelligent lossless network-compute fusion.

Future Edge and Endpoint Networks: Full Optical Anchoring + Elastic Bandwidth to Unlock Last-Mile AI Value

By 2030, full optical anchoring will extend from the backbone to the metropolitan area, achieving three-tier latency circles of 20ms in the backbone, 5ms within the province, and 1ms in the metropolitan area. At edge data centers, elastic bandwidth data express lanes will provide enterprises with data express services ranging from Mbit/s to Gbit/s.
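As a rough sense of what these latency circles imply physically, the sketch below estimates the maximum fiber distance each tier allows. It is an illustration only, assuming a signal speed in optical fiber of roughly 200,000 km/s (vacuum light speed divided by a refractive index of about 1.5) and counting propagation delay alone, with switching and queuing delays ignored; the function name and tier labels are ours, not from the keynote.

```python
# Illustrative only: approximate one-way fiber reach for each latency tier.
# Assumption: signal speed in fiber ~200,000 km/s, i.e. ~200 km per millisecond;
# switching, queuing, and processing delays are ignored.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~200 km of fiber traversed per millisecond


def max_fiber_reach_km(one_way_latency_ms: float) -> float:
    """Upper bound on one-way fiber distance within a latency budget."""
    return one_way_latency_ms * SPEED_IN_FIBER_KM_PER_MS


# The three latency circles from the text: backbone 20 ms, province 5 ms, metro 1 ms.
for tier, latency_ms in [("backbone", 20), ("provincial", 5), ("metro", 1)]:
    reach = max_fiber_reach_km(latency_ms)
    print(f"{tier}: {latency_ms} ms -> up to ~{reach:.0f} km of fiber")
```

Under these assumptions, a 1 ms metro circle corresponds to at most ~200 km of fiber, a 5 ms provincial circle to ~1,000 km, and a 20 ms backbone circle to ~4,000 km; real budgets are tighter once equipment and queuing delays are included.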

Furthermore, “AI for Network” presents five major innovation opportunities: communication network large models, AI for data center networks (DCN), AI for wide area networks, AI for edge and endpoint networks, and end-to-end automation at the network-brain level. Through these five innovations, “AI for Network” is expected to realize the vision of future networks that are automatic, self-healing, self-optimizing, and autonomous.

Looking ahead, achieving the innovative goals of future networks relies on an open, cooperative, and mutually beneficial AI ecosystem. Huawei hopes to further strengthen cooperation with academia, industry, and research to jointly build the future AI network and move towards an intelligent world in 2030!


Post time: Aug-29-2023