CXL Ecosystem Enabling Memory Fabrics

Compute Express Link (CXL) dramatically changes the way memory is used in computer systems. Tutorials at the IEEE Hot Chips conference and at the recent SNIA Storage Developer Conference explored how CXL works and how it will change computing. In addition, recent announcements by Colorado startup IntelliProp about its Omega Memory Fabric chips are paving the way for using CXL to enable memory aggregation and composable infrastructure.

Initial implementations of CXL were intended to expand memory for individual CPUs, but CXL will have its greatest impact in sharing many different types of memory technologies (DRAM and persistent memory) between CPUs. The image below (from the CXL Hot Chips tutorial) shows the different ways memory can be shared with CXL.

As Yang Seok Ki, Vice President at Samsung Electronics, said at SNIA SDC, CXL is a coherent interconnect that supports processor cache coherency, memory expansion, and accelerators. CXL versions 1.0 and 2.0 (which work with PCIe 5.0) were released earlier, and in early August, at the Flash Memory Summit, CXL version 3.0 was released, which works with faster PCIe 6.0 connectivity. CXL 3.0 also enables multi-level switching, memory fabrics, and direct peer-to-peer memory access.

The presentation also explained how CXL 2.0 enables near memory, available to the CPU through a local CXL connection, and far memory, reached through a CXL 3.0 switch network, as shown below.

Near memory is directly connected to the CPU. Some of the first CXL products available are memory expanders that provide additional memory for the CPU. CXL opens the door to memory tiers that offer performance and cost tradeoffs similar to those of storage tiers.
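To make the tiering idea concrete, the toy Python sketch below models a fastest-first placement policy across near (direct-attached) and far (switch-attached) memory tiers. The tier names, latencies, costs, and capacities are invented for illustration only and do not reflect any real product's figures.

```python
from dataclasses import dataclass

@dataclass
class MemoryTier:
    name: str
    latency_ns: float   # hypothetical access latency
    cost_per_gb: float  # hypothetical relative cost
    capacity_gb: int

# Invented, order-of-magnitude numbers purely for illustration.
tiers = [
    MemoryTier("near DRAM (direct-attached)", 100, 4.0, 128),
    MemoryTier("CXL-attached DRAM (one switch hop)", 300, 3.0, 512),
    MemoryTier("CXL-attached SCM (fabric)", 1000, 1.0, 2048),
]

def place(working_set_gb: int) -> list[tuple[str, int]]:
    """Greedily fill the fastest tiers first, spilling to slower ones."""
    placement = []
    remaining = working_set_gb
    for tier in tiers:  # tiers are listed fastest-first
        if remaining <= 0:
            break
        used = min(remaining, tier.capacity_gb)
        placement.append((tier.name, used))
        remaining -= used
    return placement

# A 400 GB working set overflows near DRAM into CXL-attached DRAM.
print(place(400))
```

The greedy spill mirrors how storage tiers are typically filled: the fastest tier absorbs as much of the working set as it can, and only the overflow lands in slower, cheaper tiers.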

IntelliProp just announced its Omega Memory Fabric chips. The chips implement the CXL standard along with the company's proprietary fabric management software and Network Attached Memory (NAM) system. IntelliProp also announced three Field Programmable Gate Array (FPGA) products that incorporate the Omega Memory Fabric. The company says that its memory-agnostic technology will help the adoption of composable memory, leading to significant improvements in data center power consumption and efficiency. The company says that the Omega Memory Fabric has the following features:

Omega Memory Fabric features, including the CXL standard

  • Dynamic multipath and memory allocation
  • End-to-end (E2E) security with AES-XTS 256
  • Supports non-tree peer-to-peer topologies
  • Scalable management for large deployments with multiple networks/subnets and distributed fabric managers
  • Direct Memory Access (DMA) that allows data to move between memory tiers efficiently without tying up CPU cores
  • Memory agnostic, and up to 10 times faster than RDMA
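The dynamic memory allocation feature above can be illustrated with a toy model of a fabric manager granting capacity from a shared pool to hosts on demand. This is a sketch of the general pooled-memory concept, not IntelliProp's actual software; the class, host names, and pool size are all invented.

```python
class FabricManager:
    """Toy model of pooled-memory allocation across hosts (all names invented)."""

    def __init__(self, pool_gb: int):
        self.free_gb = pool_gb
        self.grants = {}  # host name -> GB currently allocated

    def allocate(self, host: str, gb: int) -> bool:
        """Grant capacity from the shared pool; fail if the pool is exhausted."""
        if gb > self.free_gb:
            return False
        self.grants[host] = self.grants.get(host, 0) + gb
        self.free_gb -= gb
        return True

    def release(self, host: str, gb: int) -> None:
        """Return capacity to the pool so other hosts can use it."""
        returned = min(gb, self.grants.get(host, 0))
        self.grants[host] = self.grants.get(host, 0) - returned
        self.free_gb += returned

fm = FabricManager(pool_gb=1024)
fm.allocate("host-a", 512)        # succeeds: 512 GB left in the pool
fm.allocate("host-b", 768)        # fails: only 512 GB remain
fm.release("host-a", 256)         # host-a shrinks, pool grows back to 768 GB
print(fm.free_gb)                 # → 768
```

The point of the sketch is composability: capacity released by one host immediately becomes available to any other host on the fabric, rather than being stranded in a single server.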

The three FPGA solutions connect CXL devices to CXL hosts and comprise an adapter, a switch, and a fabric manager. IntelliProp says ASIC versions will be available in 2023. The company says the solutions allow data centers to increase performance, scale across tens to thousands of host nodes, and consume less power because data is transmitted with fewer hops, while enabling hybrid use of shared DRAM (fast memory) and shared SCM (slow memory).
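One common way to use such a DRAM/SCM hybrid is to keep frequently accessed ("hot") pages in fast DRAM and demote rarely touched ("cold") pages to SCM. The Python sketch below shows a minimal version of that placement policy; the page IDs and access counts are invented for illustration, and real systems would track access frequency in hardware or the OS.

```python
def assign_pages(access_counts: dict[int, int], dram_pages: int):
    """Place the most frequently accessed pages in fast DRAM, the rest in SCM.

    access_counts maps a page ID to how often it was accessed;
    dram_pages is how many pages fit in the fast tier.
    """
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    dram = set(ranked[:dram_pages])   # hottest pages go to DRAM
    scm = set(ranked[dram_pages:])    # colder pages spill to SCM
    return dram, scm

# Invented access counts: pages 0 and 2 are hot, 1 and 3 are cold.
counts = {0: 50, 1: 3, 2: 40, 3: 1}
dram, scm = assign_pages(counts, dram_pages=2)
print(sorted(dram), sorted(scm))  # → [0, 2] [1, 3]
```

In practice the policy would run periodically as access patterns shift, migrating pages between tiers; the DMA engines described above are what make such migrations cheap enough to do without stalling CPU cores.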

CXL is poised to change the way memory is used in computer architectures, according to a 2022 Hot Chips tutorial and talks at SNIA SDC. IntelliProp introduced its Omega Memory Fabric technology and three FPGA solutions to enable CXL memory fabrics.