White Paper

Effective resource utilization in PCIe Gen6: Shared flow control

Learn the implementation and verification requirements of shared flow control.


In PCIe® 6.0, the data rate doubles from 32 GT/s to 64 GT/s. PCIe remains a cost-effective, scalable interconnect that maintains backward compatibility with all previous generations while serving data-intensive markets and applications: data centers, artificial intelligence/machine learning computing, high-performance computing accelerators, and high-end SSDs, as well as automotive, IoT, and mil-aero. To deliver the maximum throughput, the entire protocol stack must fully utilize the bandwidth made available by the physical-layer speed.

Using this high-speed link efficiently requires more traffic: compared with PCIe 5.0, roughly twice the traffic is needed to saturate the 64 GT/s data rate. With this increase, maintaining quality of service (QoS) calls for enabling multiple virtual channels (VCs). PCIe uses a credit-based flow control mechanism, and supporting multiple VCs means allocating more buffer space per VC. The buffer requirement therefore doubles in PCIe 6.0, but increasing buffer space increases the hardware area and cost of the design. To solve this problem, shared flow control was introduced for FLIT mode in PCIe.

The shared resources of all active VCs form a combined, shared pool that can be drawn on as needed. If a VC's dedicated buffer space is exhausted, additional traffic can still be received as long as credits remain in the common shared pool.
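The dedicated-plus-shared credit accounting described above can be sketched in a few lines of Python. This is an illustrative model only: the class name, credit sizes, and borrow/return bookkeeping are assumptions for exposition, not the actual FLIT-mode credit encoding defined in the PCIe 6.0 specification.

```python
class SharedFlowControl:
    """Illustrative model: per-VC dedicated credits backed by a shared pool."""

    def __init__(self, dedicated_per_vc, shared_pool, num_vcs):
        self.dedicated = {vc: dedicated_per_vc for vc in range(num_vcs)}
        self.shared = shared_pool
        # Track how many credits each VC has borrowed from the shared pool,
        # so returned credits refill the pool before the dedicated counter.
        self.borrowed = {vc: 0 for vc in range(num_vcs)}

    def consume(self, vc, credits):
        """Accept traffic on `vc` if dedicated plus shared credits cover it."""
        if self.dedicated[vc] >= credits:
            self.dedicated[vc] -= credits
            return True
        if self.dedicated[vc] + self.shared >= credits:
            # Dedicated space is exhausted; draw the shortfall from the pool.
            from_shared = credits - self.dedicated[vc]
            self.dedicated[vc] = 0
            self.shared -= from_shared
            self.borrowed[vc] += from_shared
            return True
        return False  # not enough credits anywhere: traffic must wait

    def release(self, vc, credits):
        """Return credits as the receiver drains its buffers."""
        refill = min(credits, self.borrowed[vc])
        self.shared += refill
        self.borrowed[vc] -= refill
        self.dedicated[vc] += credits - refill


# Example: VC0 has 4 dedicated credits but can accept a 6-credit burst
# by borrowing 2 credits from the 8-credit shared pool.
fc = SharedFlowControl(dedicated_per_vc=4, shared_pool=8, num_vcs=2)
print(fc.consume(0, 6))  # True: 4 dedicated + 2 borrowed from the pool
print(fc.shared)         # 6 credits left in the shared pool
```

The key design point the sketch shows is the release path: borrowed credits are returned to the shared pool first, so one busy VC cannot permanently starve the pool that the other VCs rely on.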

This paper demonstrates the implementation and verification requirements of shared flow control.

