x16 slots to reach 128 GB/s

This morning, the PCI Special Interest Group (PCI-SIG) released the long-awaited final (1.0) specification for PCI Express 6.0. The next generation of the ubiquitous bus once again doubles the data rate of a PCIe lane, bringing it to 8 GB/second in each direction – and much higher for multi-lane configurations. With the final version of the specification now sorted and approved, the group expects the first commercial hardware to hit the market in 12-18 months, which in practice means it should start appearing in servers in 2023.

First announced in the summer of 2019, PCI Express 6.0 is, as the name suggests, the immediate successor to the current-generation PCIe 5.0 specification. Having set a goal of continuing to double PCIe bandwidth roughly every 3 years, the PCI-SIG began work on PCIe 6.0 almost immediately after the 5.0 specification was completed, looking for ways to once again double the bandwidth of PCIe. The product of those development efforts is the new PCIe 6.0 specification, and while the group missed its initial target of a late-2021 release by a matter of weeks, today they are announcing that the specification has been finalized and is being released to group members.

As always, the creation of an even faster version of PCIe was driven by the industry’s insatiable bandwidth needs. The amount of data moved by graphics cards, accelerators, network cards, SSDs, and other PCIe devices continues to increase, and so must the bus speeds that feed those devices. As with earlier versions of the standard, the immediate demand for a faster specification comes from server operators, who already regularly use large amounts of high-speed hardware. But in due course, the technology is expected to filter down to consumer devices (i.e. PCs) as well.

By doubling the speed of a PCIe link, PCIe 6.0 delivers an across-the-board doubling of bandwidth. x1 links go from ~4 GB/second/direction to 8 GB/second/direction, and that scales up to 128 GB/second/direction for a full x16 link. For devices that already saturate a link of a given width, the additional bandwidth represents a significant increase in what the bus can carry; meanwhile, for devices that don’t yet saturate a link, PCIe 6.0 offers the chance to reduce the width of a link, maintaining the same bandwidth while reducing hardware costs.

PCI Express Bandwidth
(Full Duplex: GB/second/direction)

Slot Width | PCIe 1.0 (2003) | PCIe 2.0 (2007) | PCIe 3.0 (2010) | PCIe 4.0 (2017) | PCIe 5.0 (2019) | PCIe 6.0 (2022)
x1         | 0.25 GB/s       | 0.5 GB/s        | ~1 GB/s         | ~2 GB/s         | ~4 GB/s         | 8 GB/s
x2         | 0.5 GB/s        | 1 GB/s          | ~2 GB/s         | ~4 GB/s         | ~8 GB/s         | 16 GB/s
x4         | 1 GB/s          | 2 GB/s          | ~4 GB/s         | ~8 GB/s         | ~16 GB/s        | 32 GB/s
x8         | 2 GB/s          | 4 GB/s          | ~8 GB/s         | ~16 GB/s        | ~32 GB/s        | 64 GB/s
x16        | 4 GB/s          | 8 GB/s          | ~16 GB/s        | ~32 GB/s        | ~64 GB/s        | 128 GB/s
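To see where those figures come from: a lane’s usable bandwidth is its raw transfer rate multiplied by the efficiency of its line encoding. Here is a quick sketch in Python – the per-generation rates and encodings are the PCI-SIG’s published values, and the code itself is just illustrative arithmetic:

```python
# Per-lane PCIe bandwidth: transfer rate (GT/s) x encoding efficiency / 8 bits per byte.
GENS = {
    "1.0": (2.5, 8 / 10),     # 8b/10b line coding
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # 128b/130b line coding
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
    "6.0": (64.0, 1.0),       # 1b/1b FLIT encoding (FEC/CRC overhead sits in the flit, not the line code)
}

for gen, (gtps, eff) in GENS.items():
    gbps = gtps * eff / 8          # GB/s per lane, per direction
    print(f"PCIe {gen}: x1 = {gbps:.2f} GB/s, x16 = {16 * gbps:.0f} GB/s")
```

Running this reproduces the table above, including why the 128b/130b generations land just shy of their round numbers (hence the tildes) while PCIe 6.0 hits 8 GB/s per lane exactly.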

PCI Express was first released in 2003, and today’s 6.0 release essentially marks the third major revision of the technology. Whereas PCIe 4.0 and 5.0 were “merely” extensions of earlier signaling methods – in particular, continuing to use PCIe 3.0’s 128b/130b encoding with NRZ signaling – PCIe 6.0 undertakes a larger overhaul, arguably the most significant in the standard’s history.

In order to squeeze out another doubling in bandwidth, the PCI-SIG has upended its signaling technology entirely, shifting from the Non-Return-to-Zero (NRZ) signaling used from the start to 4-level Pulse-Amplitude Modulation (PAM4).

As we wrote back when development on PCIe 6.0 was first announced:

At a very high level, what PAM4 does versus NRZ is to take a page from the MLC NAND playbook and double the number of electrical states a single cell (or in this case, transmission) will hold. Rather than traditional 0/1 high/low signaling, PAM4 uses 4 signal levels, so that a signal can encode four possible two-bit patterns: 00/01/10/11. This allows PAM4 to carry twice as much data as NRZ without having to double the transmission bandwidth, which for PCIe 6.0 would have resulted in a frequency of around 30 GHz(!).
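To make that concrete, here is a toy PAM4 symbol mapper in Python. Gray-coding the levels (00→−3, 01→−1, 11→+1, 10→+3) is the conventional choice for PAM4 transceivers so adjacent levels differ by a single bit, but the specific level values here are illustrative, not taken from the PCIe 6.0 spec:

```python
# Toy PAM4 mapper: each symbol carries 2 bits as one of 4 amplitude levels,
# so the symbol rate (and channel frequency) is half the bit rate.
GRAY_PAM4 = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}
LEVEL_TO_BITS = {v: k for k, v in GRAY_PAM4.items()}

def pam4_encode(bits: list[int]) -> list[int]:
    """Pack a bit list (MSB first, even length) into PAM4 levels."""
    return [GRAY_PAM4[(bits[i] << 1) | bits[i + 1]] for i in range(0, len(bits), 2)]

def pam4_decode(levels: list[int]) -> list[int]:
    """Unpack PAM4 levels back into bits."""
    bits = []
    for lvl in levels:
        pair = LEVEL_TO_BITS[lvl]
        bits += [(pair >> 1) & 1, pair & 1]
    return bits

data = [1, 0, 1, 1, 0, 0, 1, 0]
levels = pam4_encode(data)         # 8 bits fit in 4 symbols; NRZ would need 8
assert pam4_decode(levels) == data
print(levels)                      # [3, 1, -3, 3]
```

The same 8 bits that need 8 NRZ symbols fit in 4 PAM4 symbols, which is exactly how PCIe 6.0 doubles the data rate while keeping the channel frequency where PCIe 5.0 left it.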

PAM4 itself is not a new technology; until now, however, it has been the domain of ultra-high-end networking standards such as 200G Ethernet, where the amount of room available for additional physical channels is even more limited. As a result, the industry already has a few years of experience working with the signaling standard, and with its own bandwidth needs continuing to grow, the PCI-SIG has decided to bring it in-house by basing the next generation of PCIe on it.

The trade-off for using PAM4 is, of course, cost. Even with its higher bandwidth per Hz, PAM4 is currently more expensive to implement at virtually every level, from the PHY to the physical layer. This is why it hasn’t already taken the world by storm, and why NRZ remains in use elsewhere. PCIe’s massive deployment scale will certainly help here – economies of scale still count for a lot – but it will be interesting to see where things stand in a few years’ time, once PCIe 6.0 is in full swing.

Meanwhile, much like MLC NAND in my earlier analogy, the additional signal states make a PAM4 signal itself more fragile than an NRZ signal. And that means that alongside PAM4, for the first time in PCIe’s history, the standard is also introducing forward error correction (FEC). True to its name, forward error correction is a way of correcting signal errors in a link by supplying a constant stream of error-correction data, and it is already commonly used in situations where data integrity is critical and there is no time for retransmission (such as DisplayPort 1.4 with DSC). While FEC hasn’t been necessary for PCIe until now, PAM4’s fragility changes that. The inclusion of FEC shouldn’t make a noticeable difference to end users, but for the PCI-SIG it is another design requirement to be managed. In particular, the group needs to ensure that its FEC implementation is low-latency while still being sufficiently robust, as PCIe users will not tolerate a significant increase in PCIe latency.
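PCIe 6.0 defines its own lightweight, low-latency FEC, whose details are beyond this article. Purely to illustrate the principle – redundant bits let the receiver repair an error on its own, without asking for a retransmit – here is the classic Hamming(7,4) code in Python, which is not the spec’s code but shows the same idea:

```python
# Toy forward error correction: Hamming(7,4) corrects any single flipped bit
# in a 7-bit codeword without retransmission. (Illustrative only -- PCIe 6.0's
# real FEC is a different, lower-latency code defined by the spec.)

def hamming74_encode(d: list[int]) -> list[int]:
    """Add 3 parity bits to 4 data bits, codeword positions 1..7."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c: list[int]) -> list[int]:
    """Recompute parity; the syndrome names the (1-indexed) errored position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1                   # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]       # recovered data bits

word = hamming74_encode([1, 0, 1, 1])
word[3] ^= 1                              # simulate a single channel error
assert hamming74_correct(word) == [1, 0, 1, 1]
```

The latency tension the PCI-SIG has to manage is visible even in this toy: the receiver must buffer and process a whole codeword before it can hand the data onward.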

It’s worth noting that FEC is also being paired with Cyclic Redundancy Checking (CRC) as a final layer of defense against bit errors. Packets that still fail a CRC after FEC – and are therefore still corrupt – will trigger a full retransmission of the packet.

The upshot of switching to PAM4 is that, by increasing the amount of data transmitted without increasing the frequency, the signal loss requirements do not go up. PCIe 6.0 will have the same 36 dB loss budget as PCIe 5.0, which means that although trace lengths are not officially defined by the standard, a PCIe 6.0 link should be able to reach as far as a PCIe 5.0 link. Which, coming from PCIe 5.0, is certainly a relief for vendors and engineers.

Besides PAM4 and FEC, the final major technological addition to PCIe 6.0 is its FLow control unIT (FLIT) encoding method. Not to be confused with PAM4, which sits at the physical layer, FLIT encoding is employed at the logical level to break data up into fixed-size packets. It is this move to fixed-size packets at the logical layer that allows PCIe 6.0 to implement FEC and other error-correction methods, since those methods require fixed-size packets to work with. FLIT encoding itself is not a new technology either; like PAM4, it is borrowed chiefly from the field of high-speed networking, where it is already in use. And, according to the PCI-SIG, it is one of the most important pieces of the specification, as it is the key enabler for keeping PCIe’s latency low with FEC in place, while also keeping overhead to a minimum. All told, the PCI-SIG considers PCIe 6.0’s encoding to be a 1b/1b encoding method, since there is no overhead in the data encoding itself (there is, however, overhead in the form of the additional FEC/CRC data).
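As a rough sketch of why fixed-size units help: every flit has room for its own check data at a known offset, and a failed check maps to exactly one unit to replay. The field sizes below are illustrative assumptions rather than the spec’s actual flit layout, and CRC-32 merely stands in for the spec’s CRC:

```python
import zlib  # zlib.crc32 stands in for the spec's CRC; real field sizes differ.

FLIT_SIZE = 256                     # fixed-size unit; illustrative layout below
CRC_BYTES = 4                       # assumed: last 4 bytes reserved for the CRC
PAYLOAD = FLIT_SIZE - CRC_BYTES

def to_flits(stream: bytes) -> list[bytes]:
    """Chop a byte stream into fixed-size flits, each carrying its own CRC."""
    flits = []
    for i in range(0, len(stream), PAYLOAD):
        chunk = stream[i:i + PAYLOAD].ljust(PAYLOAD, b"\x00")  # pad the tail flit
        crc = zlib.crc32(chunk).to_bytes(CRC_BYTES, "little")
        flits.append(chunk + crc)
    return flits

def receive(flit: bytes) -> bytes | None:
    """Return the payload, or None to signal 'CRC failed, replay this flit'."""
    chunk, crc = flit[:PAYLOAD], flit[PAYLOAD:]
    if zlib.crc32(chunk).to_bytes(CRC_BYTES, "little") != crc:
        return None                  # residual error survived FEC: retransmit
    return chunk

flits = to_flits(b"TLP bytes..." * 100)
assert all(receive(f) is not None for f in flits)
```

Because every flit is the same size, the FEC and CRC logic never has to parse variable-length packet framing before it can check anything – which is a big part of how the latency stays down.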

Since this is more of an enabling feature than a user-facing one, FLIT encoding should be essentially invisible to users. It is worth noting, however, that the PCI-SIG considered it important/useful enough that FLIT encoding is also being carried down to lower link rates: once FLIT is enabled on a link, the link remains in FLIT mode at all times, even if the link rate is later negotiated down. So, for example, if a PCIe 6.0 graphics card were to drop from 64 GT/s (PCIe 6.0) to 2.5 GT/s (PCIe 1.x) speeds to save power at idle, the link itself would still operate in FLIT mode rather than reverting to a full PCIe 1.x-style link. This both simplifies the design of the specification (no need to renegotiate anything beyond the link rate) and allows all link rates to benefit from FLIT’s low latency and low overhead.
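A minimal sketch of that “sticky” behavior, with hypothetical names – nothing here reflects the spec’s actual link-training state machine, only the rule the PCI-SIG describes:

```python
# Sketch of the rule: FLIT mode, once negotiated, survives rate downshifts.

class Link:
    RATES_GTPS = [2.5, 5.0, 8.0, 16.0, 32.0, 64.0]   # PCIe 1.x through 6.0

    def __init__(self) -> None:
        self.rate = 2.5
        self.flit_mode = False

    def train(self, rate: float) -> None:
        """Initial link training; FLIT mode is negotiated at the 6.0 rate."""
        self.rate = rate
        if rate == 64.0:
            self.flit_mode = True

    def downshift(self, rate: float) -> None:
        """Power-saving rate change: only the rate moves, never the mode."""
        self.rate = rate

link = Link()
link.train(64.0)       # PCIe 6.0 device trains to 64 GT/s
link.downshift(2.5)    # idles at a PCIe 1.x rate...
assert link.flit_mode  # ...yet stays in FLIT mode
```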

As always, PCIe 6.0 is backwards compatible with earlier specifications; older devices will work in newer hosts, and newer devices will work in older hosts. Likewise, the current connector form factors remain supported, including the ubiquitous PCIe card edge connector. So while support for the specification will need to be built into newer generations of devices, it should be a relatively straightforward transition, just as with previous generations of the technology.

Unfortunately, the PCI-SIG hasn’t been able to give us much guidance on what all of this means for implementations, particularly in consumer systems – the group only produces the standard; it is up to hardware vendors to implement it. Because the switch to PAM4 means the amount of signal loss for a given trace length hasn’t gone up, the placement of PCIe 6.0 slots should, conceptually, be about as flexible as the placement of PCIe 5.0 slots. That said, we’ll have to wait and see what AMD and Intel come up with over the next few years. Being able to do something and being able to do it within a mainstream hardware budget are not always the same thing.

Wrapping things up, with the PCIe 6.0 specification now complete, the PCI-SIG tells us that, based on previous adoption timelines, we should start to see PCIe 6.0-compliant hardware hit the market in 12-18 months. In practice, that means we should see the first server gear next year, and then perhaps another year or two after that for consumer gear.
