PCIe: a Developer’s Challenge; an Inventor’s Enabler

By Alex Goldhammer, Xilinx Inc.


Intel has done a fabulous service for the electronics industry over the last two decades by enabling PCI and the subsequent PCI Express (PCIe) interconnect standards. The PCI standards, managed by the PCI-SIG, have helped an enormous number of companies mix and match electronic subsystems to create an even larger number of products that have profoundly impacted our daily lives. I anticipate that the coming PCIe Gen3 standard will rapidly accelerate the creation of a multitude of new products across a growing number of vertical markets, all interconnected via 10 Gigabit Ethernet, 40 Gigabit Ethernet and even 100 Gigabit Ethernet.

But with the introduction of each new generation of the interconnect standard, new challenges arise for the companies creating devices that will comply with the standard. PCIe Gen2 and Gen3 are certainly no exception.

The biggest advantage of moving to a new PCI Express generation is that each one typically doubles the speed and bandwidth of the previous generation. This means remarkable things for each new generation of devices, but it can be quite a challenge for the engineers designing systems that must comply with the standard.
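To put numbers on that doubling, the sketch below tabulates the published per-lane signaling rates for the first three PCIe generations and scales them by link width. The helper name is my own illustration, not part of any real API; the rates themselves are the standard PCI-SIG figures.

```python
# Per-lane raw signaling rates for the first three PCIe generations
# (public PCI-SIG figures). Helper names are hypothetical.
RATE_GT_S = {"Gen1": 2.5, "Gen2": 5.0, "Gen3": 8.0}

def aggregate_rate(gen, lanes):
    """Raw signaling rate of a link, before encoding overhead."""
    return RATE_GT_S[gen] * lanes

for gen, rate in RATE_GT_S.items():
    print(f"{gen}: {rate} GT/s per lane, x16 link = {aggregate_rate(gen, 16)} GT/s")
```

Gen3's jump from 5 to 8 GT/s looks like less than a doubling, but as discussed later, the leaner 128b/130b encoding makes up the difference in delivered bandwidth.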

For IP and device manufacturers, this means the internal data path of each generation of their IC or core must also double. This can be a huge challenge for companies creating next-generation devices, and even more so for IP companies or companies that maintain their own IP libraries. In making their cores comply with the standard, they must ensure the cores remain efficient yet stay essentially the same size when implemented in silicon. Creating compliant devices and IP is further complicated by the fact that the PCI-SIG and other companies are constantly coming up with new optional features above and beyond the base standard.
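A back-of-the-envelope calculation shows why the internal data path must grow with each generation. The function below (my own illustration, not a real design rule) computes the minimum internal bus width a core needs to absorb a link's line rate at a given fabric clock; the 250 MHz clock is just an assumed example.

```python
# Toy datapath sizing: minimum internal width (in bits) needed to keep up
# with a link's aggregate line rate at a given core clock. Hypothetical
# helper, for illustration only.
def min_datapath_bits(lanes, lane_gbps, clock_mhz):
    """lanes * Gbps-per-lane, converted to bits per core clock cycle."""
    return lanes * lane_gbps * 1e3 / clock_mhz

# An x8 Gen3 link (8 Gbps/lane) into an assumed 250 MHz fabric clock:
print(min_datapath_bits(8, 8.0, 250))  # -> 256.0
```

At the same clock, an x8 Gen2 link needs only 160 bits, so a core that ran a 128- or 256-bit path for Gen2 must widen (or clock faster) for Gen3 while staying roughly the same size in silicon.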

On the IC side, companies wishing to create products that comply with each new generation of PCIe must also deal with ever more complex link training state machines. Link training is the automatic mechanism by which devices connected over PCIe negotiate lane widths and lane speeds. It operates autonomously, without any user intervention, and must be extremely robust for reliable system performance.
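The outcome of that negotiation can be sketched very simply: the trained link settles on the widest width and fastest speed both partners support. This is only a toy model of the result (the actual training state machine is far more involved), and all names here are hypothetical.

```python
# Toy model of the *result* of PCIe link training: both partners advertise
# their supported widths and maximum speeds, and the link trains to the
# widest common width at the slower of the two maximum speeds.
def negotiate(widths_a, widths_b, max_speed_a, max_speed_b):
    """Return (trained width, trained speed in GT/s) for two link partners."""
    common_widths = set(widths_a) & set(widths_b)
    return max(common_widths), min(max_speed_a, max_speed_b)

# e.g. an x8 Gen3-capable endpoint in an x16 Gen2 slot trains to x8 at 5.0 GT/s:
print(negotiate({1, 2, 4, 8}, {1, 2, 4, 8, 16}, 8.0, 5.0))  # -> (8, 5.0)
```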

With PCIe Gen3, the data rate increased to 8 gigatransfers per second (GT/s) and the encoding changed to 128b/130b with scrambling. Unfortunately, this isn't the same encoding used for PCIe Gen1 and Gen2. Thus companies wishing to comply with the standard must ensure their systems can easily switch from Gen3's 128b/130b encoding to the 8b/10b encoding used in Gen1 and Gen2, since PCIe Gen3 devices must also support Gen1 and Gen2 data rates. In addition to requiring encoding switches, Gen3 requires chips to include transceivers that support complex decision feedback equalization (DFE). Many companies will need to add DFE support to their devices, if they don't have it already.
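The payoff for the encoding switch is efficiency. 8b/10b expands every 8 payload bits to 10 on the wire (20% overhead), while 128b/130b adds only a 2-bit sync header per 128-bit block (roughly 1.5% overhead). The short calculation below, using these standard ratios, shows how Gen3 doubles Gen2's effective per-lane bandwidth despite the raw rate rising only from 5 to 8 GT/s; the function name is my own.

```python
# Effective per-lane bandwidth after encoding overhead (standard ratios;
# helper name is hypothetical).
def effective_gbps(raw_gt_s, payload_bits, coded_bits):
    """Usable Gbps per lane for a raw rate and an encoding ratio."""
    return raw_gt_s * payload_bits / coded_bits

gen2 = effective_gbps(5.0, 8, 10)     # 8b/10b: 20% overhead
gen3 = effective_gbps(8.0, 128, 130)  # 128b/130b: ~1.5% overhead
print(gen2, round(gen3, 2))  # -> 4.0 7.88
```

The ratio of the two results is just under 2x, which is how Gen3 delivers its bandwidth doubling without pushing the line rate all the way to 10 GT/s.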

But from a developer's perspective, it isn't all bad news. Both the PCIe Gen2 and USB 3.0 protocols are wonderful in that they use the Intel® PIPE 2.0 specification as the basis for the internal interface between the protocol layers and the gigabit transceivers (PHY). From a gigabit transceiver development perspective, this helps engineers reuse much of the verification and testing infrastructure needed for transceivers. If this trend continues, it will also hopefully reduce the number of different transceiver designs in the market, making transceivers easier to test and validate while increasing reliability. This should help companies like Xilinx bring products based on PCIe Gen3 and subsequent generations to market even sooner.