Bridging PCI Express® to Legacy Peripherals: Requirements to Consider

By Tom Wilson, Tundra



As most readers will know by now, PCI Express (PCIe) is a peripheral interconnect intended to replace PCI, PCI-X, and AGP for connecting I/O in PC and server architectures. Unlike its predecessors, PCIe is not an arbitrated parallel bus but rather a point-to-point serial interface. PCIe was architected to have no impact on legacy firmware for PCI-based peripherals, while offering a higher-bandwidth, more scalable interconnect for future generations of PCs and servers. However, there is a significant installed base of legacy I/O peripherals for LAN, storage, and other functions, and this base requires special bridging in order to physically connect to newer PCIe-based motherboard architectures. While the transition from PCI to PCIe may be seamless in software, hardware is another matter entirely.

In this article, we will concern ourselves with legacy PCI and PCI-X peripherals and the requirements for bridging these interfaces to PCIe switched architectures. PCI first arrived on the scene in the early 1990s and was scaled over time to a 64-bit, 66-MHz interconnect. PCI-X revised the standard to allow clock frequencies up to 133 MHz, doubling the I/O bandwidth across the bus. The bridging requirements for PCI-based peripherals are quite different from those for PCI-X. Because PCIe allows for several different lane widths (e.g., x1, x4, x8, etc.), it is possible to match PCIe to these different types of legacy I/O. For example, the throughput of PCI peripherals is well matched to x1 PCIe, while PCI-X peripherals require x4 PCIe. The bandwidth matching is obvious. What is not obvious are the bridging challenges associated with x1 PCIe-to-PCI vs. x4 PCIe-to-PCI-X: the system implications are quite different and require specific compromises in power, performance, and cost. Below, we examine x1 PCIe-to-PCI bridging and then x4 PCIe-to-PCI-X bridging, each in the context of specific peripheral examples. As we will show, the bridging requirements go beyond simple protocol mapping and extend to meeting real market and application needs.
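The lane matching described above can be checked with some back-of-envelope arithmetic. The sketch below compares theoretical peak rates only (real throughput is lower once protocol overhead is accounted for, and a parallel bus is shared among all devices on it):

```python
# Rough peak-bandwidth comparison motivating the lane matching in the text.
# All figures are theoretical maxima, not measured throughput.

def parallel_bus_bw_mbps(width_bits, clock_mhz):
    """Peak bandwidth of a shared parallel bus in MB/s."""
    return width_bits / 8 * clock_mhz

def pcie_gen1_bw_mbps(lanes):
    """Per-direction PCIe Gen 1 bandwidth in MB/s.
    2.5 GT/s per lane with 8b/10b encoding -> 250 MB/s usable per lane."""
    return 250 * lanes

pci_32_33   = parallel_bus_bw_mbps(32, 33)    # classic PCI, ~132 MB/s
pci_64_66   = parallel_bus_bw_mbps(64, 66)    # scaled PCI, ~528 MB/s
pcix_64_133 = parallel_bus_bw_mbps(64, 133)   # PCI-X, ~1064 MB/s

print(f"PCI 32/33   : {pci_32_33:.0f} MB/s  vs x1 PCIe: {pcie_gen1_bw_mbps(1)} MB/s per direction")
print(f"PCI 64/66   : {pci_64_66:.0f} MB/s")
print(f"PCI-X 64/133: {pcix_64_133:.0f} MB/s  vs x4 PCIe: {pcie_gen1_bw_mbps(4)} MB/s per direction")
```

This is why a classic PCI peripheral sits comfortably behind x1 PCIe, while a 133-MHz PCI-X peripheral needs x4 to avoid being throttled at the bridge.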

Forward and Reverse Bridging

PC and server architectures are notably hierarchical, with a host CPU controlling the system and all of its I/O. A typical modern motherboard provides PCI Express expansion slots, which means legacy I/O with PCI or PCI-X interfaces must be bridged to PCI Express. This type of bridge is referred to as a Forward bridge, in which PCIe is on the host (or primary) side of the bridge. Embedded applications often require a Reverse bridge, where PCI-X or PCI is on the host side connecting to PCIe-based I/O. But even embedded applications are moving toward PCIe-based processors, so Forward bridging in these applications is becoming more commonplace. This article will focus on Forward bridging because it pertains directly to the issue of connecting legacy I/O to newer PCIe-based motherboards and processors.
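As a minimal illustration of what Forward bridging looks like to host software: a forward bridge generally presents itself to enumeration code as a PCI-to-PCI bridge with a Type 1 configuration header, its primary bus on the PCIe side and its secondary bus on the legacy side. The sketch below decodes a synthetic configuration space; the register offsets follow the standard PCI Type 1 header layout, but the device modeled is hypothetical:

```python
# Illustrative sketch, not tied to any real device: how a forward bridge
# appears to host enumeration software. Offsets follow the PCI Type 1
# (PCI-to-PCI bridge) configuration header layout.

HEADER_TYPE_OFFSET   = 0x0E  # header-type register in config space
PRIMARY_BUS_OFFSET   = 0x18  # bus number on the PCIe (host) side
SECONDARY_BUS_OFFSET = 0x19  # bus number on the legacy PCI/PCI-X side

def is_pci_bridge(config_space: bytes) -> bool:
    """True if the function's header is Type 1 (PCI-to-PCI bridge).
    Bit 7 of the header-type byte is the multi-function flag, so mask it off."""
    return (config_space[HEADER_TYPE_OFFSET] & 0x7F) == 0x01

# Fake 256-byte config space: Type 1 header, primary bus 0, secondary bus 1.
cfg = bytearray(256)
cfg[HEADER_TYPE_OFFSET] = 0x01
cfg[PRIMARY_BUS_OFFSET] = 0x00
cfg[SECONDARY_BUS_OFFSET] = 0x01

if is_pci_bridge(bytes(cfg)):
    print("bridge spans bus", cfg[PRIMARY_BUS_OFFSET], "->", cfg[SECONDARY_BUS_OFFSET])
```

Because the bridge looks like an ordinary PCI-to-PCI bridge in configuration space, existing BIOS and OS enumeration code can discover and configure the legacy side without modification, which is precisely the firmware transparency the article describes.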

PCI Express x1 to PCI Bridging

A number of low-bandwidth peripherals still retain PCI interfaces. Examples include audio cards, USB 2.0 host adapters, and digital-video-recorder (DVR) cards. DVR is worth examining as an example of the market dynamics affecting other PCI-based peripherals as well. Most of the DVR adapter cards available are PCI-based, yet motherboards shipping today increasingly provide PCIe expansion slots, which require DVR cards with native PCIe connectivity. However, the existing DVR designs are often adequate for their function. They simply require PCIe for connectivity, not necessarily for bandwidth enhancement. In the case of DVR--as with many other PCI-based peripherals--there are some key requirements:

1. Motherboard coverage (access as many new motherboards as possible)

2. Fast time-to-market (This is the primary reason that a bridge would be used rather than waiting for PCIe-native DVR silicon.)

3. Low cost (both development and material cost)

4. Performance

Market Coverage

An important consideration is how current the PCIe/PCI bridge is with the PCIe specification. PCIe Base 1.1 is the most current specification, and new OS versions like Windows Vista specifically look for 1.1-compliant PCIe peripherals. Therefore, a DVR card with a PCIe/PCI bridge that is PCIe 1.1-compliant will be future-proof for new motherboards. This compliance is a key attribute to look for in a PCIe/PCI bridging solution. In addition, it is important that the bridge be tested for interoperability across different motherboard manufacturers and different BIOS versions. Interoperability here has a broad context encompassing both hardware and software.
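One piece of the software side of that interoperability: enumeration code recognizes a function as a PCI Express device by walking the configuration-space capability list and finding the PCI Express capability (ID 0x10). The sketch below performs that walk over a synthetic configuration space; the offsets follow the PCI specification, but the device is invented for illustration:

```python
# Sketch of capability-list traversal, as enumeration software performs it.
# The 256-byte config space here is synthetic; offsets follow the PCI spec.

PCIE_CAP_ID    = 0x10  # capability ID of the PCI Express capability
STATUS_OFFSET  = 0x06  # status register; bit 4 = capability list present
CAP_PTR_OFFSET = 0x34  # capabilities pointer in the config header

def find_capability(cfg: bytes, cap_id: int):
    """Return the config-space offset of capability `cap_id`, or None."""
    if not (cfg[STATUS_OFFSET] & 0x10):     # no capability list at all
        return None
    ptr = cfg[CAP_PTR_OFFSET] & 0xFC        # pointers are dword-aligned
    while ptr:
        if cfg[ptr] == cap_id:
            return ptr
        ptr = cfg[ptr + 1] & 0xFC           # next-capability pointer
    return None

# Synthetic config space: a single capability at 0x40 with the PCIe cap ID.
cfg = bytearray(256)
cfg[STATUS_OFFSET] = 0x10
cfg[CAP_PTR_OFFSET] = 0x40
cfg[0x40] = PCIE_CAP_ID
cfg[0x41] = 0x00                            # end of capability list

offset = find_capability(bytes(cfg), PCIE_CAP_ID)
print("PCIe capability at", hex(offset) if offset is not None else "not found")
```

A bridge that reports its capabilities correctly here, across many BIOS implementations, is exactly what the broad interoperability testing mentioned above is meant to verify.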

Lowest Cost

Lowering cost goes beyond the pricing of the bridge silicon itself. For example, bridges that remove power-sequencing constraints and limit the number of power rails required can both simplify designs and provide significant cost savings at a system level. When combined with proven hardware and software tools, these aspects lower development costs as well as the cost of materials. In addition, the design cycle is reduced through easier board design and layout.

Performance

While performance is not always a primary factor for PCI peripherals, DVR competitors will carve out more market share by ensuring that there is no performance bottleneck at the bridge interface. Writes from PCI peripherals to memory on the root complex are typically posted in a write buffer internal to the bridge, overcoming the inherent performance penalty a bridge imposes. Reads, however, are almost always problematic. A feature whereby data read over PCIe is cached within the bridge and served to subsequent PCI accesses can help solve this problem. In fact, this type of optional "short-term caching" can provide up to a 5X increase in PCI-to-PCIe read performance compared to bridges lacking it. Read performance can be further enhanced by minimizing read request latency; latencies of 200 ns or less should be the benchmark against which bridges are compared.
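The benefit of short-term caching can be illustrated with a toy latency model. The numbers below are illustrative assumptions, not measurements of any particular bridge; they simply show how amortizing the PCIe round trip over a cached block of read data can yield roughly the 5X improvement cited above for sequential reads:

```python
# Toy model of the "short-term caching" idea: a sequential PCI read burst
# through the bridge. Without caching, every PCI word read triggers a full
# PCIe read round trip; with caching, one PCIe fetch of a larger block serves
# the following sequential PCI reads. All latency numbers are illustrative.

PCIE_ROUND_TRIP_NS = 200   # per-request PCIe read latency (benchmark from text)
PCI_WORD_NS = 30           # time to move one 4-byte word across 33-MHz PCI
CACHE_LINE_WORDS = 16      # words fetched per PCIe request when caching

def burst_read_ns(total_words: int, cached: bool) -> int:
    """Total time for a sequential PCI read burst through the bridge."""
    per_fetch = CACHE_LINE_WORDS if cached else 1
    fetches = -(-total_words // per_fetch)   # ceiling division
    return fetches * PCIE_ROUND_TRIP_NS + total_words * PCI_WORD_NS

words = 256
uncached = burst_read_ns(words, cached=False)
cached = burst_read_ns(words, cached=True)
print(f"uncached: {uncached} ns, cached: {cached} ns, "
      f"speedup: {uncached / cached:.1f}x")
```

Under these assumed numbers the cached burst completes about 5.4 times faster, and the model also shows why a low per-request latency matters even with caching: the round trip is still paid once per cache-line fetch.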

PCI Express x4 to PCI-X Bridging

One key example of PCIe-to-PCI-X bridging is with PCIe riser cards in newer servers. Rack servers provide extra slots for additional I/O add-in cards. With a riser card, add-in cards are mounted parallel to the motherboard, as opposed to the vertical mounting usually seen in server architectures. A riser card can therefore carry a bridge between x1 or x4 PCIe at the card edge and PCI-X on the card for legacy peripherals (e.g., dual GigE NICs). This is a common application for single-slot and dual-slot riser cards, allowing older PCI-X-based I/O to be plugged into PCIe sockets. As with PCI bridging, PCI-X bridging imposes several requirements at the application and market level.

Performance

In contrast to the PCI bridging discussion above, performance plays a more important role in PCIe-to-PCI-X bridging applications. PCI-X-based peripherals are not just higher bandwidth; they also play performance-critical roles in networking and storage applications. The performance of the x4 PCIe-to-PCI-X bridge is therefore critical. Bridge performance can be measured upstream and downstream and involves both throughput and latency. Because there are multiple options in the marketplace for this bridging function, it is important to focus the selection on these performance criteria. Particular bridges can outperform others by as much as 30% to 40% in mission-critical reads and writes through the bridge, in both latency and throughput.
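Why request latency translates directly into throughput can be sketched with a Little's-law-style estimate: with a fixed number of outstanding read requests, effective throughput is bounded by the payload in flight divided by the round-trip latency. All numbers below are hypothetical, not measurements of any specific bridge:

```python
# Back-of-envelope model of latency-bound read throughput through a bridge.
# With a fixed number of outstanding requests, throughput <= bytes in flight
# divided by round-trip latency. Payloads and latencies are illustrative.

def effective_read_mbps(payload_bytes, outstanding, latency_ns):
    """Latency-bound read throughput in MB/s (Little's-law-style estimate)."""
    bytes_in_flight = payload_bytes * outstanding
    return bytes_in_flight / latency_ns * 1000   # bytes/ns -> MB/s

# Two hypothetical bridges, same payload and queue depth, different latency.
fast = effective_read_mbps(payload_bytes=256, outstanding=2, latency_ns=600)
slow = effective_read_mbps(payload_bytes=256, outstanding=2, latency_ns=850)
print(f"600-ns bridge: {fast:.0f} MB/s, 850-ns bridge: {slow:.0f} MB/s, "
      f"delta: {(fast - slow) / slow:.0%}")
```

Under these assumed figures the lower-latency bridge delivers roughly 40% more read throughput, in line with the spread quoted above, which is why latency deserves as much scrutiny as raw throughput when comparing parts.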

Other Considerations

As with the x1 PCIe bridge discussed above, market coverage and cost are still vital considerations. Market coverage again pertains to having the most up-to-date bridge at the latest PCIe specification level (PCI Express Base 1.1), which allows the broadest access to the newest server platforms. Interoperability with multiple server platforms is also critical in assessing the suitability of competing bridges. Cost, too, goes beyond component-level pricing and entails innovative approaches to lowering the overall board cost of materials.


Designing a bridge that meets all of these requirements is no small task. It helps to have expertise in bridging several standards to PCI and PCI-X (e.g., RapidIO®, HyperTransport™, Power Architecture™, Intel XScale® Technology, etc.). By focusing on system interconnects, some companies take innovative approaches to improving bandwidth and lowering latency through their bridging and switching solutions while also pursuing cost and power savings at a system level. The Tsi381™ and Tsi384™ bridges, for example, address x1 PCIe and x4 PCIe, respectively. With only two power supplies required and no power-sequencing constraints, the Tsi381 and Tsi384 promise to deliver significant advantages in cost, power, board space, and ease of design (see the Figure). With superior performance, solution cost, and quality, such devices are playing a critical role in the rollout of new PCIe adapter-card designs.



Tom Wilson holds undergraduate and doctoral degrees in science from Carleton University in Ottawa, Canada. For the last 14 years, he has been involved in introducing new system interconnect products at Tundra Semiconductor in a variety of roles. He is currently director of product management and marketing at Tundra Semiconductor.