Keys to Next-Generation Networking

Next-generation networking applications have placed increasing demands on bandwidth, making the use of optical switching and multi-gigabit transceivers a must.

By Sam Sanyal and Mrugendra Singhai

All-optical networks provide the bandwidth needed to support the next generation of the Internet and its high-demand applications. Optical Burst Switching (OBS) is a promising method by which future optical networks may use the available optical resources more effectively. Key to meeting the increasing demands of applications like OBS are electronic systems with high per-channel bandwidth, channel bonding, low latency, and flexibility in overhead requirements. Multi-gigabit transceivers (MGTs) now offer a solution for this and other next-generation, high-demand applications.

In the past few years, Wavelength Division Multiplexing (WDM) has emerged as a key technology that is capable of providing the high bandwidths required by current and future networks. In WDM networks, data is carried on individual fibers at several different wavelengths simultaneously. One area of concern in WDM networks, however, is the need to convert the optical signals back into electrical ones at each switching node, then back to optical. This conversion adds both cost and latency to the network. To address this concern, several all-optical switching architectures for WDM networks, ranging from packet switching to circuit and wavelength routing, have been suggested in the industry literature.

Optical Burst Switching (OBS) has recently garnered a great deal of interest from the optical networking industry. It is a hybrid approach that combines the best of coarse-grain optical circuit switching with fine-grain optical packet switching. In OBS networks, control signaling is transmitted on its own band and only the signaling wavelength goes through optical-electrical-optical (O/E/O) conversion at each hop.

OBS signaling messages precede data transmissions and network nodes interpret the signals as switching instructions. Every switch along the path from source to destination configures itself to be a link in a single continuous channel. After a delay period following the control signals, the source transmits the data traffic. Data flows straight to the destination, through all intermediate switches, without going through any intermediate O/E/O conversion.
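The delay between the control message and the data burst must be long enough for every switch along the path to process the signaling message before the burst arrives. As a minimal sketch of that budgeting, the Python snippet below sums illustrative per-hop control-processing times plus a final switch-setup time; all delay figures are assumptions for illustration, not values from the article.

```python
# Sketch: the minimum offset a source must wait between sending the OBS
# control message and releasing the data burst. The offset must cover the
# control-processing time at every hop, plus the time the last switch needs
# to finish configuring. All microsecond figures below are illustrative.

def min_offset_us(hop_processing_us, switch_setup_us):
    """Minimum control-to-data offset for a buffer-less OBS path."""
    return sum(hop_processing_us) + switch_setup_us

# Example: a 4-hop path, 10 us of control processing per hop,
# 20 us to configure the final optical switch.
offset = min_offset_us([10, 10, 10, 10], 20)
print(offset)  # 60
```

If the source transmits any sooner than this, the burst would reach a switch that has not yet configured itself, and with no buffering in the network the burst would simply be lost.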

Optical Burst Switching Avoids Buffering

One key advantage of optical burst switching is that the network is completely buffer-less. Data is never stored at any of the nodes inside the network and is transmitted completely in the optical domain from one node to another. This makes the design of all-optical networks simple and relatively inexpensive, especially considering that optical memory is still in its infancy.

The technological demands on and restrictions of both optical and electronic components must be addressed, however, in order to implement optical burst-switched networks. One of the keys to overcoming the electronic component restrictions is the availability of multi-gigabit transceivers (MGTs). Multi-gigabit transceivers are electronic devices designed for high-speed data transmission and can implement a variety of protocols.

The development of MGT devices initially stemmed from the ever-increasing bandwidth demands of traditional networks. Analysts estimate that 10-Gbit/sec usage has been increasing steadily since the fourth quarter of 2003, and forecasts suggest that at least fifty million 10-Gbit/sec links will be deployed over the next five years. MGTs are an answer to utilizing these links in high-performance communication systems for applications ranging from enterprise and data-center networking to storage. High-performance MGT devices are now available from a variety of companies.

These devices are more than simple (but high-speed) serial data pumps. The primary functional components of the Rocket IO-based MGT core, for example, are the Physical Media Attachment (PMA) and Physical Coding Sublayer (PCS) blocks, along with the Rocket IO multi-gigabit serial drivers and receivers. The PMA block is serial I/O protocol agnostic, supporting all major standards. Other key features include programmable transmit pre-emphasis and receive equalization, as well as decision feedback equalization (DFE). The PCS block supports channel bonding, which allows multiple Rocket IO channels to be bundled together for higher aggregate bandwidth, achieving a 50% reduction in latency. It also supports multiple encoding schemes, including 8B/10B and 64B/66B.
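Channel bonding and line coding together determine how much payload bandwidth a set of transceivers actually delivers. As a rough sketch, the snippet below computes the effective payload rate for bonded channels under the two line codings the article mentions; the per-channel rates used in the examples (3.125 Gbit/sec and 10.3125 Gbit/sec) are illustrative assumptions, chosen because they are common serial rates for these codings.

```python
# Sketch: effective payload bandwidth for bonded multi-gigabit channels
# under 8B/10B and 64B/66B line coding. Per-channel line rates below are
# illustrative assumptions, not figures from the article.

CODING = {"8b10b": (8, 10), "64b66b": (64, 66)}

def effective_gbps(line_rate_gbps, channels, coding):
    """Aggregate payload rate after line-coding overhead."""
    num, den = CODING[coding]
    return line_rate_gbps * channels * num / den

# Four bonded 3.125-Gbit/sec channels with 8B/10B coding carry
# 10 Gbit/sec of payload (a XAUI-style arrangement).
print(effective_gbps(3.125, 4, "8b10b"))    # 10.0
# A single 10.3125-Gbit/sec lane with 64B/66B coding also carries 10 Gbit/sec.
print(effective_gbps(10.3125, 1, "64b66b"))  # 10.0
```

The comparison also shows why 64B/66B matters at 10 Gbit/sec: its ~3% coding overhead is far smaller than the 25% overhead of 8B/10B.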

FPGAs Integrate MGT Cores

With high-speed serial buses and interfaces expected to one day be deployed in nearly every conceivable electronic product, the inherent scalability and cost advantages of MGT technology make it a key technology for both current and next-generation telecommunications, networking, and storage applications. In acknowledgement of this importance, the current generation of field programmable gate arrays (FPGAs) has integrated high-speed (1 to 10 Gbits/sec) transceivers, processor cores and dense programmable logic. The gigabit transceivers allow high-speed data streams to directly reach the FPGA core, making the FPGAs useful for many high-speed networking applications. The devices allow those applications to be prototyped and tested in a real environment in a matter of just a few months. Those designs also provide companies with the flexibility to provide customers with protocol enhancements via software upgrades, instead of customers having to invest in new hardware.

Consider, for example, the high-speed network interface card (NIC) developed by the North Carolina-based RTI (Research Triangle Institute) International and designed to support next-generation, high-demand communication OBS networks. The NIC uses the Just-in-Time (JIT) signaling protocol. In JIT signaling, the source transmits control signal bursts, then sends the data without waiting for an explicit confirmation of the control signal from the other nodes. Intermediate switches simply configure themselves as soon as they process the signaling message and remain in that state until data passes through them. This approach keeps the overall latency low by avoiding any round-trip delays.
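The latency advantage of this tell-and-go approach can be sketched with a simple comparison against a hypothetical reservation scheme that waits for an end-to-end confirmation before releasing data. The function names and delay figures below are our own illustrative assumptions, not part of the JIT protocol specification.

```python
# Sketch: JIT "tell-and-go" setup latency versus a hypothetical
# "tell-and-wait" scheme that holds data until a confirmation returns.
# Delay figures are illustrative assumptions.

def jit_setup_ms(one_way_ms, offset_ms):
    # Tell-and-go: data follows the control message after a fixed offset;
    # there is no confirmation round trip.
    return one_way_ms + offset_ms

def tell_and_wait_setup_ms(one_way_ms, offset_ms):
    # Reservation travels to the destination and a confirmation returns
    # before any data is released.
    return 2 * one_way_ms + offset_ms

# Example: 5 ms one-way path delay, 0.1 ms offset.
print(jit_setup_ms(5.0, 0.1))            # 5.1
print(tell_and_wait_setup_ms(5.0, 0.1))  # 10.1
```

Avoiding the confirmation round trip roughly halves the setup latency on long paths, which is exactly the benefit the JIT approach targets.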

Figure 1 shows the prototype implementation of the OBS local-area-network (LAN) clients. The implementation uses an off-the-shelf gigabit Ethernet card. It plugs into the host and connects to Xilinx’s AFX prototyping board, which contains a Virtex-II Pro (XC2VP20) FPGA. The AFX card supports four duplex multi-gigabit channels and the prototype implementation uses three of these channels. The first channel connects to the gigabit Ethernet card and carries host messages. The second channel acts as the signaling channel and connects to the controller. The third channel is used as the data channel, which goes into the optical front-end card. The FPGA uses a commercially available PCS/PMA core and a custom external MAC layer to connect the external interfaces to the JIT engine.

The Virtex-II Pro FPGA implements the JIT engine. The engine runs at 62.5 MHz and can handle data streams of up to 1 Gbit/sec. The NIC utilizes Xilinx Rocket IO-based MGT cores available in the FPGA, along with the standard PCS/PMA and MAC layer. The MGT cores, PCS/PMA and MAC layers operate at 125 MHz. Integration of the MGT cores in the FPGA makes it possible to focus the rest of the FPGA resources primarily on layer 3 functionality.
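The clock rates quoted above imply particular datapath widths: a 62.5-MHz engine sustains 1 Gbit/sec only if it moves 16 bits per cycle, while a 125-MHz PCS/PMA path moves one decoded byte per cycle. The widths are our inference from the stated figures, not something the article spells out; the arithmetic is sketched below.

```python
# Sketch: clock-rate / datapath-width arithmetic behind the quoted figures.
# The 16-bit and 8-bit datapath widths are inferred, not stated in the text.

def throughput_gbps(clock_mhz, datapath_bits):
    """Sustained throughput for a datapath clocked at clock_mhz."""
    return clock_mhz * 1e6 * datapath_bits / 1e9

print(throughput_gbps(62.5, 16))  # 1.0  (JIT engine at 62.5 MHz)
print(throughput_gbps(125, 8))    # 1.0  (PCS/PMA and MAC at 125 MHz)
```

Running the protocol engine at half the byte clock with a doubled datapath width is a common way to ease timing closure in the FPGA fabric while preserving line rate.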

MGT cores can handle a variety of different protocols with data transmission rates from 622 Mbits/sec to 10 Gbits/sec. In this example, the OBS NIC allows a full-duplex data stream of up to 1 Gbit/sec. Because of this broad speed range, coupled with programmability and the availability of high-speed connectivity Intellectual Property (IP), MGT cores based on Virtex-4 or Virtex-II Pro are suitable for a wide range of high-speed applications. These include front-end and back-end applications, high-speed switching circuitry, and input/output aggregation from 622 Mbits/sec to 10 Gbits/sec. Intel network processors, advanced switching interfaces, and multi-standard I/Os are ideal examples of suitable applications for MGT cores.

Figure 1: A prototype implementation of an optical burst switching (OBS) LAN client depends on the availability of integrated multi-gigabit transceiver cores in the FPGA.

Next-Generation OBS in the Works

The next-generation OBS NIC will build on the current RTI International development effort. Employing the Virtex-4 platform, it will address the 10-Gbps data channel; the signaling channel, however, will still work at 1 Gbps. This new OBS NIC will have a Xilinx PCI-X IP core interface that connects directly to the host. The JIT engine, which implements the JIT OBS signaling protocol, will be pipelined and parallelized, resulting in significant performance gains in processing OBS connections.

RTI International’s next-generation OBS NIC will also support other emerging OBS features such as optical quality of service (QoS). Additionally, the prototype board will have a data burstifier module in which data bursts will be scheduled using JIT+ scheduling. The JIT engine will be designed to be easily modifiable to support other types of optical burst scheduling algorithms.

Overall system throughput in optical networks ultimately depends on a few key factors: I/O speed range, channel bonding, burst length, scheduling algorithm, and protocol overhead. These characteristics are critical because they dictate whether the network will be able to support the necessary bandwidth. The multi-gigabit transceiver cores available in various FPGA platforms give applications like optical burst switching a means of addressing these factors head-on.
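The interaction between burst length and protocol overhead can be made concrete with a toy utilization model: every burst pays a fixed cost in signaling processing and switch reconfiguration, so longer bursts amortize that cost better. The time figures below are illustrative assumptions, not measurements from the prototype.

```python
# Sketch: a toy model of channel utilization in a burst-switched network,
# where every burst pays a fixed per-burst overhead (signaling processing
# plus switch reconfiguration). Figures are illustrative assumptions.

def utilization(burst_bits, line_rate_gbps, overhead_us):
    """Fraction of channel time carrying payload."""
    burst_us = burst_bits / (line_rate_gbps * 1e3)  # transmit time in us
    return burst_us / (burst_us + overhead_us)

# A 1-Mbit burst on a 1-Gbit/sec channel with 20 us of per-burst overhead.
print(round(utilization(1_000_000, 1.0, 20), 3))  # 0.98
```

Longer bursts push utilization toward 1.0, but they also hold switch resources longer, which is why the choice of burstifier parameters and scheduling algorithm matters.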

MGTs enable current-generation OBS applications, like the 1-Gbit/sec network interface card from RTI International. They are also versatile enough to support 10-Gbps data-channel applications. As a result, they, along with optical switching, are the keys to designing next-generation networks.

Sam Sanyal is the Product Solutions Marketing Manager for Connectivity and Embedded products at Xilinx. He has over 22 years of experience in the semiconductor industry in various applications, design, and FAE positions. Sanyal holds an MSECE from Cal Poly University, a BSEE from Texas A&M, and a BSc in Physics from Calcutta University in India. Xilinx is an associate member of the Intel® Communication Alliance.

Mrugendra Singhai is a Research Engineer at RTI (Research Triangle Institute) International. He has over six years’ experience in the research industry. Singhai holds an MS in Computer Engineering from North Carolina State University and a BS in Electronics & Telecommunication Engineering from the S.G.S. Institute of Technology & Science in India.