
EP1212861A1 - Fiber optic ring communication system - Google Patents

Fiber optic ring communication system

Info

Publication number
EP1212861A1
Authority
EP
European Patent Office
Prior art keywords
link
data
data stream
client device
frame buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP00963413A
Other languages
German (de)
English (en)
Other versions
EP1212861A4 (fr)
Inventor
Christopher D. Finan
Mark Farley
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ciena Corp
Original Assignee
ONI Systems Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ONI Systems Corp
Publication of EP1212861A1
Publication of EP1212861A4


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/42 - Loop networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04J - MULTIPLEX COMMUNICATION
    • H04J 14/00 - Optical multiplex systems
    • H04J 14/02 - Wavelength-division multiplex systems
    • H04J 14/0287 - Protection in WDM systems
    • H04J 14/0293 - Optical channel protection
    • H04J 14/0295 - Shared protection at the optical channel (1:1, n:m)
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04J - MULTIPLEX COMMUNICATION
    • H04J 14/00 - Optical multiplex systems
    • H04J 14/02 - Wavelength-division multiplex systems
    • H04J 14/0227 - Operation, administration, maintenance or provisioning [OAMP] of WDM networks, e.g. media access, routing or wavelength allocation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04J - MULTIPLEX COMMUNICATION
    • H04J 14/00 - Optical multiplex systems
    • H04J 14/02 - Wavelength-division multiplex systems
    • H04J 14/0227 - Operation, administration, maintenance or provisioning [OAMP] of WDM networks, e.g. media access, routing or wavelength allocation
    • H04J 14/0241 - Wavelength allocation for communications one-to-one, e.g. unicasting wavelengths
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04J - MULTIPLEX COMMUNICATION
    • H04J 14/00 - Optical multiplex systems
    • H04J 14/02 - Wavelength-division multiplex systems
    • H04J 14/0278 - WDM optical network architectures
    • H04J 14/0283 - WDM ring architectures
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q - SELECTING
    • H04Q 11/00 - Selecting arrangements for multiplex systems
    • H04Q 11/0001 - Selecting arrangements for multiplex systems using optical switching
    • H04Q 11/0005 - Switch and router aspects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04J - MULTIPLEX COMMUNICATION
    • H04J 14/00 - Optical multiplex systems
    • H04J 14/02 - Wavelength-division multiplex systems
    • H04J 14/0226 - Fixed carrier allocation, e.g. according to service
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q - SELECTING
    • H04Q 11/00 - Selecting arrangements for multiplex systems
    • H04Q 11/0001 - Selecting arrangements for multiplex systems using optical switching
    • H04Q 11/0062 - Network aspects
    • H04Q 11/0071 - Provisions for the electrical-optical layer interface
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q - SELECTING
    • H04Q 11/00 - Selecting arrangements for multiplex systems
    • H04Q 11/0001 - Selecting arrangements for multiplex systems using optical switching
    • H04Q 11/0062 - Network aspects
    • H04Q 2011/0075 - Wavelength grouping or hierarchical aspects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q - SELECTING
    • H04Q 11/00 - Selecting arrangements for multiplex systems
    • H04Q 11/0001 - Selecting arrangements for multiplex systems using optical switching
    • H04Q 11/0062 - Network aspects
    • H04Q 2011/009 - Topology aspects
    • H04Q 2011/0092 - Ring

Definitions

  • the present invention relates generally to optical fiber communication systems, and particularly to a system architecture for making efficient use of optical fiber communication rings and providing reliable logical connections between network nodes.
  • Fiber optic rings have been installed, and as of 1999 are in the process of being installed in many cities. These communication networks have the potential to provide low cost, high bandwidth connections within a geographical area of several miles, as well as low cost, high bandwidth connections to other communication networks, such as the Internet. To date, however, the equipment available for providing data communications over these networks has been sufficiently expensive that relatively little use is being made of these fiber optic networks.
  • the present invention provides a low cost system architecture that allows Fibre Channel (FC) and Gigabit Ethernet (GE) data streams to be seamlessly routed through such fiber optic ring networks with very high reliability, all while making efficient use of the available bandwidth.
  • An optical fiber ring network includes a plurality of interconnected nodes, each pair of neighboring nodes being interconnected by a pair of optical fiber links.
  • data is transmitted in both directions over each optical link, using a first optical wavelength λ1 to transmit data in a first direction over the link and a second optical wavelength λ2 to transmit data in a second, opposite direction over the link.
  • the two optical wavelengths λ1 and λ2 differ by at least 10 nm.
  • each of the data streams transmitted over the optical link has a bandwidth of at least 2.5 Gbps. Further, each data stream has at least two logical streams embedded therein.
  • each link multiplexer includes one or more link cards for coupling the link multiplexer to client devices, and one or more multiplexer units for coupling the link multiplexer to the optical links.
  • Each link card includes frame buffers capable of storing numerous Fibre Channel frames that are being transmitted to and from the client device(s) coupled to that link card.
  • the link card also includes flow control logic for pre-filling the frame buffers with frames of data before the receiving client devices send flow control messages to request their transmission.
  • the combined effect of the frame buffers and flow control logic is that the full bandwidth of the links can be utilized even when the network nodes are very far apart and the client devices have small input data buffers.
  • FIG. 1 is a block diagram of a fiber optic ring network having a plurality of nodes that employ the present invention;
  • FIG. 2 is a block diagram showing multiple physical communication paths between network nodes;
  • FIG. 3 is a block diagram of a link multiplexer, for use at any one node of the fiber optic ring network;
  • FIG. 4 is a block diagram of a link card, which is a component of the link multiplexer of FIG. 3;
  • FIG. 5 is a detailed block diagram of a link card;
  • FIG. 6 is a block diagram of a Mux Interface Frame Processor, which is a component of the link card of FIG. 5;
  • FIG. 7 is a block diagram of a link card FC Link Interface Frame Processor, which is a component of the link card of FIG. 5;
  • FIG. 8 is a block diagram of a link card GE Link Interface Frame Processor, which is a component of the link card of FIG. 5;
  • FIG. 9 is a block diagram of a multiplexer unit, which is a component of the link multiplexer of FIG. 3;
  • FIG. 10 is a block diagram of a time division multiplexer and transmission rate smoothing circuit, which is a component of the multiplexer unit of FIG. 9;
  • FIG. 11 is a block diagram of a receive datapath circuit, multiple instances of which are used in the time division multiplexer and transmission rate smoothing circuit of FIG. 10;
  • FIG. 12 is a block diagram illustrating a segment of a fiber optic ring network;
  • FIG. 13 is a block diagram illustrating a fiber optic ring network in accordance with the present invention;
  • FIG. 14 is a block diagram illustrating how the fiber optic ring network illustrated in FIG. 13 is reconfigured during failover caused by a broken fiber;
  • FIG. 15 is a block diagram illustrating how the fiber optic ring network illustrated in FIG. 13 is reconfigured during failover caused by a failed (client device) node;
  • FIG. 16 is a block diagram illustrating how the fiber optic ring network illustrated in FIG. 13 is reconfigured during failover caused by a failed MUX at a head end of the fiber optic ring network.
  • the network includes a pair of fiber optic cables 102 that traverse a loop or ring.
  • At each node 104, the fiber optic cables are segmented so that signals on the pair of fiber optic cables are received by the link multiplexer 106 at that node, and then either processed or forwarded to the next segment of the fiber 102.
  • the link multiplexers 106 perform numerous functions: forwarding signals from one optical fiber segment to the next, routing signals from the optical fiber cables to client devices or communication lines, and routing signals to the optical fiber cables from client devices or communication lines.
  • the link multiplexers 106 also combine signals from multiple sources using time division and wavelength division techniques so as to transmit them over the fiber optic cables 102.
  • the link multiplexer 106 at each node 104 is typically coupled to other devices or communication lines via a switch or switch fabric 108.
  • the switch 108 connects various devices and communication channels to the host (or client) side ports of the link multiplexer.
  • the switches 108 are generally conventional switching devices and fabrics, such as time division multiplexed busses, and in some circumstances are not needed, and are therefore not described further in this document.
  • An example of a node 104-1 is a node that is coupled to a local area network (LAN).
  • the LAN may, in turn, be coupled to any number of server computers 110 and end user workstations 112.
  • the LAN may be coupled to the link multiplexer 106 for the node 104 by a switch 108 and router 114, or perhaps just a router 114 if no other switching functionality is required.
  • a second example of a node 104-2 is one that provides an Internet connection 116 to the network 100, via a router 114 that is coupled to the link multiplexer of node 104-2.
  • a node 104-3 is one that contains a "disk farm" 118, which is generally a set of disks for providing centralized data storage used by devices coupled to other ones of the nodes 104.
  • the present invention makes it practical for companies with buildings at multiple locations throughout a city or similar region to use centralized data storage.
  • the speed of data access provided by the fiber optic network 100 is so high that there is little if any perceptible difference to end users between having data storage in each of the facilities as compared with having data storage at a single, central storage node on the ring network.
  • the link multiplexers 106 (FIG. 1) contain optical transceivers that both transmit and receive signals on each optical fiber 102.
  • the bidirectional physical signal paths between Node 1 and Node 2 are as follows: 1) OL12-1, 2) OL12-2, 3) OL13-1 - OL23-1, and 4) OL13-2 - OL23-2.
  • each optical fiber channel is used to send signals in only one direction.
  • The bidirectional signal paths on each optical fiber are formed using a particular type of "coarse wavelength division multiplexing."
  • one optical wavelength is used to transmit a stream of data, while a second optical wavelength is used to receive a stream of data.
  • "coarse" wavelength division multiplexing means that the optical wavelengths of the two optical signals are at least 10 nm apart (and preferably at least 20 nm apart) from each other.
  • each optical fiber cable carries a first 2.5 Gbps data stream at 1510 nm, and a second 2.5 Gbps data stream at 1570 nm. The two data streams flow in opposite directions.
  • port A of Mux 1 in one node is always connected to port B of Mux 1 in a neighboring node, and similarly port A of Mux 2 in one node is always connected to port B of Mux 2 in a neighboring node.
  • the λ1 signals flow in one direction through the ring while the λ2 signals flow in the opposite direction.
  • each link multiplexer uses four Mux units 142.
  • the four optical wavelengths used are 1510 nm, 1530 nm, 1550 nm, and 1570 nm.
  • each data signal transmitted over the optical fiber cables transmits data at a speed of 5.0 Gbps or 10 Gbps, thereby doubling or quadrupling the bandwidth of the optical fiber ring, without increasing the number of optical fiber cables used.
  • Referring to FIG. 3, there is shown a preferred embodiment of a link multiplexer 106.
  • the link multiplexer 106 includes up to eight link cards 140, and two multiplexer units 142.
  • Each link card 140 provides a high speed connection to a client device or communication channel.
  • two types of link cards are used: one for Fibre Channel connections, which operate at 1.0625 Gbps, and another for connections to Gigabit Ethernet channels, which operate at 1.25 Gbps. Internally, the link cards operate at 1.25 Gbps.
  • the link cards and multiplexer units operate at a rate that is greater than 1.25 Gbps, such as 1.28 Gbps or 1.5 Gbps or even 2.0 Gbps.
  • link cards 140 included in a link multiplexer 106 can include any combination of Fibre Channel and Gigabit Ethernet link cards (e.g., two FC link cards, or two GE link cards, or one of each, or four of each, and so on).
  • Each multiplexer unit 142 handles up to four full duplex, full bandwidth Fibre Channel (FC) or Gigabit Ethernet (GE) data streams. More specifically, each multiplexer can transmit as much as 5.0 Gbps of data, over two physical channels that each operate at 2.5 Gbps, and can receive as much as 5.0 Gbps of data, over two other physical channels that each operate at 2.5 Gbps.
  • the link card 140 includes a Fibre Channel or Gigabit Ethernet interface 150 for coupling to a full duplex Fibre Channel or Gigabit Ethernet data stream.
  • the interface 150 can also be used to couple to fractional data streams, in particular to half bandwidth data streams (operating at 0.53125 Gbps for Fibre Channel or 0.5125 Gbps for Gigabit Ethernet).
  • Two buffers 152 are used to buffer data being transmitted in each direction, in particular for providing retiming between the clock domains of the Mux units (i.e., Mux 1 and Mux 2) and the clock domains of client device(s).
  • Mux unit interface 154 is used to couple the link card to the two multiplexer units 142-1 and 142-2.
  • FIG. 5 shows a more detailed diagram of the link card 140, which will now be described starting with gigabit interface cards (GBIC's) 160 at the bottom of FIG. 5 and working upwards toward the Mux unit interface 154.
  • the link card 140 includes a pair of GBIC's 160, each of which couples the link card 140 to a full duplex Fibre Channel or Gigabit Ethernet channel.
  • the transmission media which connects the client devices 159 (also sometimes called the host system or client communication channel) to the link card 140, is typically a coaxial cable or fiber optic cable.
  • the GBIC's 160 transmit and receive serial data streams. In order to describe the data flow in a consistent manner, the data stream from the Mux units 142 to the client devices is referred to as an outbound data stream, and the data stream in the opposite direction is referred to as an inbound data stream.
  • Each of the two GBIC's 160 is coupled to a respective link interface frame processor 164 by a respective serializer/deserializer (SERDES) circuit 162, such as the Vitesse VSC7125 (for Fibre Channel data streams running at 1.0625 Gbps) or the Vitesse VSC7135 (for Gigabit Ethernet data streams running at 1.25 Gbps).
  • the SERDES 162 converts the inbound serial data stream received from the GBIC 160 into a 10-bit parallel data stream and transmits the converted data stream to the link interface frame processor 164. Also, the SERDES 162 converts a 10-bit parallel outbound data stream received from the link interface frame processor 164 into a serial data stream and transmits the converted data stream to the GBIC 160.
  • the link interface frame processor 164 decodes 10b symbols in the inbound data stream from the GBIC into 8b symbols, and encodes 8b symbols received from the outbound frame buffer 168 into 10b symbols suitable for transmission.
  • the link interface frame processor 164 also controls the operation of an inbound frame buffer 166 and the outbound frame buffer 168.
  • a link card channel which includes the GBIC 160, SERDES 162, link interface frame processor 164, and a pair of inbound and outbound FIFO (first-in-first-out) frame buffers 166,168, can operate in one of two modes under user control: distance buffering enabled or disabled. When distance buffering is disabled the data frames and flow control primitives are passed through the inbound and outbound frame buffers 166, 168 as quickly as possible.
  • the link interface frame processor 164 receives and interprets flow control primitives received from the client device and then controls reading the data from the outbound frame buffer 168 as requested by the client device.
  • the client device controls the reading of data from the outbound frame buffer 168 using flow control primitives.
  • the flow control primitives are not passed through the buffers 166, 168 when distance buffering is enabled. Instead, the flow control primitives are consumed by the link interface frame processor 164.
  • When distance buffering is enabled, the link interface frame processor 164 generates flow control primitives to send to the client device based on the fullness of the inbound frame buffer 166. Furthermore, when distance buffering is enabled the Mux interface frame processor 170 generates and receives its own flow control primitives that are sent to the link card(s) connected thereto. It should be noted that the buffers 166, 168 preferably do not overflow in normal operation with distance buffering either enabled or disabled.
  • the link interface frame processor 164 extracts "extra" Idle symbols (sometimes called "Idles") from the inbound data stream, storing only data frames and one Idle between data frames in the inbound FIFO frame buffer 166. Thus, if there is more than one Idle between data frames, the extra ones are not stored in the inbound FIFO frame buffer 166. For the outbound data streams, the link interface processor 164 inserts as many Idles as may be needed to fill the space between data frames being transmitted to the client devices.
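  For illustration only, here is a minimal Python sketch of the Idle squelch-and-refill rule described in the preceding bullet. The word-level representation and the names used are assumptions made for the sketch, not details taken from the patent.

      # Store frames plus at most one Idle between frames; refill Idles on output.
      def squelch_idles(words):
          """Yield the stream with runs of Idle primitives collapsed to a single Idle."""
          previous_was_idle = False
          for w in words:
              if w == "IDLE":
                  if previous_was_idle:
                      continue              # drop the "extra" Idle
                  previous_was_idle = True
              else:
                  previous_was_idle = False
              yield w

      def refill_idles(words, line_slots):
          """Re-expand the stream to exactly line_slots words by padding with Idles."""
          out = list(words)
          out.extend("IDLE" for _ in range(line_slots - len(out)))
          return out

      inbound = ["IDLE", "IDLE", "IDLE", "SOF", "D1", "D2", "EOF", "IDLE", "IDLE"]
      stored = list(squelch_idles(inbound))          # ['IDLE', 'SOF', 'D1', 'D2', 'EOF', 'IDLE']
      outbound = refill_idles(stored, len(inbound))  # padded back to nine words with Idles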
  • Each word stored in the frame buffers 166 and 168 includes a pair of 8-bit characters, a flag to indicate if the first character of the pair is a "K" character, and a parity bit, for a total of 18 bits. K characters are special symbols used for control, and thus are not ordinary data. Examples of K characters are Idles, flow control primitives, and begin of frame and end of frame symbols.
  • Each frame buffer 166, 168 is preferably large enough to store hundreds of Fibre Channel (FC) or Gigabit Ethernet frames.
  • each frame buffer 166, 168 is sufficiently large to allow 240 full sized FC frames (of 2148 bytes each, including headers, CRC and delimiters) to be stored.
  • the link card 140 is able to accept from each data channel of the client device up to 240 full size Fibre Channel (FC) frames more than the next downstream device has indicated it is ready to accept.
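  As a rough back-of-the-envelope check (this arithmetic is not in the patent text), the Python snippet below estimates how much data and how much round-trip delay 240 full-size FC frames of buffering represent, using the frame size quoted above and the roughly 5 microseconds of one-way delay per kilometre implied by the distance example later in this document.

      frame_bytes  = 2148                    # full-size FC frame incl. headers, CRC, delimiters
      frames       = 240                     # frames each buffer can hold
      buffer_bytes = frames * frame_bytes    # 515,520 bytes, i.e. roughly half a megabyte

      byte_rate     = 1.0625e9 / 10          # approx. payload bytes/s on an FC link after 8b/10b coding
      buffer_time_s = buffer_bytes / byte_rate        # ~4.9 ms of link time held in one buffer

      # ~5 us of one-way delay per km gives ~10 us of round trip per km of node separation.
      covered_km = buffer_time_s / 10e-6              # on the order of 500 km of separation
      print(buffer_bytes, round(buffer_time_s * 1e3, 2), round(covered_km))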
  • the link interface processor 164 also translates protocol specific frame delimiters, idle words, and link synchronization characters into generic counterparts that are sent through the rest of the fiber optic ring network 100. As a result, all components of the fiber optic ring network other than the link interface processors operate in a manner that is protocol independent. In the case of Fibre Channel link cards, the link interface processors translate 4 byte idles and link synchronization words into 2 byte generic versions, which are written to the inbound frame buffer 166. Similarly, when the 2 byte generic versions of these symbols are read from the outbound frame buffer 168, they are converted back to the 4 byte Fibre Channel versions, with the reading of the outbound frame buffer paused as necessary to align data frames to the 4 byte boundaries.
  • the Mux unit interface 154 includes a Mux interface frame processor 170 that controls the flow of data between the frame buffers 166, 168 and the Mux units 142 (Mux 1 and Mux 2 of FIG. 3).
  • the Mux interface frame processor 170 also decodes 10b symbols in the two data streams received from Mux 1 and Mux 2 into 8b symbols, and encodes 8b symbols received from the frame buffers 168 into 10b symbols suitable for transmission over optical fiber cables.
  • the Mux interface frame processor 170 handles flow control as follows. When distance buffering is disabled, the MUX interface frame processor 170 passes data frames and flow control primitives through the inbound and outbound frame buffers 166, 168 as quickly as possible.
  • the MUX interface frame processor 170 needs to wait for the inbound frame buffer 166 to collect enough (i.e., a predefined amount) of the FC frame before it starts transmitting the frame to the MUX 142 to avoid a buffer underrun condition. This is because the MUX unit interface 154 always operates at 1.25 Gbps and the FC link interface operates at 1.0625 Gbps. To avoid an underrun condition in FC mode, the MUX interface processor 170 waits until at least 384 bytes of an FC frame are in the inbound FIFO buffer 166 before starting to read the frame, or until that much time has elapsed to handle the case when the frame is less than 384 bytes in length. In the case of Gigabit Ethernet, there is no need to wait before starting to read the frame out of the buffer since the clock speeds of the client device and the Mux unit interface 154 are matched.
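  As a sanity check of the 384-byte threshold mentioned above (this calculation is not in the patent text), the minimum pre-fill needed so that a 1.25 Gbps reader never overtakes a 1.0625 Gbps writer on a maximum-size FC frame can be estimated as follows:

      write_rate = 1.0625   # Gbps, FC client side filling the inbound FIFO
      read_rate  = 1.25     # Gbps, Mux unit interface draining it
      frame      = 2148     # bytes, largest FC frame

      # Reading may start once `prefill` bytes are present.  Avoiding underrun
      # requires the writer to finish the frame no later than the reader:
      #   prefill/write_rate + frame/read_rate >= frame/write_rate
      min_prefill = frame * (1 - write_rate / read_rate)
      print(min_prefill)    # ~322 bytes, so the 384-byte threshold leaves margin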
  • the MUX interface frame processor 170 executes a flow control protocol with the link interface frame processor 164 for that channel. For instance, if the outbound frame buffer 168 starts filling up, this condition is detected by the MUX interface frame processor 170, which responds by sending flow control signals to the MUX interface frame processor 170 in the "sending link card" (connected to the sending client device), which then stops sending frames over the fiber optic network, leaving them in the sending link card's inbound frame buffer 166.
  • the MUX interface frame processors 170 in the sending and receiving link cards will exchange flow control messages (using flow control primitives) and allow the data frames to start flowing again.
  • When consecutive link synchronization characters are received, only one of them is stored in the inbound FIFO frame buffer 166.
  • When reading data from the inbound FIFO frame buffer 166, the Mux interface frame processor 170 replicates the link synchronization characters and/or Idles as many times as may be needed to fill the data stream being sent downstream.
  • the Mux unit interface 154 draws data from the inbound frame buffer 166 and sends it to the Mux units 142 at a fixed rate for so long as the Mux units 142 are able to accept data and there is data in the inbound frame buffer 166 to be sent.
  • the Mux unit interface accepts data from the Mux units 142 as long as the outbound frame buffer 168 has room to store at least one additional full size frame (e.g., 32k bytes for Gigabit Ethernet frames), and stops accepting data (i.e., new frames of data) from the Mux units 142 once the outbound frame buffer 168 passes that predefined fullness mark.
  • the Mux interface frame processor 170 is coupled to each of the two multiplexer units 142 by a respective serializer/deserializer (SERDES) circuit 174.
  • the SERDES 174 converts the serial data stream from a multiplexer 142 into a 10 bit parallel data stream that is transmitted to the Mux interface frame processor 170, and converts a 10 bit parallel data stream received from the Mux interface frame processor 170 into a serial data stream that is transmitted to one of the Mux units 142.
  • the Mux interface frame processor 170 is statically configured by the user to route data to and from client device interface 0 from and to one of the MUX's (i.e., either MUX 1 or MUX 2). Client device interface 1 data then would be routed to and from the other MUX 142 not being used by client device interface 0.
  • In an alternative configuration, the MUX interface frame processor 170 is configured to route frames from both client interfaces 159 to the same MUX 142, and the frames would be specially tagged so they could be sent out to the appropriate client device, via the appropriate client device interface at the other end of the link.
  • the other MUX 142 would then be used for failover, in case there is a failure in a hardware component in the original path. Routing frames from both device interfaces 159 to the same MUX is particularly useful when the frames have been compressed by the link interface frame processors 164 (i.e., in embodiments in which the link interface frame processors 164 include data compression circuitry).
  • Each link card 140 also includes a CPU or controller 180 for controlling the operation of the link card 140, and, in particular, for configuring the data paths through the link card 140 and for initializing the link card 140 upon power up, reset, or a change in the data channel configuration of the system that changes the data paths through the link card 140.
  • the link interface frame processor 164 further performs data compression and decompression functions, compressing outbound data streams using a predefined data compression method and decompressing inbound data streams using a corresponding decompression method.
  • Numerous appropriate data compression methods are well known to those skilled in the art, and thus are not described here.
  • By using a data compression method that, on average, achieves at least 2:1 data compression, the bandwidth of the system can be doubled.
  • FIG. 6 illustrates a block diagram of the Mux interface frame processor 170.
  • the Mux interface frame processor 170 includes a pair of FIFO read circuits 181: a first FIFO read block 181-1 provided for the interface 0 inbound FIFO and a second FIFO read block 181-2 provided for the interface 1 inbound FIFO.
  • Each FIFO read circuit 181-1, 181-2 is configured to control the reading of the corresponding inbound FIFO frame buffer 166 (FIG. 5). If there is nothing (i.e., no packets, and no other data other than idles or link initialization primitives) in the buffer 166, then the FIFO read circuit will repeatedly output the most recently read link initialization word or Idle word.
  • Each FIFO read circuit 181-1, 181-2 also holds off reading from its corresponding inbound FIFO 166 if the corresponding Inband Tx circuit 185-1 or 185-2 is processing a pending request or if the corresponding Rx Credit logic 190-1 or 190-2 is enabled and indicates there is no credit available.
  • the FIFO read circuit 181 delays reading a packet from the inbound FIFO until enough of the frame is present to ensure that an underrun will not occur (as already discussed above).
  • the pair of FIFO read circuits 181-1, 181-2 are coupled to a pair of multiplexors 183-1, 183-2 configured to allow Interface 0 to be statically connected to MUX 1 or MUX 2 and Interface 1 to be connected to the other MUX.
  • this MUX configuration can be enhanced to allow the data streams from both client device interfaces to be blended and routed through a single MUX 142 (MUX 1 or MUX 2), for instance during failover.
  • the Mux interface frame processor 170 also includes a series of circuit blocks: an Inband Tx circuit 185, a Tx Credit circuit 186 and an 8b/10b encoder 187.
  • Each Inband Tx circuit 185 includes registers to hold a pending Inband frame that the onboard CPU 180 wishes to send and arbitration logic to send the Inband frame when the FIFO Read circuit 181 signals that there is a gap between the inbound frames being transmitted through the Mux interface frame processor.
  • Inband frames are periodically transmitted by the link card 140.
  • the Inband frames are removed from the data stream by the Inband Rx circuit 189 in the receiving link card, which sends the received Inband frames to that link card's local CPU/controller 180 for processing.
  • the CPU's 180 on the two link cards at the two ends of a communication channel can send messages (i.e., in Inband frames) back and forth to check on the functionality of the channel and to coordinate user settings (i.e., to make sure the user settings, including the user setting indicating whether distance buffering is enabled or disabled, are the same) and the like.
  • Each Tx Credit circuit block 186 is configured to insert link credit words onto the link instead of Idles when distance buffering is enabled and there is room in the outbound frame buffer 168 (FIG. 5) to store an additional packet, taking into account all previously sent link credit words.
  • Each 8b/10b encoder 187 is configured to encode the 16 data bits and 1-bit K-character flag read from the FIFO into two 10 bit characters and to send the resulting 20 bit word to the SERDES 174.
  • the data receiving circuitry of the Mux interface frame processor 170 includes a cascading chain of three circuit blocks for each receive channel: an Rx DataPath circuit 191, an Rx Credit circuit 190 and an Inband Rx circuit 189.
  • the Rx DataPath circuit 191-1 is substantially identical to the Rx DataPath circuit in the TDM smoother in MUX1 and MUX2, and will be discussed below with respect to FIG. 11.
  • the Rx Credit circuit 190 strips link credit words from the data stream and for each such link credit word adds to the available storage credit, if distance buffering is enabled.
  • the storage credit accumulated by the Rx Credit circuit 190 indicates how many data frames the corresponding FIFO read circuit 181 can read from its inbound FIFO and send down the channel to the link card on the other side of the channel.
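  The Tx Credit / Rx Credit exchange described in the preceding bullets amounts to simple per-frame credit accounting. Below is an illustrative Python sketch of that accounting; the class and method names, and the single "LINK_CREDIT" word, are assumptions made for the sketch rather than details taken from the patent.

      class TxCredit:
          """Advertises one credit per frame of free buffer space, never over-advertising."""
          def __init__(self, buffer_frames):
              self.buffer_frames = buffer_frames   # frames the local frame buffer can hold
              self.stored = 0                      # frames currently sitting in that buffer
              self.advertised = 0                  # credits sent but not yet consumed

          def credit_to_send(self):
              free = self.buffer_frames - self.stored - self.advertised
              if free > 0:                         # room for one more frame beyond what was promised
                  self.advertised += 1
                  return "LINK_CREDIT"             # transmitted in place of an Idle
              return None

          def frame_arrived(self):                 # a frame promised by a credit has landed
              self.advertised -= 1
              self.stored += 1

          def frame_drained(self):                 # a frame has been read out of the buffer
              self.stored -= 1

      class RxCredit:
          """Strips credit words from the stream and counts them for the FIFO read logic."""
          def __init__(self):
              self.credits = 0

          def on_word(self, word):
              if word == "LINK_CREDIT":
                  self.credits += 1
                  return None                      # credit words are consumed, not forwarded
              return word

          def may_send_frame(self):
              if self.credits > 0:
                  self.credits -= 1
                  return True
              return False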
  • the Rx Inband circuit 189 strips inband frames from the data stream and stores them for reading by the link card's local CPU 180.
  • a pair of outbound multiplexors 184-1 and 184-2 are configured to allow MUX 1 to be statically connected to client device Interface 0 or Interface 1 and MUX 2 to be connected to the other Interface. As stated above, this MUX configuration can be enhanced to allow the data streams from both client device interfaces to be blended and routed through a single MUX 142 (MUX 1 or MUX 2).
  • The output of each multiplexor 184 is sent to a FIFO Write circuit 182, which writes received frames to the outbound FIFO frame buffer 168 (FIG. 5) and also writes link initialization words and Idles to the buffer 168 when they are different from the immediately preceding words in the data stream.
  • the Mux interface frame processor 170 further includes Status and Control Registers 192, which are a set of registers that are readable and/or writeable by the link card's local CPU 180 in order to monitor and control the Mux interface frame processor.
  • each communication channel is either in Fibre Channel (FC) mode or Gigabit Ethernet (GE) mode.
  • a different version of the link interface frame processor 164 (FIG. 5) is provided for use with client devices transmitting and receiving data in each mode.
  • An FC link interface frame processor, depicted in FIG. 7, is provided for FC mode.
  • A GE link interface frame processor, depicted in FIG. 8, is provided for GE mode.
  • the FC Link Interface Frame Processor 164-1 includes an FC Rx DataPath circuit 193-1 that is substantially similar to the RX DataPath circuit (described below with respect to FIG. 1 1) used in the Mux TDM-Smoother of the Mux interface frame processor.
  • the RX DataPath circuit 193-1 has additional logic at its front end to convert 4 character FC idle and link initialization words to predefined 2 character generic counte ⁇ arts. These counte ⁇ art symbols are used only internal to the fiber optic ring network (i.e., within the link cards 140, MUX's 142 and the fiber optic cables).
  • the FC link interface frame processor 164-1 further includes:
  • An Rx Credit circuit 194-1 that strips link credit words (RRDY) from the data stream and for each such link credit word adds to the available storage credit, if distance buffering is enabled.
  • A FIFO Write circuit 195-1 that writes received frames to the inbound FIFO frame buffer 166 (FIG. 5) and writes link initialization words and idles to the FIFO when they are different from the immediately preceding words in the data stream.
  • A FIFO Read circuit 200-1 that controls the reading of data frames from the outbound FIFO frame buffer 168 (FIG. 5). If there is nothing in the outbound FIFO frame buffer, then the FIFO Read circuit 200-1 will repeatedly output the most recently read link initialization or idle word. If distance buffering is enabled, the FIFO Read circuit furthermore holds off reading data frames from the FIFO if no data frame credits are available.
  • A ring to FC conversion circuit 199-1 that converts 2-character generic idle and link initialization words into their standard FC 4-character counterparts.
  • A Tx Credit circuit 198-1 that inserts link credit words onto the link instead of Idles when distance buffering is enabled and there is room in the inbound FIFO frame buffer 166 to store an additional data frame, taking into account previously sent link credit words (see the later section on distance buffering for details).
  • An 8b/10b encoder 197-1 that is configured to encode the 16 data bits and 1-bit K-character flag read from the outbound FIFO frame buffer 168 into two 10 bit characters and to send the resulting 20 bit word to the SERDES 162 (FIG. 5).
  • An FC Statistics circuit 196-1 that includes logic to maintain statistics on one Fibre Channel link. Several modes of operation are supported to provide users detailed protocol specific information, for instance packet counts, error counts, counts of various specific types of K characters, and so on.
  • the GE Link Interface Frame Processor 164-2 is used on link cards that couple client devices using Gigabit Ethernet to the fiber optic network.
  • the GE IFP 164-2 includes a GE Rx DataPath circuit 193-2 that is substantially similar to the Rx DataPath circuit (described below with respect to FIG. 11) used in the Mux TDM-Smoother of the Mux interface frame processor.
  • the Rx DataPath circuit 193-2 has additional logic at its front end to convert GE idle words into predefined 2-character generic counterparts.
  • the delimiters framing Gigabit Ethernet frames are modified to a generic format. These counterpart symbols are used only internal to the fiber optic ring network (i.e., within the link cards 140, MUX's 142 and the fiber optic cables).
  • the GE link interface frame processor 164-2 further includes:
  • RxPause logic circuit 194-2 strips Pause frames from the data stream and starts a pause timer, internal to the link card, if distance buffering is enabled.
  • a FIFO Write circuit 195-1 that writes received frames to the inbound FIFO frame buffer 166 (FIG. 5) and writes link initialization words and idles to the FIFO 166 when they are different from the immediately preceding words in the data stream.
  • a FIFO Read circuit 200-1 that controls the reading of data frames from the outbound FIFO frame buffer 168 (FIG. 5). If there is nothing in the outbound FIFO frame buffer, then the FIFO Read circuit 200-1 will repeatedly output the most recently read link initialization or idle word. If Rx Pause logic (which is used in conjunction with distance buffering) is enabled, the FIFO Read circuit furthermore holds off reading data frames from the FIFO if the Rx Pause logic circuit 194-2 indicates that transmission should be paused.
  • a ring to GE conversion circuit 199-2 that converts 2-character generic idles into Gigabit Ethernet idle words, and generic frame delimiters back into Gigabit Ethernet frame delimiters.
  • a Tx Pause circuit 198-2 that generates and inserts into the outbound data stream a Pause frame when distance buffering is enabled and the inbound FIFO frame buffer 166 is at least half full.
  • the FIFO fullness threshold level for generating a Pause frame may differ in other embodiments. See the discussion on distance buffering, below, for details.
  • An 8b/10b encoder 197-2 that is configured to encode the 16 data bits and 1-bit K-character flag read from the outbound FIFO frame buffer 168 (FIG. 5) into two 10 bit characters and to send the resulting 20 bit word to the SERDES 162 (FIG. 5).
  • a GE Statistics circuit 196-2 that includes logic to maintain statistics on one Gigabit Ethernet channel link. Several modes of operation are supported to provide users detailed protocol specific information, for instance packet counts, error counts, counts of various specific types of K characters, and so on.
  • each Mux unit 142 includes a pair of wavelength division multiplexer and demultiplexer circuits (WDM) 202-1, 202-2, each of which is coupled at one end to a respective optical fiber cable segment, for example OL12-1 and OL13-1.
  • Each of the WDM's 202-1, 202-2 includes an optical signal receiver for receiving and demodulating signals at a first optical wavelength and an optical signal transmitter for transmitting signals at a second optical wavelength. More specifically, in a preferred embodiment the first WDM 202-1 transmits at the same optical wavelength λ1 that the second WDM 202-2 receives, and receives at the same optical wavelength λ2 that the second WDM 202-2 transmits.
  • each Mux unit 142 is configured to handle four wavelengths over eight channels.
  • the four wavelength, eight channel card has twice as many SERDES 208, TDM/Smoother 206 and SERDES 204 circuits compared with the two wavelength, four channel Mux depicted in FIG. 9.
  • the four additional SERDES 208 are connected to the crosspoint switch 210.
  • Two SERDES 204 are then connected to each of its Fiber 1 WDM and Fiber2 WDM circuits.
  • Fiber 1/WDM then transmits at wavelengths λ1 and λ3, and receives at wavelengths λ2 and λ4.
  • Fiber 2/WDM transmits at wavelengths λ2 and λ4, and receives at wavelengths λ1 and λ3.
  • Each of the data signals received and transmitted by the WDM's 202-1, 202-2, both on the optical side and on the electrical (internal) side, is a 2.5 Gbps serial data signal in the preferred embodiment. In other embodiments, other data transmission rates may be used, such as 5.0 Gbps or 10 Gbps.
  • Each WDM 202 is coupled to a respective time division multiplexer and smoothing circuit (TDM smoother) 206 by a respective serializer/deserializer (SERDES) circuit 204, such as the Vitesse VSC7146 (for data streams running at 2.5 Gbps).
  • Each SERDES 204 converts the 2.5 Gbps serial data stream from its respective WDM 202 into a 20 bit parallel data stream that is transmitted to the TDM smoother 206 to which it is coupled, and converts a 20 bit parallel data stream received from the TDM smoother 206 into a serial data stream that is transmitted to the WDM 202.
  • the TDM smoother 206 performs a number of functions, including retiming of signals between clock domains, the multiplexing of data streams from two 1.25 Gbps channels into a single 2.5 Gbps data stream, and the demultiplexing of data streams from a 2.5 Gbps data stream into two 1.25 Gbps channels.
  • the TDM smoother 206 is described in more detail below, with reference to FIGS. 10 and 11.
  • the TDM smoother 206 internally uses 20b parallel data streams. On its Channel AB interfaces (which are coupled to the WDM's 202) it outputs and receives 20b parallel data streams. On its Channel A and Channel B interfaces, which are coupled to a crosspoint switch 210, the TDM outputs 10b parallel data streams.
  • a pair of SERDES circuits 208, such as the Vitesse VSC7135, are coupled to the switch side of each TDM to convert the 10b, 125 MHz data streams going to and from the switch side TDM interface into 1.25 GHz (i.e., 1.25 Gbps) serial data signals that are transmitted to and from the crosspoint switch 210.
  • the crosspoint switch 210 is a 16 x 16 crosspoint switch, such as the Triquint TQ8017 1.25 Gbps 16 x 16 digital crosspoint switch.
  • the 2.5 Gbps signal received by the Mux unit 142 from each optical fiber cable includes two 1.25 Gbps data signals, which in turn may be sub-divided into two or more logical signals.
  • Each 1.25 Gbps data signal is considered to be a separate logical channel, and each such channel may be either an FC channel or a GE channel.
  • the two data channels on a single optical fiber cable may be two FC channels, two GE channels, or one FC channel and one GE channel. Since FC and GE data streams are both converted into a generic data stream that is protocol independent, the two data channels within each 2.5 Gbps signal can be any combination of underlying data streams.
  • Each Multiplexer unit 142 includes a CPU or controller 212 for configuring the switch 210 and keeping track of the status of the TDM smoothers 206 and WDM's 202.
  • FIG. 10 shows a more detailed diagram of the TDM smoother circuit 206.
  • the left side of the diagram represents the switch side interface between the TDM smoother 206 and the SERDES circuits 208 (FIG. 9), while the right side of the diagram represents the WDM side interface between the TDM and the SERDES circuits 204 (FIG. 9).
  • the inbound data path through the TDM smoother 206 converts the Channel A and Channel B Rx data streams into a combined Channel AB Tx data stream, while the outbound data path through the TDM smoother 206 converts the Channel AB Rx data stream received from a WDM circuit into a pair of Channel A and Channel B Rx data streams.
  • the incoming data is converted by the link cards into a 1.25 Gbps stream that has frames of "encapsulated" data (i.e., surrounded by special start and end frame characters).
  • Each frame of data that is transmitted through the link multiplexer begins and ends with a special 20-bit encapsulation character.
  • the link multiplexer transmits 20-bit flow control characters between link cards to stop and start the flow of frames between the FIFO frame buffers in the link cards.
  • Data characters, which are 10 bits each, are transmitted through the link multiplexer in pairs, in 20-bit chunks.
  • the basic unit of transmission in the link multiplexer, both for data and control characters, is 20 bits long.
  • the set of predefined 20-bit control characters used in the link multiplexer of the preferred embodiment includes, but is not limited to, the following:
  • VIO Internal Violation
  • NOS Internal Not Operational Sequence
  • Each of the special 20-bit characters used in the link multiplexer consists of a predefined K28.5 10-bit character, followed by a 10-bit character specific code.
  • the K28.5 character is the "special" character most commonly used in Fibre Channel and Gigabit Ethernet to denote control characters, as opposed to data characters. It is an ideal character because it has a predefined "comma" bit pattern ("0011111") used by deserialization circuits to align received bit streams to word boundaries.
  • Another special character, called the K28.1 character, also contains the comma bit pattern.
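  To make the comma-alignment idea concrete, here is a small illustrative Python sketch. The 10-bit K28.5 encoding shown (running-disparity-negative form) and the 7-bit comma pattern are standard 8b/10b facts rather than values quoted in this document, and the function name is invented for the sketch.

      K28_5_RD_NEG = "0011111010"     # 10-bit K28.5, RD- form; the RD+ form is its complement
      COMMAS = ("0011111", "1100000") # the comma pattern and its complement

      def find_alignment(bits):
          """Return the bit offset of the first comma pattern, or None if absent."""
          for i in range(len(bits) - 6):
              if bits[i:i + 7] in COMMAS:
                  return i
          return None

      stream = "10110" + K28_5_RD_NEG + "0101010101"   # arbitrary bits, then a K28.5
      print(find_alignment(stream))                     # -> 5, the start of the K28.5 character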
  • When combining two or more data streams for transmission over an optical fiber cable, the link multiplexer marks a first one of the data streams by replacing all its K28.5 characters with the K28.1 character, which enables the receiving device to separate out and properly identify the different logical data streams within the received physical data stream.
  • the two inbound data paths each begin with a latch 230 that stores every second 10-bit character, which is then combined with the immediately following 10-bit character to form a stream of 20-bit characters.
  • the 20-bit characters are transmitted through an Rx DataPath circuit 232, which is described in more detail below with reference to FIG. 11.
  • the TDM 234 combines the Channel A and Channel B data streams using strict alternation. That is, it alternates between transmitting a 20-bit character from Channel A and a 20-bit character from Channel B. For instance, on even numbered clock cycles Channel A data is selected by the TDM 234 for transmission, and on odd numbered clock cycles Channel B data is selected by the TDM 234 for transmission.
  • the TDM 234 replaces all the K28.5 characters in Channel A with K28.1 characters.
  • the TDM 234 marks a first one of the logical channels by replacing its K28.5 characters with K28.1 characters, which enables the receiving device to identify all the logical channels within the received signal.
  • the K28.5 characters in all the other logical channels are left unchanged.
  • Because every link multiplexer 106 in the system uses identical multiplexer units, all the data streams transmitted over the optical fiber cable segments use the same marking scheme for distinguishing between a first subchannel within each data stream and the other subchannel(s) in the same data stream. Since the 20-bit data streams are combined "blindly," there can be "false commas" straddling the boundary between the two disparate 20-bit characters. As a result, the SERDES circuits in the link cards and multiplexer units are run with their "comma detect" mode disabled, except during link initialization, so as to prevent the SERDES circuits from accidentally realigning the received bit streams on a false comma.
  • a time division demultiplexer (TDDM) 240 receives a 20-bit 125 MHz signal.
  • the received signal contains two logical subchannels, in the form of alternating 20-bit characters.
  • the TDDM 240 inspects the received signal to (A) find the 20-bit word boundaries in the signal and (B) determine which logical subchannel has K28.1 characters and is therefore Channel A.
  • the TDDM 240 transmits the 20-bit characters from the first subchannel through a first Rx DataPath circuit 232-3 to a first output buffer 244-1, and transmits the 20-bit characters for the second subchannel through a second Rx DataPath circuit 232-4 to a second output buffer 244-2.
  • the output buffers 244 each convert a received stream of 20-bit 62.5 MHz data signals into a 125 MHz stream of 10-bit characters.
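  The strict-alternation multiplexing and the K28.5-to-K28.1 marking described in the bullets above can be illustrated with a short Python sketch. It is purely symbolic (characters are represented as strings and word alignment is assumed to be known); it is not the hardware implementation, and the restore step at the receiver is an assumption made to round out the example.

      def tdm_combine(chan_a, chan_b):
          """Interleave 20-bit characters, marking Channel A by rewriting K28.5 to K28.1."""
          out = []
          for a, b in zip(chan_a, chan_b):
              out.append(a.replace("K28.5", "K28.1"))   # mark the first logical channel
              out.append(b)                             # second channel left unchanged
          return out

      def tddm_split(combined):
          """Split the interleave; the phase carrying K28.1 characters is Channel A."""
          even, odd = combined[0::2], combined[1::2]
          if any("K28.1" in w for w in even):
              chan_a, chan_b = even, odd
          else:
              chan_a, chan_b = odd, even
          chan_a = [w.replace("K28.1", "K28.5") for w in chan_a]   # restore standard form
          return chan_a, chan_b

      a = ["K28.5-IDLE", "A-data0", "A-data1"]
      b = ["K28.5-IDLE", "B-data0", "B-data1"]
      link = tdm_combine(a, b)                  # alternating A, B, A, B, ...
      recovered_a, recovered_b = tddm_split(link)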
  • Each of the Rx DataPath circuits 232 receives a stream of 20 bit symbols, but outputs a data stream of 18 bit symbols, each of which includes 16 bits of data, one flag to indicate if the first 8 bits need to be encoded as a "K character," and a valid bit to indicate if the data word is valid or invalid. These 18 bits are then encoded by a respective one of the 8b/10b encoders 242.
  • the Channel A data path circuit within the RX DataPath circuit also has an Inband Tx circuit 246 for inserting special inband control frames into the data stream during the idle times.
  • Control information can be distributed to the controllers in the link cards and multiplexer units of a fiber optic network by a single computer system or a single node controller on the fiber optic network.
  • the controllers within the network system communicate with each other by having the control CPU's 212 of the MUX units 142 (and the control CPU's 180 of the link cards 140) send these inband frames.
  • the control CPU 212 writes the frame to a 64 byte register inside the Inband Tx circuit 246.
  • the control CPU then writes a flag to tell the hardware that the frame is ready to go.
  • the Inband Tx circuit 246 inserts the control frame with a special start of frame delimiter onto the DataPath instead of Idles.
  • the Inband Rx circuit 248 detects the special start of frame delimiter and stores the 64 byte frame data into the next one of eight Rx buffers (included in Status and Control Registers 254).
  • the Inband Rx circuit 248 propagates Idles instead of the inband control frame data to the subsequent 8b/10b encoder 242-3.
  • the Inband Rx circuit marks the Rx buffer into which the frame was written as being in use and signals the control CPU 212 that an Inband control frame is available. Once the control CPU 212 has read the frame, it marks the Rx buffer as available. If a special inband frame is received and the next Rx buffer is not available, the inband frame data is discarded by the Inband Rx circuit 248.
  • the TDM smoother 206 also includes a set of status and control registers 254 that are read by the Mux unit's CPU 212 via a CPU interface 252.
  • In an alternate embodiment, the TDM 234 does not change the K character symbols of one of the data streams so as to mark the A and B channels. Instead, the link cards of the system insert immediately before each frame a special Start of Packet (SOP) K character, replacing the Idle that immediately precedes the frame with an SOP symbol.
  • the TDM 234, upon receiving an SOP symbol from the Channel A data path converts that symbol into a SOP1 symbol, thereby marking the data in Channel A as the first data channel.
  • the TDDM 240 inspects the received signal to (A) find the 20-bit word boundaries in the signal and (B) determine which logical subchannel has SOP1 characters and is therefore Channel A.
  • the TDDM 240 transmits the 20-bit characters from the first subchannel through a first Rx DataPath circuit 232-3 to a first output buffer 244-1, and transmits the 20-bit characters for the second subchannel through a second Rx DataPath circuit 232-4 to a second output buffer 244-2.
  • the TDDM 240 converts the SOP and SOP1 symbols back into Idle symbols, since these special symbols are only for internal use within the fiber optic network.
  • the Rx DataPath circuit 232 (of which there are four instances in each TDM smoother 206, FIG. 10, and two instances in each MUX interface frame processor 170, FIG. 6) receives a 20 bit signal and converts it to 16 bit data, a K character flag and an invalid word flag.
  • the Rx DataPath circuit 232 replaces any invalid words that are in a frame with a special violation word (FVIO), eliminates any invalid words that are outside of a frame, and retimes the data stream onto the local clock of the link card or MUX unit. It also maintains a count of invalid words received so that failing links can be easily isolated.
  • Each received 20 bit word is initially decoded into 16 bit data and flags by a 10b to 8b decoder circuit 274.
  • the decoder circuit 274 produces a K-character flag plus a valid flag that indicates whether the 20 bit word was made up of valid 10 bit codes.
  • the 16 bit data and flags are sent to a word decoder and loss of synch state machine 276.
  • the word decoder 276 keeps track of whether the received data is inside a frame or outside of a frame by recognizing the start of frame and end of frame delimiters. If the received word is valid, the 16 bit data and K character flag are passed as is to a drop circuit 278. If the received word is invalid and the data is in the middle of a frame, the word is replaced with the special FVIO word. Downstream logic will recognize that this is not the original data, but it will not count it as an invalid word, to facilitate error isolation, because it is not known where along the data path the error occurred except that it occurred at a node prior to the receiving node. If the received word is invalid and the data is not in a frame, then a Force Drop flag is asserted to the drop circuit 278 so that the invalid word will be dropped completely from the data stream.
  • If the state machine 276 detects four invalid words within any ten consecutive words, the state machine 276 assumes that the received data stream has lost synchronization. In this case it will propagate an FNOS word to the drop circuit 278, marked with a K-character flag and an Insert/Drop OK flag. After this, the state machine inspects the incoming data stream and replaces each word in the data stream with an FNOS word until it receives three consecutive valid words that are either Link Initialization words or Idles, at which point the state machine 276 assumes that synchronization of the received data has been re-established and resumes propagating words from the data stream to the drop circuit 278.
  • the word decoder and loss of synch state machine 276 determines if the received word is an Idle or one of a predefined set of four link initialization words. When any of these five symbols is detected, the state machine 276 sets a corresponding one of 5 idle/init decode flags and also sets the Insert/Drop OK flag. The 16 bit data, K character flag, 5 idle/init decode flags and the Insert/Drop OK flag are passed through a 23 bit wide FIFO 280. In a preferred embodiment, the FIFO 280 stores up to 128 words, each 23 bits wide.
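  The loss-of-synchronization rule spelled out above (four invalid words within any ten consecutive words triggers loss of sync; three consecutive valid Idle or link initialization words restore it) can be sketched as a small state machine. The sketch below is illustrative Python, with word validity modelled as (word, valid) tuples and invented symbol names; it is not the hardware state machine itself.

      from collections import deque

      IDLE_OR_INIT = {"IDLE", "LI1", "LI2", "LI3", "LI4"}   # one Idle plus four link init words

      def synch_filter(stream):
          """Yield the words to propagate, substituting FNOS while out of sync."""
          window = deque(maxlen=10)   # validity of the last (up to) 10 words
          in_sync = True
          good_run = 0
          for word, valid in stream:
              if in_sync:
                  window.append(valid)
                  if list(window).count(False) >= 4:   # 4 invalid within 10 consecutive words
                      in_sync, good_run = False, 0
                      yield "FNOS"
                  else:
                      yield word
              else:
                  good_run = good_run + 1 if (valid and word in IDLE_OR_INIT) else 0
                  if good_run >= 3:                     # sync considered re-established
                      in_sync = True
                      window.clear()
                      yield word
                  else:
                      yield "FNOS"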
  • the drop circuit 278, 128x23b FIFO 280 and an insert circuit 282 form a smoother or data retiming circuit.
  • the drop circuit 278 and the write side of the FIFO 280 operate on the Rx Clock (recovered by the external SERDES circuit from the serial receive data).
  • the insert circuit 282 and the read side of the FIFO 280 operate on a System clock that comes from a local oscillator. Nominally, both of these clocks operate at the same frequency, but in practice they will be slightly different, and thus the need to retime the data stream.
  • the drop circuit 278 normally writes to the FIFO 280 every clock cycle. However if the Force Drop flag is on (i.e., set), or if the FIFO 280 is more than half full and the Insert/Drop Ok flag is on, the FIFO write enable will be suppressed and the current word from the decoder 276 will be discarded (i.e., it will not be written into the FIFO 280).
  • the insert circuit 282 normally reads from the FIFO 280 every cycle. However if the FIFO 280 is less than one quarter full and the last word read from the FIFO 280 had the Insert/Drop OK flag set, the FIFO read is suppressed and last read word is replicated onto the output.
  • If the System clock is slightly slower than the Rx Clock, the FIFO 280 will occasionally go past half full since the rate of reads from the FIFO is slightly slower than the rate of writes to the FIFO.
  • the drop circuit 278 will then occasionally drop words to keep the FIFO 280 less than half full. If the System clock is slightly faster than the Rx Clock, then the FIFO will occasionally go below one quarter full since the rate of reads from the FIFO is slightly faster than the rate of writes.
  • the insert circuit 282 will then insert a word into the data stream to keep the FIFO above one quarter full.
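  A compact way to see the drop/insert retiming rule from the last few bullets is the Python sketch below, where the FIFO is a plain deque and each "clock tick" is a function call. The 128-word depth and the half/quarter thresholds come from the text; everything else (names, the tuple representation, the empty-FIFO guard) is an assumption made for illustration.

      from collections import deque

      DEPTH = 128
      fifo = deque()
      last_read = ("IDLE", True)        # (word, insert_drop_ok), seeded with a repeatable word

      def write_side(word, insert_drop_ok, force_drop=False):
          """Called once per Rx clock; may silently drop expendable words."""
          if force_drop:
              return                                     # invalid word outside a frame
          if len(fifo) > DEPTH // 2 and insert_drop_ok:
              return                                     # drop an Idle/link-init word to drain the FIFO
          fifo.append((word, insert_drop_ok))

      def read_side():
          """Called once per System clock; may replicate the last repeatable word."""
          global last_read
          if not fifo or (len(fifo) < DEPTH // 4 and last_read[1]):
              return last_read[0]                        # insert (or FIFO empty): repeat the last word read
          last_read = fifo.popleft()
          return last_read[0]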
  • the insert circuit 282 has some special features to support the transmission of Inband data.
  • When the Inband Tx circuit 246 (e.g., of the TDM smoother) has a pending Inband frame and asserts its Inband Tx request signal, and the FIFO 280 is less than half full, the insert circuit 282 stops reading from the FIFO 280 and sends an "Inband Tx go" signal to the Inband Tx circuit that is immediately downstream from the Rx DataPath circuit 232.
  • the insert circuit continues to replicate the current word on its output for several clock cycles, until the entire pending Inband frame has been inserted into the data stream by the Inband Tx circuit.
  • While the Inband Tx go signal is asserted, the downstream Inband Tx circuit will replace the data from the Rx DataPath circuit with the Inband Tx frame. Once the inband frame transmission is complete, the Inband Tx circuit de-asserts the Inband Tx request signal, and the insert circuit 282 resumes normal operation. After an Inband frame has been sent, the FIFO 280 will often be more than half full, and therefore the drop circuit 278 will drop as many words as possible to bring the FIFO back under half full.
  • the FIFO 280 will not be overrun while inband transmission is in progress, since the inband transmission will not start until the FIFO 280 is less than half full.
  • Another function of the insert circuit 282 is to set a valid output flag for use by the MUX interface frame processor instances of the Rx DataPath circuit 232.
  • the insert circuit 282 sets the valid output flag whenever (A) the word read from the FIFO does not have its Insert/Drop OK flag on, or (B) the word read from the FIFO is not an Idle or link initialization word that is the same as the previous word as determined by the 5 idle/init flags passed through the FIFO.
  • the MUX interface frame processor uses the valid output flag to determine what words need to be written to the Outbound frame buffer 168 (FIG. 5).
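The valid-output rule can be restated compactly. The flag encoding below (an idle/init code, or None for ordinary data) is an assumption made for illustration.

```python
def valid_output(insert_drop_ok, idle_init_code, prev_idle_init_code):
    """True when the word should be written to the Outbound frame buffer.

    idle_init_code identifies which of the five Idle/Link-Init decode flags was
    set for this word, or is None for ordinary data words.  Only a repeated
    Idle/Link-Init word that also carries the Insert/Drop OK flag is skipped.
    """
    repeated_filler = (idle_init_code is not None
                       and idle_init_code == prev_idle_init_code)
    return (not insert_drop_ok) or (not repeated_filler)
```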
  • the Rx DataPath valid output flag is not used by the TDM smoother 206.
  • the Rx DataPath circuits 193-1 and 193-2 in the link cards have a slightly modified word decoder and state machine 276.
  • the word decoder 276 includes a FIFO, which can have a length of one or two words, enabling the word decoder to perform a look-ahead for start of frame (SOF) symbols preceded by an Idle. Whenever this combination of two symbols is detected by the word decoder 276, the word decoder replaces the Idle immediately preceding the SOF with an SOP symbol.
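A minimal sketch of this look-ahead substitution, assuming a simple word-per-symbol representation; the SOF delimiter names are placeholders, not the actual encodings.

```python
SOF_SYMBOLS = {"SOF1", "SOF2"}   # placeholder names for start-of-frame delimiters

def mark_sop(words):
    """Replace an Idle that immediately precedes an SOF symbol with an SOP symbol."""
    out = list(words)
    for i in range(len(out) - 1):
        if out[i] == "IDLE" and out[i + 1] in SOF_SYMBOLS:
            out[i] = "SOP"
    return out
```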
  • the data stream paths through the ring network are statically configured. That is, the signal paths are not constantly being determined on the fly. Rather, it is assumed that the customers using the ring network lease bandwidth on the network on an ongoing basis. Signal paths through the network are, in general, changed only when (A) there is a change in the set of leased channels on the network, or (B) there is a link failure.
  • the host devices (also called clients or client devices) communicating via the ring network are many kilometers apart. For instance, when two devices are fifty kilometers apart, with a round trip communication path of 100 kilometers, the round trip communication time is at least 500 microseconds, excluding the time it takes for the receiving device to receive and respond to an incoming signal.
  • When the input buffers of the receiving device are small (e.g., 8k bytes), the effective bandwidth of a 1.0625 Gbps channel may be much smaller than the full bandwidth. For instance, consider a system in which a client device requests files from a disk farm at a node that is fifty kilometers away, and the requesting client's input buffers hold only 8k bytes (i.e., about four Fibre Channel frames).
  • When the client sends its initial data request, it also sends four storage credits to the disk farm node. It does not send more than four credits because that would cause the disk farm to send more data than the client can reliably buffer and process, which would result in costly resend requests and large delays.
  • The disk farm, using prior art methodology, responds by sending only four FC frames of data and waits until it receives more storage credits from the requesting client before sending any more. However, it takes at least 500 microseconds for the client to receive the first data and return another three credits. Thus, at best, the client is able to receive 8k bytes per 500 microsecond period, or a total data rate of about 16 Megabytes per second, as opposed to the 100 Megabytes per second bandwidth of the channel. Thus, in this example, about 84% of the available bandwidth is wasted due to the long round trip time required for sending storage credits to the sending device. This performance can be improved by increasing the size of the requesting client's input buffers, as well as by sending a new storage credit as soon as each frame is received.
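The figures in this example can be checked with a back-of-the-envelope calculation; the propagation speed in fiber and the frame size used below are assumed round numbers.

```python
distance_km = 50
fiber_speed_km_per_s = 200_000                    # roughly 2/3 the vacuum speed of light
round_trip_s = 2 * distance_km / fiber_speed_km_per_s   # 0.0005 s, i.e. 500 microseconds

buffer_bytes = 8 * 1024                           # four roughly 2 KB Fibre Channel frames
effective_bytes_per_s = buffer_bytes / round_trip_s      # ~16 MB per second

channel_bytes_per_s = 100e6                       # ~100 MB/s payload of a 1.0625 Gbps link
wasted_fraction = 1 - effective_bytes_per_s / channel_bytes_per_s   # ~0.84, about 84%
print(round_trip_s, effective_bytes_per_s, wasted_fraction)
```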
  • bandwidth usage is improved by providing frame buffers 166, 168 (FIG. 5) in the link cards and then splitting the flow control into three separate domains.
  • the domains are (1) client device to link card, (2) link card to link card across the fiber optic network, and (3) link card to client device.
  • As long as the buffering in the client devices is sufficient to handle the round trip link time from the client device to the link card, full bandwidth can be maintained, in part because of the large frame buffers 166, 168 provided in the link cards and in part due to the use of the link cards as the senders and receivers of storage credits.
  • the Link interface frame processor will issue flow control primitives to the attached client device to maintain maximum bandwidth while ensuring the Inbound frame buffer does not overflow. Based on the flow control primitives issued by the attached client device, the Link interface frame processor will control the reading of the Outbound frame buffer 168.
  • After a Fibre Channel link is initialized, Fibre Channel devices perform a login procedure that includes exchanging how many buffer to buffer credits they have.
  • the number of buffer to buffer credits advertised by a first client at one end of a Fibre Channel link is the number of frames a second client, attached to the first client by the link, can send to the first client before it needs to wait for additional credit. Additional credits are transferred by sending a special word, called RRDY.
  • One RRDY word transfers one credit, which enables the device that receives it to transmit one Fibre Channel frame.
  • the fiber optic network of the present invention allows the login procedure between the two attached client devices to complete without modifying the information exchanged.
  • the link cards of the system do, however, determine the number of buffer to buffer credits supported by the devices at each end of the link by examining the login frames as they pass through. Referring to FIG. 7, the inbound frame buffer 166 and outbound frame buffer 168 can each hold a maximum of 240 maximum sized Fibre Channel frames. Whenever a frame is written to the inbound frame buffer 166, the RX_credit circuit 194-1 increments a "pending RRDY" counter internal to the link card.
  • Whenever the pending RRDY counter is greater than zero, the TX_Credit circuit 198-1 inserts a Fibre Channel RRDY word into the Tx data stream and decrements the pending RRDY counter. If the inbound frame buffer is more than half full, the RRDY's are held pending until the inbound frame buffer drops below half full.
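A simplified, assumed model of this RRDY pacing (the counter names and the half-full threshold are taken from the text; the API is invented for illustration):

```python
INBOUND_CAPACITY = 240   # maximum-size FC frames held by the inbound frame buffer 166

class InboundCreditPacer:
    def __init__(self):
        self.pending_rrdy = 0
        self.frames_in_buffer = 0

    def on_frame_written(self):
        """RX_credit circuit 194-1: a frame was written to the inbound frame buffer."""
        self.frames_in_buffer += 1
        self.pending_rrdy += 1

    def on_frame_read(self):
        """A frame was forwarded onto the network, freeing buffer space."""
        self.frames_in_buffer -= 1

    def maybe_send_rrdy(self):
        """TX_Credit circuit 198-1: return an RRDY word to insert, or None."""
        if self.pending_rrdy > 0 and self.frames_in_buffer < INBOUND_CAPACITY // 2:
            self.pending_rrdy -= 1
            return "RRDY"
        return None     # held pending while the buffer is at or above half full
```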
  • The local FC client device is actually operating under the assumption that it can only send as many frames as was specified by the client device on the remote end of the fiber optic network.
  • If the remote device's buffer to buffer credit is less than or equal to 120 frames, the flow control between the local client device and the inbound frame buffer will operate properly. If the advertised buffer credit (of the remote device) is greater than 120, then distance buffering can be disabled, in which case all frames and RRDY's will be passed through the system end to end with no internal buffering of storage credits.
  • most FC client devices have buffer to buffer credits in the range of two to sixteen frames. Very few FC client devices have internal buffering for even as many as sixty-four FC frames.
  • the flow control of frames from the Outbound Frame Buffer 168 to the client device operates as follows.
  • The link card must obey the buffer to buffer credit advertised by the attached device during login.
  • The TX_Credit circuit 198-1 initializes an available credit counter to the advertised number as it examines the login frames. Subsequently, whenever it sends a frame, it decrements the available credit counter by one. Whenever the RX_credit circuit 194-1 receives an RRDY, it increments the available credit counter by one. As long as the available credit counter is greater than zero, frames are read from the Outbound frame buffer and sent to the client device. If the available credit counter is zero, then frames are held pending in the Outbound frame buffer until an RRDY arrives.
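The outbound credit gating just described can be sketched as follows; this is an illustrative model, not the actual circuit, and the method names are assumptions.

```python
class OutboundCreditGate:
    def __init__(self, advertised_credit):
        # Learned by examining the login frames of the attached client device.
        self.available = advertised_credit

    def on_rrdy_received(self):
        """RX_credit circuit 194-1: the client returned one buffer-to-buffer credit."""
        self.available += 1

    def try_send_frame(self, outbound_buffer):
        """TX_Credit circuit 198-1: release one frame to the client if credit allows."""
        if self.available > 0 and outbound_buffer:
            self.available -= 1
            return outbound_buffer.pop(0)
        return None     # frames stay in the Outbound frame buffer until an RRDY arrives
```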
  • the Tx_Pause circuit 198-2 sends an Ethernet Pause frame to the attached device with the pause_time field set to the maximum value. This should cause the attached device to stop sending Ethernet frames.
  • the Tx_Pause circuit 198-2 sends an Ethernet Pause frame with a zero in the pause_time field to allow the attached device to resume sending frames.
  • When a Pause frame is received from the attached device, the pause time counter is loaded from its pause_time field. The pause time counter is decremented by 1 every 512 bit times (per the Ethernet standard). If the pause time counter is greater than zero, then frames are held pending in the Outbound Frame Buffer by the FIFO read circuit 200-2.
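A minimal model of this pause handling; the tick granularity follows the 512-bit-time rule in the text, while the class and method names are assumptions.

```python
class PauseGate:
    """Holds outbound frames while a pause requested by the attached device is in effect."""

    def __init__(self):
        self.pause_time = 0

    def on_pause_frame(self, pause_time_field):
        # The maximum value stops transmission; zero allows it to resume immediately.
        self.pause_time = pause_time_field

    def tick(self):
        # Called once every 512 bit times, per the Ethernet standard.
        if self.pause_time > 0:
            self.pause_time -= 1

    def may_read_outbound_buffer(self):
        return self.pause_time == 0
```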
  • the link card to link card flow control operates in a fashion very similar to the standard Fibre Channel flow control mechanism.
  • the MUX interface frame processors 170 assume the attached link card has 120 buffer credits available. The available buffer credit is decremented by 1 (by the Tx credit circuit 186) each time a frame is sent from an inbound frame buffer.
  • the available buffer credit is incremented by 1 (by the Rx credit circuit 190) each time an intra-network buffer credit word (called FRRDY, for "Finisar RRDY") is received. If the available buffer credit is zero, frames are held pending in the inbound FIFO frame buffer.
  • an FRRDY intra-network buffer credit is sent back across the network (by the Tx credit circuit 186) each time a frame is written to the outbound FIFO frame buffer. If the outbound frame buffer is more than half full, the FRRDY intra-network buffer credits are held pending (and are sent once the outbound FIFO frame buffer becomes less than half full).
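The FRRDY return path can be modeled in the same style (assumed API; the 240-frame capacity and half-full threshold come from the text):

```python
OUTBOUND_CAPACITY = 240   # maximum-size frames held by the outbound frame buffer 168

class FrrdyReturner:
    def __init__(self):
        self.owed = 0     # FRRDY credits owed to the sending link card
        self.fill = 0     # frames currently in the outbound frame buffer

    def on_frame_written_to_outbound(self):
        self.fill += 1
        self.owed += 1

    def on_frame_read_from_outbound(self):
        self.fill -= 1

    def maybe_send_frrdy(self):
        """Return an FRRDY word to send back across the network, or None."""
        if self.owed > 0 and self.fill < OUTBOUND_CAPACITY // 2:
            self.owed -= 1
            return "FRRDY"
        return None       # withheld while the outbound buffer is at or above half full
```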
  • a first channel blending scheme allows the two channels on a single link card to be "blended" into a single channel by the Mux unit interface 154.
  • a dual channel link card will use only a single 1.25 Gbps channel on one MUX unit, instead of one channel on each of two MUX units as was described previously.
  • the two channels on this link card are then connected to two channels on another link card. While one channel is transmitting a frame to the MUX unit interface 154, any frames from the other channel are held in its Inbound FIFO frame buffer 166.
  • the large size of the frame buffers 166 on the link card gives the system the ability to absorb relatively long bursts of frames, up to 360 frames, arriving from both channels at the same time without having to slow down the senders. Whenever the frame bursts from the channels are shorter than that, the idle times between bursts are used to empty the inbound frame buffers over the single Mux channel, without having to send flow control words to force the client device to slow down the rate at which it sends frames.
  • the normal SOF (start of frame) delimiters used internal to the fiber optic network have one bit modified to indicate which link card channel the data is being sent to and from.
  • In a second channel blending scheme, multiple link cards (in two or more network nodes) are connected in a logical ring. All frames are encapsulated with a target link card ID. As frames arrive in the MUX interface frame processor, the target link card ID is decoded. If the target link card ID in the frame matches the ID of the receiving link card, the frame is stored in the appropriate outbound frame buffer. If the target link card IDs do not match, the data is forwarded to another node through the MUX interface frame processor. If data is not currently being passed back through the MUX interface frame processor, data from one of the inbound frame buffers can then be sent out to the MUX unit.
  • a supplemental buffer is provided to buffer one frame inside the MUX interface frame processor.
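The forwarding decision in this second blending scheme reduces to a simple comparison on the encapsulated target ID. The dict-based frame representation below is purely an assumption for illustration.

```python
def handle_ring_frame(frame, my_link_card_id, outbound_buffers, forward):
    """Decide whether to drop a frame off locally or pass it on around the ring.

    `frame` is assumed to be a dict carrying the encapsulated target link card ID
    and a channel number; `forward` sends the frame on toward the next node.
    """
    if frame["target_link_card_id"] == my_link_card_id:
        outbound_buffers[frame["channel"]].append(frame)   # store in the outbound frame buffer
    else:
        forward(frame)                                     # pass through toward another node
```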
  • the link cards meter their data flow onto the network by using a "leaky bucket" methodology to keep their bandwidth to a user specified amount. If a given link card is not using all of its specified bandwidth, it can send a bandwidth credit onto the network, which another link card can claim to temporarily burst above its user specified maximum.
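A generic leaky-bucket meter of the kind referred to above might look like the following; the byte-based accounting and the burst parameter are assumptions, since the patent does not specify them.

```python
import time

class LeakyBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s      # user-specified bandwidth limit
        self.capacity = burst_bytes       # how far ahead of the drain we may run
        self.level = 0.0
        self.last = time.monotonic()

    def _drain(self):
        now = time.monotonic()
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now

    def try_send(self, frame_bytes):
        """True if the frame may be sent now without exceeding the configured rate."""
        self._drain()
        if self.level + frame_bytes <= self.capacity:
            self.level += frame_bytes
            return True
        return False      # hold the frame; sending it would exceed the leak rate
```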
Responding to Link Failures

  • The ring architecture of the present system can be used with redundancy techniques to provide a complete redundancy solution, enabling the system to be reconfigured in response to virtually any component failure so as to restore data transmission services to all or almost all nodes of the system.
  • FIG. 12 shows the symbols that will be used in the following figures to represent one node of a fiber optic network in accordance with the present invention.
  • Each Mux unit of the node is shown as a rectangular box, and each optical fiber cable connected to the node is represented by two lines, with each line representing a distinct logical channel.
  • Link cards are represented by smaller boxes next to one of the Mux boxes.
  • a label such as "2LC: 1 A2/1B2" indicates the number of link cards, and also indicates which Mux unit ports are connected to the link cards.
  • FIG. 13 shows a typical fiber optic network in accordance with the present invention.
  • the "Head End” node may be considered a "Point of Presence” node for a service provider and the "Customer" nodes are high bandwidth clients served by the Point of Presence.
  • Customers 3 and 5 have two "clear" channels all the way to the Head End node.
  • the other customers have one clear channel each, along with another clear channel to another customer node. It should be understood that the system configuration shown in FIG. 13 is merely one example of the many possible configurations.
  • A standard switch, either Fibre Channel or Gigabit Ethernet, can be provided at a customer node, connected to that node's link cards.
  • the switch is enabled only during certain failover modes of operation, as will be described below.
  • these switches are used in the context of the present invention to automatically route around any failures in the links from the switch to the link card, or with the link card itself.
  • These switches also allow backup paths to be provided by the fiber optic network.
  • the switch at Customer node #2 is enabled, routing traffic back and forth between the two link cards at that node.
  • Customer node #6 loses its direct connection to the head end, but the activation of the switch at Customer node #2 provides it with a path to the head end through Customer node #2.
  • the total maximum bandwidth of nodes 2 and 6 will be cut in half, though each node could use the original maximum bandwidth if the other node is idle.
  • Customer node #3 loses one of its direct paths to the head end, but has a second path to the head end that is active.
  • a more complicated failover scenario is when one of the Mux cards in a customer node fails.
  • nodes with just two link cards are configured to use the same Mux unit with one link card going out one Mux port and the other link card going out the other Mux port.
  • the link card controllers will configure both link cards to use a user specified backup path on the other Mux unit and the external switch on that node will be activated, as shown for Customer node #1 in FIG. 15. In a system where all network data paths are used, this reconfiguration will result in the two link cards of the node with the failed Mux unit being inserted in the middle of an already used link.
  • a client node Mux unit failure appears the same as a fiber break and thus is handled as discussed previously.
  • As shown in FIG. 16, another failover mode is used when one of the "Head End" Mux units fails.
  • the Head End will be connected to the optical fiber cables so that each Mux unit is connected to both fiber optic rings.
  • the external switches at two or more customer nodes will need to be activated.
  • each customer node will have at least one link card that is still connected by a live data path to the remaining alive Mux unit at the Head End.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Small-Scale Networks (AREA)
  • Optical Communication System (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An optical fiber ring network (100) includes a plurality of interconnected nodes (104-1, 104-2, 104-3, 104-4), each pair of neighboring nodes being interconnected by a pair of optical links. Using coarse wavelength division multiplexing, data is transmitted in both directions over each link, a first wavelength λ1 being used for transmitting data in a first direction over the link and a second wavelength λ2 being used for transmitting data in a second, opposite direction over the link. The two wavelengths λ1 and λ2 differ by at least 10 nm. Each data stream transmitted over an optical link (102) has a bandwidth of at least 2.5 Gbps. Furthermore, each data stream carries at least two embedded logical streams. A link multiplexer (106), provided at each node of the network (100), has one or more link cards (140-1, 140-2) for coupling the link multiplexer (106) to client devices (159), and one or more multiplexer units (142-1, 142-2) for coupling the link multiplexer (106) to the optical links. Each link card (140) includes frame buffers (152) capable of storing sequences of Fibre Channel frames transmitted to or from the client devices coupled to that link card (140).
EP00963413A 1999-09-13 2000-09-13 Systeme de communication a anneau de fibre optique Withdrawn EP1212861A4 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US15351999P 1999-09-13 1999-09-13
US153519P 1999-09-13
PCT/US2000/025089 WO2001020835A1 (fr) 1999-09-13 2000-09-13 Systeme de communication a anneau de fibre optique

Publications (2)

Publication Number Publication Date
EP1212861A1 true EP1212861A1 (fr) 2002-06-12
EP1212861A4 EP1212861A4 (fr) 2005-01-05

Family

ID=22547558

Family Applications (1)

Application Number Title Priority Date Filing Date
EP00963413A Withdrawn EP1212861A4 (fr) 1999-09-13 2000-09-13 Systeme de communication a anneau de fibre optique

Country Status (8)

Country Link
EP (1) EP1212861A4 (fr)
JP (1) JP2003509955A (fr)
KR (1) KR20020059400A (fr)
CN (1) CN100367693C (fr)
AU (1) AU7482900A (fr)
CA (1) CA2384869C (fr)
IL (1) IL148594A0 (fr)
WO (1) WO2001020835A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6718139B1 (en) 1999-09-13 2004-04-06 Ciena Corporation Optical fiber ring communication system
JP2005209057A (ja) 2004-01-26 2005-08-04 Matsushita Electric Ind Co Ltd データ通信方法
JP4646839B2 (ja) * 2006-03-17 2011-03-09 富士通株式会社 光伝送装置、光伝送システム及び光伝送方法
JP2008067402A (ja) * 2006-04-28 2008-03-21 Furukawa Electric Co Ltd:The 加入者宅側光回線終端装置及び光伝送システム
CN105790844B (zh) * 2014-12-25 2018-03-02 沈阳高精数控智能技术股份有限公司 一种通用的支持多种拓扑的光纤通信方法
WO2024216724A1 (fr) * 2023-04-18 2024-10-24 宁德时代新能源科技股份有限公司 Système de batterie et dispositif électrique

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5864414A (en) * 1994-01-26 1999-01-26 British Telecommunications Public Limited Company WDM network with control wavelength
EP1063803A1 (fr) * 1999-06-15 2000-12-27 Lucent Technologies Inc. Réseau en anneau pour paquets optiques à large bande

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5784377A (en) * 1993-03-09 1998-07-21 Hubbell Incorporated Integrated digital loop carrier system with virtual tributary mapper circuit
US5526155A (en) * 1993-11-12 1996-06-11 At&T Corp. High-density optical wavelength division multiplexing
ATE219876T1 (de) * 1994-04-15 2002-07-15 Nokia Corp Transportnetz mit hoher übertragungskapazität für die telekommunikation
JP3068018B2 (ja) * 1996-12-04 2000-07-24 日本電気株式会社 光波長分割多重リングシステム

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5864414A (en) * 1994-01-26 1999-01-26 British Telecommunications Public Limited Company WDM network with control wavelength
EP1063803A1 (fr) * 1999-06-15 2000-12-27 Lucent Technologies Inc. Réseau en anneau pour paquets optiques à large bande

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO0120835A1 *

Also Published As

Publication number Publication date
EP1212861A4 (fr) 2005-01-05
CN100367693C (zh) 2008-02-06
JP2003509955A (ja) 2003-03-11
IL148594A0 (en) 2002-09-12
AU7482900A (en) 2001-04-17
KR20020059400A (ko) 2002-07-12
CA2384869C (fr) 2008-04-01
CA2384869A1 (fr) 2001-03-22
CN1390403A (zh) 2003-01-08
WO2001020835A1 (fr) 2001-03-22

Similar Documents

Publication Publication Date Title
US6718139B1 (en) Optical fiber ring communication system
US6201787B1 (en) Automatic loop segment failure isolation
US6289002B1 (en) Automatic isolation in loops
US6854031B1 (en) Configurable serial interconnection
US7596321B2 (en) Time division multiplexing of inter-system channel data streams for transmission across a network
US8000600B2 (en) Method and an apparatus for preventing traffic interruptions between client ports exchanging information through a communication network
US7308006B1 (en) Propagation and detection of faults in a multiplexed communication system
CA2330742C (fr) Elimination des donnees non valables dans un reseau en boucle
CA2384869C (fr) Systeme de communication a anneau de fibre optique
US7639655B2 (en) Ethernet switch interface for use in optical nodes
EP1703677B1 (fr) Système de terminaison de ligne d'accès, unité de terminaison de ligne d'accès et méthode de commande de transmission
CN101094075A (zh) 对控制信号进行中继的以太网通信系统
JP2002232441A (ja) 通信装置
JP2003264568A (ja) データ伝送システム
SE509246C2 (sv) Metod och anordning för tidluckeåteranvändning

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20020319

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: CIENA CORPORATION

A4 Supplementary search report drawn up and despatched

Effective date: 20041118

RIC1 Information provided on ipc code assigned before grant

Ipc: 7H 04J 14/02 A

Ipc: 7H 04Q 11/00 B

17Q First examination report despatched

Effective date: 20050209

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20070530