
Juniper QFX5200

Deep Dive

Michael Pergament, TME

1 Copyright © 2015 Juniper Networks, Inc.


Cloud Speed Adoption
[Chart: percent of server shipments by Ethernet speed (1GbE, 10GbE, 25GbE, 40GbE, 50GbE, 100GbE), 2014-2018, showing the shift toward 25GbE and faster server connections. Switch models referenced alongside the speeds: EX4300, QFX5100-48S, QFX5100-24Q, QFX5200-32C.]


Source: Dell’Oro 2015

2 Copyright © 2015 Juniper Networks, Inc.


Why 25/50GbE Matters

Reduces data center CapEx by providing a migration path from 10GbE to 25GbE, leveraging existing (single port/lane) 10GbE infrastructure

Maximizes ports and bandwidth in the ToR switch faceplate (supports high server density in a rack)

2.5x speed increase at almost the same cost as 10GbE

Transition path from 25GbE to 50GbE to 100GbE

On the ToR, the same QSFP28 can be used for 25/50GbE (breakout) and 100GbE

3 Copyright © 2015 Juniper Networks, Inc.


Use Case 1: MSDC
[Diagram: spine (S) / leaf (L) / access (A) fabric topology]

 Web services are growing fast

 40% Y/Y growth from 2015 to 2017

 Strong growth projections into 2017

 Enable MSDC customers to transition towards 25/50 Gbps in access and 100 Gbps in aggregation/core with much better economics: a 2.5x speed increase at the same cost as 10 Gbps

4 Copyright © 2015 Juniper Networks, Inc.


Use Case 2: FSI

Need for high performance

Regulatory requirements for precision timing: all transactions time stamped in the EU by 2017, followed by the U.S.

Ultra-low latency (< 700 ns)

40GbE access in FSI colo

5 Copyright © 2015 Juniper Networks, Inc.


QFX5200

Problem
 Coping with change in server access technology
 Investment protection & SDN adoption
 Applications driving architecture diversity
 Increasing operational complexity & cost

Solution
 Choice of 10GbE, 25GbE, 40GbE, 50GbE, 100GbE
 VXLAN L2 gateway, OVSDB and EVPN
 ISSU (post-FRS) & automation integration

Benefits
 Future proof and investment protection
 Open & standards based for multi-vendor networks
 ZTP for simplified operation
 ISSU (post-FRS) with less than one second traffic impact during network software upgrades; upgrade time reduced from 5-15 minutes to seconds
 OpenFlow (post-FRS)

Deployment options shown in the slide graphic: IP Fabric (FRS), MPLS Fabric (FRS), MC-LAG (FRS), Junos Fusion (post-FRS), Network Overlays / VXLAN (post-FRS)
6 Copyright © 2015 Juniper Networks, Inc.
QFX5200-32C
 1 rack unit
 4-core IvyBridge CPU +
Cave Creek PCH
 16 GB of main memory and 64 GB
SSD
 32x QSFP ports
 PTP connectors: GbE port for
Grand Master connectivity + 2
SMB connectors for PPS (Pulse
per second) and 10MHz clock
output
 Dual PSU with 1+1 redundancy
with AC and DC options
 5 fan FRUs with n+1 redundancy
 Both AFO and AFI airflows
supported
 1 RJ-45 and 2 SFP management
ports
 1x USB 2.0
 1x RS-232 Console port
7 Copyright © 2015 Juniper Networks, Inc.
QFX5200-64Q
 2 rack unit
 4-core IvyBridge CPU + Cave
Creek PCH
 16 GB of main memory and 64 GB
SSD
 64x 40GbE or 32x 100GbE ports
 PTP connectors: GbE port for
Grand Master connectivity + 2
SMB connectors for PPS (pulse
per second) and 10MHz clock
output
 Dual PSU with 1+1 redundancy
with AC and DC options
 5 fan FRUs with n+1 redundancy
 Both AFO and AFI airflows
supported
 1 RJ-45 and 2 SFP Management
ports
 1x USB 2.0
 1x RS-232 Console port
8 Copyright © 2015 Juniper Networks, Inc.
QFX5200 Multi-Speed Support

 Each 100GbE port on QFX5200-32C can be channelized into:
 4 x 10 Gbps (max 128 ports)
 4 x 25 Gbps (max 128 ports)
 2 x 50 Gbps (max 64 ports)
 1 x 40 Gbps (max 32 ports); 4 x 10 Gbps SerDes for each 40 Gbps port
(a configuration sketch follows this slide)

 QFX5200-64Q can have 64 x 40 Gbps ports. Only 2 out of 4 rows will support 100 Gbps interfaces; 2 x 20 Gbps SerDes for each 40 Gbps port

 ASIC reset is not required for 40 -> 100 Gbps conversion
9 Copyright © 2015 Juniper Networks, Inc.
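
A minimal configuration sketch for channelizing a port, assuming the QFX5200 follows the QFX-series chassis channelization CLI (the channel-speed knob and hierarchy are assumptions to verify against the release documentation):

    # Illustrative: break 100GbE port 0 on a QFX5200-32C into 4 x 25GbE
    set chassis fpc 0 pic 0 port 0 channel-speed 25g
    commit
    # The channelized interfaces would then appear as et-0/0/0:0 through
    # et-0/0/0:3, per the naming convention on the port speed conversion slide.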
NG Fixed 40GbE/100GbE Data Center Leaf/
Spine Switches Summary

                                   QFX5200-32C   QFX5200-64Q
Size, RU                           1             2
Switch Throughput                  3.2 Tbps      3.2 Tbps
25GbE (Breakout Cable, QSFP28)     128           128
10GbE* (Breakout Cable, QSFP+)     128           128
40GbE (QSFP+)                      32            64
50GbE (Breakout Cable, QSFP28)     64            64
100GbE (QSFP28)                    32            32
PTP                                Built-in      Built-in
Power Supplies                     850W each     1600W each

* Port speeds < 10G are not supported

10 Copyright © 2015 Juniper Networks, Inc.


QFX5200 Port Speed Conversion
 QFX5200-32C will detect the inserted optic and convert the port automatically

 The 25/40/50/100GbE naming convention used is "et-x/y/z" and "et-x/y/z:0/1/2/3"

 On QFX5200-64Q, only 2 out of 4 rows can be used for 100GbE

 Once a 100GbE optic is inserted in row 1, port 1:
 It will come up automatically if there is no optic inserted in row 2, port 1
 If there is an optic inserted in row 2, port 1, then row 1, port 1 will stay inactive until row 2, port 1 is either disabled or its optic is unplugged
11 Copyright © 2015 Juniper Networks, Inc.
QFX5200 Multi-Speed Support Limitations

 There are 5 Ethernet speed classes: 10, 25, 40, 50, 100 Gbps

 Port configuration with more than 4 port speed classes is NOT supported

 The only supported configuration with 4 port speed classes is: 10, 25, 50, 100

 All port configurations with 1-3 classes are supported

12 Copyright © 2015 Juniper Networks, Inc.


QFX5200 25/100GbE Optic Support

100GbE Optics Model #      Description                  Production Time Line

JNP-QSFP-100G-LR4          Duplex SMF, up to 10KM       FRS
JNP-QSFP-100G-CWDM         Duplex SMF, up to 2KM        FRS+
JNP-QSFP-100G-PSM4         Parallel SMF, up to 2KM      FRS+
JNP-QSFP-100G-SR4          Parallel MMF, up to 100M     FRS
JNP-DAC-4X25G-1M - 3M      Copper 4x25G breakout        FRS
JNP-DAC-2X50G-1M - 3M      Copper 2x50G breakout        FRS
JNP-100G-DAC-1M - 3M       Copper QSFP28 to QSFP28      FRS
JNP-100G-AOC-1M - 30M      Optical QSFP28 to QSFP28     FRS+

13 Copyright © 2015 Juniper Networks, Inc.


Broadcom Tomahawk Hardware Data

Raw I/O Bandwidth 3200 Gbps


Throughput 2.13 Tbps @ 64 bytes
3.2 Tbps @ >250 bytes
Packet Buffer 16 MB (4 x 4 MB)
Overlay Routing Not supported
Analytics in Hardware Yes
Latency 300 ns
Unified Forwarding Table 128k
Queues per Port 10
VXLAN/NVGRE Support Yes
MPLS Label Switching Yes
Internal Loopback Bandwidth 100 Gbps

14 Copyright © 2015 Juniper Networks, Inc.


RFC2544 Throughput vs. Packet Size
[Chart: RFC2544 throughput (Gbps) vs. packet size (B); throughput rises to the 3200 Gbps line rate as packet size increases.]

Mixed packet size results:

% of 64 Byte Packets    % of 1536 Byte Packets    Line Rate Performance
25                      75                        Yes
50                      50                        Yes
75                      25                        Yes

15 Copyright © 2015 Juniper Networks, Inc.


Tomahawk ASIC Low Latency vs. Feature-Rich
Mode Of Operation
The ASIC supports three cumulative modes of operation. Only the "All Features" mode will be supported at FRS; expected latency will be around 500 ns.

Baseline L2 (300 ns)
 Parse L2 / HiGig header
 Port-based, single VLANs
 Spanning Tree
 CPU packet injection
 Loopback path

L2 + Light L3 (+100 ns)
 Parse L3 host & LPM lookups
 1-stage ACLs (IFP)
 ECMP
 Flex counters

All Features (+100 ns)
 Tunneling
 NIV/802.1BR
 Virtual ports
 MAC/subnet-based VLANs
 Algorithmic LPM
 3-stage ACLs
 Hierarchical ECMP
 Mirroring
 NAT

16 Copyright © 2015 Juniper Networks, Inc.


QFX5200 Unified Forwarding Table
The UFT (Unified Forwarding Table) is shared between L2 MAC, L3 host, and LPM entries. Profiles 1-5 are the same as on QFX5100. Profile 6 is Tomahawk-specific; its primary use case is OpenFlow. (A profile selection sketch follows this slide.)

Profile                                L2 MAC    L3 Host    LPM     ACL-EM
Profile 1: l2-heavy-one                136K      8K         16K     -
Profile 2: l2-heavy-two                104K      40K        16K     -
Profile 3: l2-heavy-three (default)    72K       72K        16K     -
Profile 4: l3-heavy                    40K       104K       16K     -
Profile 5: lpm-heavy*                  8K        8K         128K    -
Profile 6: filter mode (post-FRS)      8K        8K         16K     64K

ACL-EM matching conditions cannot have wildcards, only Exact Match conditions defined through a Template in the CLI. ACL-EM is not supported at FRS.

17 Copyright © 2015 Juniper Networks, Inc.
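
A minimal sketch of selecting a UFT profile. The profile keyword below is an assumption: on QFX5100 the chassis forwarding-options keywords are of the form l2-profile-one, l3-profile, lpm-profile, so verify the exact names available on QFX5200:

    # Illustrative: carve the UFT toward the l3-heavy profile
    set chassis forwarding-options l3-profile
    commit
    # Depending on the platform, the new table carving may only take
    # effect after a PFE restart or reboot.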


QFX5200 UFT Customer Profile

 Allows configuring the MAC, L3 Host, and ALPM tables with customer-defined sizes
 A total of 4 shared memory banks can be used
 A commit check validates the total table size in custom mode

Example configuration:

root> configure
Entering configuration mode
{master:0}[edit]
root# show chassis forwarding-options
custom-profile {
    l2-entries {
        num-banks 0;
    }
    l3-entries {
        num-banks 2;
    }
    lpm-entries {
        num-banks 2;
    }
}

Resulting scale:

root> show chassis forwarding-options
UFT Configuration: custom-profile
Configured custom scale:
Entry type                   Total scale(K)
L2(mac)                      8
L3(unicast & multicast)      72
Exact Match                  0
Longest Prefix Match(lpm)    80    (num-65-127-prefix = 1K)

--------------Bank details for various types of entries------------------
Entry type                   Dedicated Bank Size(K)    Shared Bank Size(K)
L2(mac)                      8                         32 * num shared banks
L3(unicast & multicast)      8                         32 * num shared banks
Exact Match                  0                         16 * num shared banks
Longest Prefix Match(lpm)    16                        32 * num shared banks
18 Copyright © 2015 Juniper Networks, Inc.


Features and Multi Dimensional Scale

Platform features: 16 MB packet buffer, 128 x 25GbE, PTP, congestion monitoring (100 usec), ZTP, ISSU, guest workload support, ECMP monitoring, enhanced ECMP load balancing, MPLS.

Feature                Scale
UFT scale              128K
L2 MACs                136K
LPM                    128K
L3 host scale          84K
MPLS labels            16K
ECMP                   64-way
Filter rules           64K w/exact match
VRF scale              2K
Multicast groups       16K
L3 VPNs                2K
19 Copyright © 2015 Juniper Networks, Inc.


QFX5200 CPU-PFE Internal Rate Limiters
Hostbound packets are assigned a CPU code.
The CPU code is mapped to one of 43 CMIC queues.
Each CMIC queue has burst, packet rate limit, and priority parameters that are applied when the packet is DMAed to the host CPU's packet descriptor memory.
There are 3 DMA channels on the receive DMA, which allows mapping the CMIC queues into 3 channels.
These three channels are mapped to high, medium, and low priority traffic.

[Diagram: PFE queues Q1..Q43 feeding the CPU]

20 Copyright © 2015 Juniper Networks, Inc.


QFX5200 Changing CPU-PFE Internal
Rate Limiters
 Rate and burst can be configured at the CMIC queue level (a configuration sketch follows this slide)

 DDoS protection is used to accomplish this

 The policer is installed on the Broadcom ASIC (single PFE), so no policers are required at the RE kernel and PFE uKernel

22 Copyright © 2015 Juniper Networks, Inc.
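
A minimal sketch of tuning a host-bound rate limiter via DDoS protection, assuming the QFX5200 exposes the standard Junos ddos-protection hierarchy (the protocol group "arp" and the rate/burst values are illustrative; check which groups the platform supports):

    # Illustrative: limit host-bound ARP to 1000 pps with a 500-packet burst
    set system ddos-protection protocols arp aggregate bandwidth 1000
    set system ddos-protection protocols arp aggregate burst 500
    commit

    # Verify the programmed policer
    show ddos-protection protocols arp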


QFX5200 Port Mapping

[Diagram: MMU buffer split into four 4 MB slices, one per pipe (Pipe0-Pipe3); each pipe serves 32 x 10/25GbE, 8 x 40GbE, 16 x 50GbE, or 8 x 100GbE ports.]

 The entire 16 MB buffer in the MMU is partitioned into 4 slices

 A group of 32 x 10GbE or 8 x 40GbE or 8 x 100GbE ports can use shared buffer from its 4 MB slice
23 Copyright © 2015 Juniper Networks, Inc.
QFX5200 Pipeline Block Diagram
Ingress Pipeline (4 in Total)

Intelligent Parser | Tunnel Termination | VLAN Processing | L2 Switching | L3 Routing | ICAP

Memory Management Unit

Egress Pipeline (4 in Total)

Packet Modification Engine | Egress ACL Content Aware | EGR VLAN Processing | Egress Parsing

24 Copyright © 2015 Juniper Networks, Inc.


QFX5200 Pipeline Block, Intelligent Parser
Ingress Pipeline (4 in Total)

Intelligent Parser | Tunnel Termination | VLAN Processing | L2 Switching | L3 Routing | ICAP

 Examines ingress packets from all physical ports (Ethernet, HiGig, CPU management interface controller)

 The parser extracts information from the first 128 bytes of a packet (L2 header, EtherType, L3 header, TCP/IP protocols, etc.)

 The parser stores this information for the various search engines that require it

25 Copyright © 2015 Juniper Networks, Inc.


QFX5200 Pipeline Block (Cont.)
Ingress Pipeline (4 in Total)

Intelligent Parser | Tunnel Termination | VLAN Processing | L2 Switching | L3 Routing | ICAP

 Tunnel Termination block determines whether the device must terminate


incoming tunnel packets (VXLAN, MPLS, IP etc.) before the next step

 VLAN Processing block may filter VLAN packet content using


ContentAware Processors (CAP) or can do Ingress VLAN translation on
incoming packet

26 Copyright © 2015 Juniper Networks, Inc.


QFX5200 Pipeline Block, L2 Switching Block
Ingress Pipeline (4 in Total)

Intelligent Parser | Tunnel Termination | VLAN Processing | L2 Switching | L3 Routing | ICAP

 L2 Logic performs:
 VLAN/priority assignment
 MAC DA lookup
 MAC SA lookup for hardware-based learning (only one packet from a
particular source address and VLAN ID is sent to CPU)
 VLAN type selection
 VLAN lookup
 L2 multicast lookup
27 Copyright © 2015 Juniper Networks, Inc.
QFX5200 Pipeline Block, L3 Routing Block
Ingress Pipeline (4 in Total)

Intelligent Parser | Tunnel Termination | VLAN Processing | L2 Switching | L3 Routing | ICAP

 Source and destination lookup for IPv4 and IPv6 unicast packets

 Source and destination lookup for IPv4 and IPv6 multicast packets

 Longest prefix match (LPM)

 Strict and loose uRPF checks

28 Copyright © 2015 Juniper Networks, Inc.


QFX5200 Pipeline Block, CAP Block

Ingress Pipeline (4 in Total)

Intelligent Parser | Tunnel Termination | VLAN Processing | L2 Switching | L3 Routing | ICAP

 ContentAware processing is designed to support:


 Firewall filters
 Differentiated services
 QoS-type applications on ingress and egress
 DoS attack detection
 Programmable packet processing

29 Copyright © 2015 Juniper Networks, Inc.


QFX5200 Pipeline Block, MMU

 MMU performs the following:


 Absorbs packet streams from ports at aggregated max bandwidth of
3.2 Tbps
 Accounts for capacity usage by ingress and egress
 Queuing/scheduling
 Packet shaping
 Supports up to 9416 byte Jumbo frames
 Allows CPU traffic to and from any port
 Headroom monitor to optimize buffer usage for lossless traffic
 ECC on all control memories
 ECC on packet memory for 1-bit and 2-bit error

30 Copyright © 2015 Juniper Networks, Inc.
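
As a usage note for the jumbo frame support above, a minimal sketch of raising the interface MTU (the 9216 value and interface name are illustrative; the slide states the MMU itself supports frames up to 9416 bytes):

    set interfaces et-0/0/0 mtu 9216
    commit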


QFX5200 Pipeline Egress

Packet Modification Engine | Egress ACL Content Aware | EGR VLAN Processing | Egress Parsing

 Egress parser parses packets from MMU in a similar way to the ingress
pipeline
 Egress VLAN translation block enables VLAN tag processing to add,
remove or replace tags in outgoing packets
 ECAP is used to filter packet content on egress
 The Packet Modification Engine handles:
 Tunneling
 L3 routed packet modification

31 Copyright © 2015 Juniper Networks, Inc.


QFX5200 MPLS

 Supports pop, swap, push, and PHP operations

 Supports up to 2 label lookups per packet within a single pass

 Supports pushing up to 3 labels

 Supports popping up to 2 labels

 Supports swapping 1 label and pushing 1 label

 MPLS ECMP support on P router

32 Copyright © 2015 Juniper Networks, Inc.
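
A minimal sketch of enabling the MPLS forwarding plane on an interface, using standard Junos stanzas (the interface name is illustrative; label distribution, e.g. BGP-LU as in the next slide, is configured separately):

    set interfaces et-0/0/10 unit 0 family mpls
    set protocols mpls interface et-0/0/10.0
    commit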


QFX5200 in MPLS DC

[Diagram: MPLS data center fabric with spine switches SS1/SS2, leaf switches LS1/LS2, and servers S1-S6 hosting VM1-VM6; vSwitches/vNICs exchange IP payloads carried across the fabric with an MPLS label stack (static LS2 label plus BGP-LU label).]

 1) The static LS2 label is swapped with a BGP-LU label

 2) MPLS P ECMP is required to use all available paths

 Trident2 (used in QFX5100) cannot do ECMP when used as a P device: only one spine would be used in this particular use case, and only a limited workaround is possible

 QFX5200 can natively support ECMP on a P device

33 Copyright © 2015 Juniper Networks, Inc.


QFX5200 MPLS Scale

 16k PW labels

 16k MPLS label lookups

 1k L3 VPNs

34 Copyright © 2015 Juniper Networks, Inc.


Hierarchical ECMP Load Balancing (ASIC)
[Diagram: ECMP Group 1 resolves to Tunnels 1-3; each tunnel resolves to an ECMP group of links (A1-A4, B1-B3, C1-C2).]

Hierarchical ECMP resolution:
 Resolve the tunnel
 Resolve the link within the tunnel

The default is non-hierarchical ECMP: a single table with 2k routes and 16k ECMP members.

Each level of H-ECMP is implemented as a separate ECMP group table (1k routes) and member table (8k ECMP members). (A load-balancing configuration sketch follows this slide.)

36 Copyright © 2015 Juniper Networks, Inc.
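
For context, a minimal sketch of enabling ECMP load balancing in the forwarding table with standard Junos policy (the hierarchical vs. flat ECMP behaviour described above is a property of the ASIC programming, not of this policy; the policy name is illustrative):

    set policy-options policy-statement ECMP-LB then load-balance per-packet
    set routing-options forwarding-table export ECMP-LB
    commit
    # Note: despite the "per-packet" keyword, Junos applies per-flow hashing.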


Load Distribution Monitors (ASIC), FRS+
Measures load distribution among link members in a LAG or ECMP group.

Parameters:
 Monitor start time
 Monitor duration
 Time scale units: 100 us, 1 ms, 10 ms, 100 ms, 1 s
 Number of duration cycles over which to collect the data

Packet and byte utilization values are stored in flex counters.

[Diagram: ECMP Group 1 → Tunnels 1-3 → link members, as on the previous slide.]

37 Copyright © 2015 Juniper Networks, Inc.


Visibility Through Trace Packets
Trace Packets:
 Injected through the CPU PCIe interface
 Launched as a normal front-panel packet
 Follow the normal data path but carry a "trace packet" flag

Data Captured:
 Metadata is inserted at each pipeline stage
 Forwarding tables used in the switching decision
 LAG / ECMP link selected

[Diagram: 1. Inject trace packets from the CPU → 2. Trace packet follows its virtual path through the ingress/egress pipelines and packet buffers → 3. Trace packet is captured → 4. Software reads the trace log.]

38 Copyright © 2015 Juniper Networks, Inc.


Visibility Through Trace Packets (Cont.)
Trace Packet Generation:
 Use a firewall filter (FF) to select packets
 The FF gets a new action, similar to trap-to-CPU
 An internal rate limiter of 1 pps is used for this action
 The CPU re-injects these packets back into the ingress pipeline and marks them as trace packets
 With this approach, external traffic generators can be used as well

[Diagram: same inject/capture flow as on the previous slide.]

39 Copyright © 2015 Juniper Networks, Inc.


sFlow on QFX5200-64Q / QFX5200-32C
Two flavors of ingress sampling:
 Sample destination is the local CPU
 Host path limited to 2k pps
 Sample destination is an agent on a remote system
 The sampled packet is encapsulated and port-mirrored to the destination
 Almost line rate (tunnel overhead needs to be taken into account)

Egress sampling:
 Sample destination is always the local CPU
 Host path limited to 2k pps

[Diagram: sampled traffic exported from sFlow agents to an sFlow collector for data analysis.]

40 Copyright © 2015 Juniper Networks, Inc.


sFlow Hierarchy
New CLI knob: inline-sampling (toggle)

protocols {
    sflow {
        polling-interval <number>;
        sample-rate <number>;
        inline-sampling;
        collector {
            ip-address <ip-address>;
            udp-port <port-number>;
        }
        interfaces <interface-name> {
            polling-interval <number>;
            sample-rate <number>;
        }
    }
}

41 Copyright © 2015 Juniper Networks, Inc.
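
A short usage sketch of the hierarchy above in set-command form (collector address, interface name, and rates are placeholders):

    set protocols sflow sample-rate 2048
    set protocols sflow polling-interval 20
    set protocols sflow inline-sampling
    set protocols sflow collector 192.0.2.10 udp-port 6343
    set protocols sflow interfaces et-0/0/1
    commit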
ACLs on QFX5200-64Q / QFX5200-32C
The Tomahawk pipeline can be operated in two modes.

Global Mode:
 Allows installing entries (for PACL) per pipeline
 Results in higher PACL scale
 Not supported at FRS

PerPipeLocal Mode:
 All entries are installed in all pipelines, irrespective of which pipe the bind point belongs to
 Only supported mode at FRS

42 Copyright © 2015 Juniper Networks, Inc.


ACLs on QFX5200-64Q / QFX5200-32C (Cont.)
Trident variants allow pairing of up to 2 slices; Tomahawk allows up to 3 slices to be paired.

Because Tomahawk slices have a narrower key width, most groups that occupied 2 slices on Trident need 3 slices on Tomahawk.

The scale for EFP and VFP filters is the same as on QFX5100: 1024 entries.

TCAM    Key Width    Slices
IFP     80/160       12
EFP     234          4
VFP     234          4

43 Copyright © 2015 Juniper Networks, Inc.


Slice Pairing Modes
IFP TCAM slices can be operated in any of the following modes.

Mode                            Key Width                         Scale
Single Wide Mode                80 bits * 1 slice                 12 * 512 = 6144
Single Wide Intra Slice Mode    160 bits * 1 slice                12 * 256 = 3072
Double Wide Intra Slice Mode    160 bits * 2 slices = 320 bits    6 * 256 = 1536
Triple Wide Intra Slice Mode    160 bits * 3 slices = 480 bits    4 * 256 = 1024

44 Copyright © 2015 Juniper Networks, Inc.


QFX5200 TCAM Scale
Each slice in IFP operates at 160-bit width, with a maximum of 256 entries per slice.
Each slice in EFP operates at 234-bit width, with a maximum of 256 entries per slice.

TCAM    Slices    Scale
IFP     12        12 * 256 = 3072
EFP     4         4 * 256 = 1024

45 Copyright © 2015 Juniper Networks, Inc.


Ingress ACLs on QFX5200

IFP-DYN: Triple Wide Intra Slice mode, 256 entries @ 480 bits width
IFP-BA:  Single Wide Intra Slice mode, 256 entries @ 160 bits width

IVACL/IPACL/IRACL share the same 8 slices:
 IPACL scale in absence of IVACL/IRACL: 512
 IVACL scale in absence of IPACL/IRACL: 512
 IRACL scale in absence of IPACL/IVACL: 1024

IVACL: Triple Wide Intra Slice mode, 256 entries @ 480 bits width
IPACL: Triple Wide Intra Slice mode, 256 entries @ 480 bits width
IRACL: Double Wide Intra Slice mode, 256 entries @ 320 bits width

(A firewall filter sketch follows this slide.)

46 Copyright © 2015 Juniper Networks, Inc.
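
A minimal sketch of an ingress port ACL (IPACL) applied to a Layer 2 interface, using standard Junos firewall filter syntax (filter name, MAC address, and interface are illustrative):

    set firewall family ethernet-switching filter L2-ACL term drop-host from source-mac-address 00:11:22:33:44:55/48
    set firewall family ethernet-switching filter L2-ACL term drop-host then discard
    set firewall family ethernet-switching filter L2-ACL term allow-rest then accept
    set interfaces et-0/0/5 unit 0 family ethernet-switching filter input L2-ACL
    commit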


QFX5200 IPv6 ACLs and Compression
The 480-bit key width might not be enough to accommodate all matching conditions for IPv6.
Compression is used for source and destination IPv6 addresses (a special table in TCAM).
Compression is ALWAYS enabled (it cannot be disabled).
Across all filters, a maximum of 128 source IPv6 addresses and 128 destination IPv6 addresses can be used in total.

47 Copyright © 2015 Juniper Networks, Inc.


Software Architecture (JDM post-FRS)

Software Architecture Highlights
 Increase platform velocity (i.e. time to market)
 Versatile container and VM support
 Platform and PFE functions will be independent of Junos
 Flexible architecture support (ToRs, security devices, chassis)
 Improve performance via multicore CPU
 Allow multiple toolchain types via JDM
 Support programming via multiple APIs (x86, merchant ASIC, Juniper ASIC)

[Diagram: Wind River Linux 7 host with ONIE on x86/merchant-ASIC hardware; JDM (Juniper Device Manager) hosting active/standby JCP (Junos Control Plane) VMs on FreeBSD, JSF (Juniper Service Function, e.g. vSRX), and third-party VMs/containers/VNFs (Ubuntu Linux, Nagios, Graylog, OpenFlow, OpenStack, automation, Phone Home, ZTP, Puppet, Chef, analytics); hardware abstraction layer, j-ovswitch, Junos platform software/SDK and packet forwarding drivers, and system services.]
48 Copyright © 2015 Juniper Networks, Inc.


Software Summary
Use case/Feature set QFX5200-32C/QFX5200-64Q
IP fabric feature set FRS
Layer 2/L3 access with MC-LAG FRS
Resilient hashing and user defined hashing for FRS
LAG/ECMP
Load distribution monitoring for LAG and ECMP Post FRS
Supported in Junos Fusion Post FRS
EVPN/VXLAN Post FRS
OVSDB/VXLAN Post FRS
PTP w/time stamping Post FRS
25,50G FRS
MPLS FRS (LSR ECMP)
ISSU Post FRS
ND, ZTP, Puppet, Chef FRS
Analytics push model Post FRS
sFlow in-line sampling FRS
Fast Reboot (< 1 min) FRS

49 Copyright © 2015 Juniper Networks, Inc.


10/25/50GbE Access and Fixed Configuration
Spine
[Topology diagram: fabric built from QFX10000 and QFX5200-32C switches.]

 25GbE ToR
 Fixed 100GbE spine
 64-way ECMP (an underlay sketch follows this slide)
 Advanced L3
 VXLAN L2 gateway with OVSDB/EVPN
 MPLS forwarding plane with BGP-LU and ISIS-SR

50 Copyright © 2015 Juniper Networks, Inc.
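
A minimal sketch of the 64-way ECMP underlay pieces implied above, assuming a standard eBGP IP fabric (the group name and the maximum-ecmp knob are assumptions to verify on this platform; the forwarding-table load-balancing policy from the earlier ECMP sketch also applies):

    set chassis maximum-ecmp 64
    set protocols bgp group UNDERLAY type external
    set protocols bgp group UNDERLAY multipath multiple-as
    commit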


QFX5200: What it is NOT

VCF

VC

Junos Fusion Aggregator

Metro Ethernet
51 Copyright © 2015 Juniper Networks, Inc.


Thank you
QFX5200-64Q/QFX5200-32C Port Mapping

 The entire 16 MB buffer in the MMU is partitioned into 4 slices

 Each XPE (Cross-Point Element) gets a dedicated 4 MB

 A group of 32 x 10GbE or 8 x 40GbE or 8 x 100GbE ports can use shared buffer from its 4 MB slice

53 Copyright © 2015 Juniper Networks, Inc.
