ACI Secure ADC Design Guide
Introduction
The document discusses design considerations and deployment options for Cisco ACI with Cisco Secure ADC,
an advanced Application Delivery Controller (ADC), from three aspects: network design, ADC design, and
multitenant design. This document covers features up to Cisco ACI Release 5.1.
One of the key considerations of the network design with an ADC is to ensure that incoming and return traffic
go through the same device. There are various options you can use to insert the application delivery controller
in the data center. One way is to use the load balancer as the gateway for servers. Another option is to use
Source Network Address Translation (SNAT) on the ADC to ensure that traffic is returned back to it. In an ACI
infrastructure, Policy-Based Redirect (PBR) can also be used to make the return traffic go back to the load
balancer.
The advantage of leveraging the ACI PBR functionality is that it makes the overall configuration simpler and
allows for easy changes to the environment. For example, if an organization wanted to add an additional
security device such as a web application firewall to the path to inspect the traffic, this could be done with ACI
PBR functionality without requiring any changes to the rest of the environment.
Cisco Secure ADC can be deployed in different high-availability modes. This document will cover the two
common Cisco Secure ADC deployment modes: active/active and active/standby. Various design
considerations, such as endpoint movement during failovers, virtual MAC, source MAC-based forwarding, Link
Layer Discovery Protocol (LLDP), and IP aging will be discussed around each of the deployment modes.
Multitenancy is supported by both Cisco ACI and Cisco Secure ADC in different ways. This document will cover
a few ways that multitenancy constructs on ACI can be mapped to multitenancy on Secure ADC. The discussion
will address tenants, Virtual Routing and Forwarding (VRF), virtual ADCs, and multitenancy, based on which
Secure ADC form factor you use.
The Cisco Application Policy Infrastructure Controller (APIC) is used to manage the ACI fabric. The F5 ACI
ServiceCenter is an application that runs on the APIC and augments the integration between ACI and
Cisco Secure ADC.
Prerequisites
To best understand the network design presented in this document, you should have basic knowledge of Cisco
ACI and Cisco Secure ADC.
Cisco ACI offers the capability to insert Layer 4–7 services, such as firewalls, load balancers, and Intrusion
Prevention Systems (IPSs), using a feature called a service graph. For more information, refer to the Cisco ACI
service-graph-design white paper at https://www.cisco.com/c/en/us/solutions/collateral/data-center-
virtualization/application-centric-infrastructure/white-paper-c11-734298.html.
The service graph functionality can then be enhanced by associating to Policy-Based Redirect (PBR) policies.
For more detailed information on PBR, refer to the Cisco ACI PBR white paper:
https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/white-
paper-c11-739971.html.
For other load balancing concepts, such as virtual load balancing and cloud load balancing, see
https://www.radware.com/products/load-balancing-application-delivery/.
Terminology
This document uses a number of terms with which you should be familiar. In particular, the terms Load
Balancer (LB) and Application Delivery Controller (ADC) are used interchangeably, because an ADC is an LB
with advanced application features and functionality.
Figure 1.
Cisco ACI topology
Cisco Nexus 9000 series switches that support ACI spine or leaf mode can be found at:
https://www.cisco.com/c/en/us/products/switches/nexus-9000-series-switches/index.html.
The minimum ACI fabric design should have two spine nodes, two leaf nodes, and three APICs. (Figure 1
illustrates four spine nodes and six leaf nodes.) The fabric design can scale up to 500 leaf nodes per ACI fabric.
See the latest ACI verified scalability guide for details: https://www.cisco.com/c/en/us/support/cloud-systems-
management/application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html.
Although Figure 1 shows separate leaf node pairs for the APIC cluster, servers, storage, and other connectivity,
it is not mandatory to use dedicated leaf nodes. Even though there is no specific role configuration on each leaf,
a leaf connected to an external network is called a border leaf.
The fundamental security architecture of the ACI solution follows a whitelist model. A contract is a policy
construct used to define communication between Endpoint Groups (EPGs). Without a contract between EPGs,
no unicast communication is possible between those EPGs by default. A contract is not required to allow
communication between endpoints in the same EPG.
Figure 2.
EPGs and contracts
An EPG provides or consumes (or provides and consumes) a contract. For instance, the App EPG in the
example in Figure 2 provides a contract that the Web EPG consumes and consumes a contract that the DB EPG
provides.
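To make the contract model concrete, the following Python sketch (an illustration only, not configuration taken from this guide) uses the APIC REST API to create a contract and attach it to the App EPG as provider and the Web EPG as consumer, as in Figure 2. The APIC address, credentials, tenant name, application profile name, and contract name are hypothetical.

```python
# Illustrative sketch only: creating a contract and attaching it to the App EPG as
# provider and the Web EPG as consumer (as in Figure 2). Names, addresses, and
# credentials are hypothetical examples.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
session = requests.Session()
session.verify = False              # lab only; use proper certificates in production

# Authenticate to the APIC (standard aaaLogin call)
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Contract "web-to-app" with one subject that references an (example) default filter
contract = {
    "vzBrCP": {
        "attributes": {"name": "web-to-app"},
        "children": [
            {"vzSubj": {
                "attributes": {"name": "any"},
                "children": [
                    {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "default"}}}
                ]}}
        ],
    }
}
session.post(f"{APIC}/api/mo/uni/tn-DemoTenant.json", json=contract).raise_for_status()

# App EPG provides the contract, Web EPG consumes it
epg_base = f"{APIC}/api/mo/uni/tn-DemoTenant/ap-app-profile"
session.post(f"{epg_base}/epg-App.json",
             json={"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-app"}}}).raise_for_status()
session.post(f"{epg_base}/epg-Web.json",
             json={"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}}).raise_for_status()
```

Without the provider and consumer bindings shown above, no unicast communication between the Web and App EPGs is permitted, which reflects the whitelist model described earlier.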
An endpoint can belong to only one EPG. Physical, virtual, and container endpoints can coexist in the same EPG.
The EPG to which an endpoint belongs is determined by the following EPG types:
Figure 3 illustrates ACI logical network design constructs. The tenant is a logical entity to construct EPGs,
contracts, and network components for EPGs. Each EPG belongs to a Bridge Domain (BD) that is a broadcast
domain boundary in ACI. A BD belongs to a VRF.
In this example, Web and App EPGs are in the same BD (BD1) and DB EPG is in a dedicated BD (BD2). Unlike
traditional networks, multiple different EPGs can be in the same subnet as different security groups and zones.
Figure 4 provides an example using routing and bridging to insert a load balancer without a service graph.
Incoming traffic from an endpoint in the consumer EPG is routed by the ACI fabric to the VIP, because the VIP is
an ACI internal endpoint. If the gateway of the servers is the load balancer, the return traffic from an endpoint in
the provider EPG is simply bridged by the ACI fabric.
If the load balancer interface and the servers are not in the same subnet, the use of SNAT on the load balancer
can make the return traffic go back to the load balancer. Even if the use of a service graph is not mandatory in
this case, the use of service graph offers these advantages:
● ACI automatically manages VLAN deployment on the ACI fabric and the virtual networks for service node
connectivity.
● ACI automatically connects and disconnects virtual Network Interface Cards (vNICs) for virtual service
appliances using a Virtual Machine Manager (VMM). To see more on ACI VMM, please visit the following
link: https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-
infrastructure/white-paper-c11-731961.html.
● ACI provides a more logical view of service insertion between consumer and provider EPGs.
● ACI can redirect traffic to the service node without the need for the service node to be the default
gateway of the servers.
One of the main advantages of service graph is the PBR feature, which is helpful when inserting Layer 4–7
service devices. With this PBR feature, ACI redirects traffic matched with the contract without relying on routing
or bridging. For load balancer designs, PBR can be used for return traffic generated from the servers to make
the return traffic go back to a load balancer that doesn’t perform SNAT.
Figure 5 illustrates this with an example. The incoming traffic from an endpoint in a consumer EPG to VIP
doesn’t require PBR because it’s routed to the VIP that is also an ACI internal endpoint. For the return traffic
from an endpoint in the provider EPG, PBR is required if the load balancer didn’t perform SNAT on the incoming
traffic. Without PBR, traffic would directly go back to the consumer endpoint, which prevents the load balancer
from seeing both directions of the traffic.
For more detailed information on service graph design and PBR, refer to the following white papers:
● Service Graph Design with Cisco Application Centric Infrastructure White Paper:
https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-
infrastructure/white-paper-c11-734298.html.
● Cisco Application Centric Infrastructure Policy-Based Redirect Service Graph Design White Paper:
https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-
infrastructure/white-paper-c11-739971.html.
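As a rough, simplified illustration of the PBR building block (a full deployment also requires a service graph template and a device selection policy, which are out of scope here), the sketch below creates an L4-L7 Policy-Based Redirect policy whose destination is the load balancer's internal interface. The class names reflect common APIC usage and should be verified for your release; the APIC address, credentials, tenant name, IP, and MAC are hypothetical.

```python
# Illustrative sketch only: creating an L4-L7 Policy-Based Redirect (PBR) policy
# that redirects matched traffic to the load balancer's internal interface.
# Names, addresses, and credentials are hypothetical examples.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
session = requests.Session()
session.verify = False              # lab only; use proper certificates in production

# Authenticate to the APIC (standard aaaLogin call)
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# PBR policy with a single redirect destination (the LB internal interface)
pbr_policy = {
    "vnsSvcRedirectPol": {
        "attributes": {"name": "lb-internal-pbr"},
        "children": [
            {"vnsRedirectDest": {
                "attributes": {
                    "ip": "192.168.11.254",        # LB internal interface IP (example)
                    "mac": "00:50:56:AA:BB:CC",    # LB internal interface MAC (example)
                }}}
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni/tn-DemoTenant/svcCont.json", json=pbr_policy)
resp.raise_for_status()
# The policy is then referenced from the service graph's device selection policy so
# that return traffic from the servers is redirected to the LB instead of being routed
# directly back to the client.
```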
Some of the additional use cases for deploying a Secure ADC besides L4-L7 load balancing are:
● SSL offloading
● SSL interception, both inbound and outbound
● Traffic steering
● Web application firewall
● Application re-writing
● Application caching
● Application protection
● Application monitoring
● Application authentication
● Global load balancing
These topics are out of scope for this document, but are listed here for the reader to understand the additional
value that Cisco Secure ADC provides.
The Cisco Secure ADC is also available as a virtual appliance that can be deployed in both private and public
clouds.
● Incoming and return traffic go through the same load balancer, which is a stateful device.
● The traffic to the other VIP goes through a load balancer, and the return traffic from the servers goes directly
back to the client; this is called Direct Server Return (DSR).
The following list of questions will help the reader understand what is required:
● Is the load balancer deployed in Layer 2 or Layer 3 mode? (Cisco Secure ADC supports both Layer 2
and Layer 3.)
● How is the return traffic handled? Is the load balancer the gateway? Is the load balancer doing SNAT? Is
ACI PBR redirecting the traffic to the load balancer or is the load balancer deployed in DSR mode?
● What High-Availability (HA) option is used for the load balancer - active/standby HA pair, active/active
HA pair, or multiple HA pairs?
● Is the VIP in the same subnet range as the IP address of a load balancer interface (Cisco Secure ADC
calls it “ADC-IP”) or outside of the subnet range?
● What are the dynamic routing protocol requirements?
In this document, the assumption is that the load balancer is deployed in Layer 3 mode with active/standby HA
because this represents the majority of deployments.
● In the first example, on the left-hand side of the image, the load balancer is deployed in two-arm mode
and is the default gateway of the servers. SNAT or PBR is not required because the load balancer is in
the traffic path based on routing.
● In the second example, the load balancer is deployed in two-arm mode and is placed between two
different routers or VRFs: one is for external connectivity and the other is the gateway of servers. SNAT
or PBR is not required because the load balancer is in the traffic path based on routing.
● In the third example, the load balancer is deployed in two-arm mode in such a way that not all traffic from the
servers has to go through the load balancer itself. SNAT or PBR is required to make the return traffic go back to
the load balancer. If neither SNAT nor PBR is used, the return traffic would go back to the client directly and, as
a result, would be dropped by the client, because the source IP address of the return traffic (the server’s IP) is
different from the destination IP address of the incoming traffic sent by the client, which was the VIP. (The short
sketch after this list illustrates this client-side check.)
● In the fourth example, the load balancer is deployed in one-arm mode in such a way that not all traffic from
the servers has to go through the load balancer itself. SNAT or PBR is required to make the return traffic go
back to the load balancer. If neither SNAT nor PBR is used, the return traffic goes back to the client directly and
is dropped by the client, because the source IP address of the return traffic is different from the destination IP
address of the incoming traffic sent by the client. The load balancer interface can be in the same subnet as the
servers or in a different subnet. This design can be used for Layer 2 DSR, where the return traffic doesn’t go
back through the load balancer. For Layer 2 DSR, the load balancer and servers must be in the same subnet.
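The drop behavior described in the third and fourth examples can be illustrated with a small, purely conceptual Python sketch (not product code): a client that opened a connection to the VIP accepts only replies sourced from that VIP, so a reply arriving directly from the server's own IP address matches no client connection and is discarded. All addresses below are arbitrary examples.

```python
# Conceptual illustration: why return traffic that bypasses the load balancer is
# dropped by the client when neither SNAT nor PBR is used.
from typing import NamedTuple

class Flow(NamedTuple):
    src_ip: str
    dst_ip: str

# The client opened a connection to the VIP, so its connection table expects
# replies sourced from the VIP.
VIP = "192.168.10.100"
client_connections = {Flow(src_ip="172.16.1.10", dst_ip=VIP)}

def client_accepts(reply_src: str, reply_dst: str) -> bool:
    """A reply is accepted only if it reverses an existing client flow."""
    return Flow(src_ip=reply_dst, dst_ip=reply_src) in client_connections

# Without SNAT/PBR the server replies directly from its own IP: no matching flow, dropped.
print(client_accepts(reply_src="10.10.10.11", reply_dst="172.16.1.10"))  # False

# With SNAT or PBR the reply returns through the LB and is sourced from the VIP: accepted.
print(client_accepts(reply_src=VIP, reply_dst="172.16.1.10"))            # True
```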
The following subsections explain how to translate the typical load balancer design options just described into
ACI network constructs. Table 1 summarizes the comparison of the design options in a Cisco ACI fabric.
In these examples, the load balancer’s external interface IP and the VIP are in the same subnet. For the case
where they are not in the same subnet, refer to the VIP outside of the self IP subnet range section. Even if the
examples reference north-south traffic flows, which is traffic from the outside to internal servers through a VIP,
the same design considerations can also be applied to east-west traffic flows, which is traffic from internal
servers to other internal servers through a VIP.
Table 1. Comparison of the load balancer design options in a Cisco ACI fabric

Design option | How return traffic goes back through the LB | Network design | Pros | Cons
Two-arm (inline), LB as gateway | LB is the gateway for the servers | Use the LB as the gateway for the servers associated to the VIP | Simple network design | Inter-subnet traffic must go through the load balancer
Two-arm (inline), fabric as gateway | LB as routing next hop (VRF sandwich) | Use the ACI fabric as the gateway for the servers associated to the VIP; the LB is the routing next hop of the ACI fabric | Take advantage of the ACI anycast gateway | Need to manage two VRFs
Two-arm LB, fabric as gateway | SNAT or PBR | Use the ACI fabric as the gateway for the LB and for the servers associated to the VIP; use SNAT or PBR to make the return traffic go back through the LB | Take advantage of the ACI anycast gateway; selective traffic redirection by using PBR | Service graph is mandatory to use PBR
One-arm LB, fabric as gateway* | SNAT or PBR | Use the ACI fabric as the gateway for the LB and for the servers associated to the VIP; use SNAT or PBR to make the return traffic go back to the LB | Take advantage of the ACI anycast gateway; selective traffic redirection by using PBR | Service graph is mandatory to use PBR

* This design can be used for Layer 2 DSR, where the return traffic doesn’t go back through the load balancer (the details are not covered in this document).
Figure 7.
Two-arm (inline) load balancer as gateway
The “LB-Ext” bridge domain for the load balancer’s external interface has the bridge domain subnet that is the
gateway for the load balancer to the external network through L3Out. The “Web” bridge domain for the load
balancer’s internal interface and the servers doesn’t have a bridge domain subnet because the load balancer’s
internal IP address is the gateway for the servers. The L3Out connected to the external network has the L3Out
EPG “External” with the external network subnets that are allowed to access the load balancer’s VIP in the “LB-
Ext” bridge domain.
The traffic coming from the external network arrives in the ACI fabric and is routed to the VIP (192.168.10.100)
because the VIP is an ACI local endpoint in the “LB-Ext” bridge domain. The traffic is then load balanced to one
of the servers associated to the VIP. The return traffic from the server arrives on the load balancer’s internal
interface because it is the gateway of the servers. The load balancer then routes the traffic back to the ACI
fabric that is the gateway of the load balancer to the external network.
● Two contracts — One is between the L3Out EPG “External” for the external network and the EPG “LB-
Ext” for the load balancer’s external interface, and the other is between the EPG “LB-In” for the load
balancer’s internal interface and “Web” EPG for the servers. All EPGs are created by a user.
● One contract — If there is no security requirement, the load balancer’s internal interface and the servers
can be combined into one EPG instead of different EPGs with a contract. All EPGs are created by a user.
● Service graph — Use service graph on a contract between the L3Out EPG “External” for the external
network and “Web” EPG. The EPGs (called “internal service EPGs” or “shadow EPGs”) for the load
balancer’s external and internal interfaces are automatically created through service graph rendering.
The internal service EPGs are not displayed in the GUI, and the user doesn’t need to manage them.
Figure 8.
Two-arm (inline) load balancer as gateway (ACI network and contract design)
● The load balancer’s internal interface and the EPG for the servers are in the same bridge domain (ACI is
used for bridging).
● ACI can be used as the next hop for the external side of the load balancer.
● All inter-subnet traffic goes through the load balancer.
● SNAT or PBR is not required.
● Service graph is not mandatory.
Figure 9 provides an example of this configuration. In this example, the load balancer’s VIP and the load
balancer’s external interface IP are in the same subnet.
Figure 9.
Two-arm (inline) load balancer with the fabric as gateway
The external-facing VRF, “VRF1,” has the L3Out connected to the external network and the L3 bridge domain,
“LB-Ext,” that has the gateway for the load balancer to the external network. The L3Out connected to the
external network has the L3Out EPG “External” with the external network subnets that are allowed access to the
load balancer’s VIP in the “LB-Ext” bridge domain. The internal-facing VRF, “VRF2,” has the L3 bridge domain,
“Web,” which is the gateway for the servers, and the L3Out “LB-In” for the load balancer’s internal interface
connectivity. The L3Out “LB-In” has the L3Out EPG “LB-In” with the external network subnets that are allowed
to access the servers through the load balancer.
If the load balancer does SNAT and uses the load balancer’s internal IP subnet range as NATed IP, the load
balancer’s internal interface can be in a bridge domain instead of an L3Out because the NATed IP is a local
endpoint IP in VRF2 that doesn’t require an additional route.
Figure 10 illustrates the contract configuration for this design. To permit end-to-end traffic, one of the following
configurations is required:
● Two contracts — One is between the L3Out EPG “External” for the external network and the EPG “LB-
Ext” for the load balancer’s external interface, and the other is between the L3Out EPG “LB-In” for the
load balancer’s internal interface and the “Web” EPG for the servers. All EPGs are created by a user.
● Service graph — Use of service graph on a contract between the L3Out EPG “External” for the external
network and the “Web” EPG. The L3Out EPG “LB-In” for the load balancer’s internal interface needs to
be created separately and is selected in the service graph. The EPG “LB-Ext” (an internal service EPG) for
the load balancer’s external interface is automatically created through service graph rendering. The
internal service EPG is not displayed in the GUI, and the user doesn’t need to manage it.
Figure 10.
Two-arm (inline) load balancer with fabric as gateway (ACI network and contract design)
Figure 11 provides an example of this scenario. This example consists of three bridge domains: one is for the
external interface of the load balancer, another is for the internal interface of the load balancer, and the third is
for the servers. If the servers and the internal interface of the load balancer are in the same subnet, the two
bridge domains can be combined to one bridge domain. In this example, the load balancer’s VIP and the load
balancer’s external interface IP are in the same subnet.
Figure 11.
Two-arm (inline) load balancer with fabric as gateway
The traffic coming from the external network arrives on the ACI fabric and is routed to the VIP (192.168.10.100)
because the VIP is an ACI local endpoint in the “LB-Ext” bridge domain. The traffic is then load balanced to one
of the servers associated to the VIP. The load balancer must have a route to the server subnet
(10.10.10.0/24) through the “LB-In” bridge domain subnet IP. The return traffic from the server arrives on the
“Web” bridge domain subnet IP because it is the gateway for the servers. SNAT or PBR is required to make the
return traffic go back to the load balancer. If SNAT is enabled on the load balancer, the destination IP of the
return traffic is an IP in the “LB-In” bridge domain that is owned by the load balancer (for example,
192.168.11.10), so the return traffic is routed and sent to the load balancer’s internal interface. If PBR is
enabled for the return traffic, PBR is applied on the traffic from the “Web” EPG to L3Out EPG “External.” As a
result, that traffic is redirected to the load balancer’s internal interface. The load balancer then routes the return
traffic back to the ACI fabric “LB-Ext” bridge domain subnet IP that is the gateway of the load balancer to the
external network.
Figure 12 illustrates the contract configuration for this design. To permit end-to-end traffic, one of the following
configurations is required:
● Two contracts (SNAT on the load balancer) — One is between the L3Out EPG “External” for the external
network and the EPG “LB-Ext” for the load balancer’s external interface, and the other is between the
EPG “LB-In” for the load balancer’s internal interface and the “Web” EPG.
● Service graph (PBR for return traffic) — Use service graph PBR on a contract between the L3Out EPG
“External” for the external network and the “Web” EPG. The EPGs (called “internal service EPGs” or
“shadow EPGs”) for the load balancer’s external and internal interfaces are automatically created
through service graph rendering. The internal service EPGs are not displayed in the GUI, and the user
doesn’t need to manage them.
Figure 12.
Two-arm (inline) load balancer with fabric as gateway (ACI network and contract design)
● ACI provides routing for the servers and the load balancer; it is their default gateway or routing next hop.
● PBR or SNAT is required.
● The service device can be in the same bridge domain as the servers or in a different bridge domain.
● If PBR is used to make the return traffic go back to the load balancer, a service graph is mandatory, and
specific traffic is redirected to the load balancer’s internal interface.
● If SNAT is used to make the return traffic go back to the load balancer, the NATed IP must be in the load
balancer’s internal side subnet range.
Note: This design can also provide Layer 2 DSR, where the return traffic from the servers goes directly
back to the client without going through the load balancer. The Layer 2 DSR design requires that the load
balancer and servers be in the same subnet, that the VIP be configured as a loopback address on the
servers, and that the servers be configured not to answer ARP requests for the VIP. This document mainly
explains designs with SNAT or PBR. Refer to the ACI Fabric Endpoint Learning white paper for Layer 2 DSR
design considerations in ACI.
Figure 13 provides an example of this scenario. This example consists of one bridge domain for the load
balancer interface and the servers. The load balancer’s VIP and the load balancer’s external interface IP are in
the same bridge domain subnet in this example, but they can be in different bridge domains if needed.
Figure 13.
One-arm load balancer with fabric as gateway
The traffic coming from the external network arrives on the ACI fabric and is routed to the VIP (10.10.10.100)
because the VIP is an ACI local endpoint in the “Web” bridge domain. The traffic is then load balanced to one of
the servers associated to the VIP. The return traffic from the server arrives on the “Web” bridge domain subnet
IP because it is the gateway for the servers. SNAT or PBR is required to make the traffic return back to the load
balancer. If SNAT is enabled on the load balancer, the destination IP of the return traffic is the IP in the “Web”
bridge domain, which is owned by the load balancer (for example, 10.10.10.10). ACI bridges the traffic from
the servers to the load balancer. If PBR is enabled for the return traffic, PBR is applied on the traffic from the
“Web” EPG to the L3Out EPG “External.” ACI redirects the traffic to the load balancer’s internal interface. The
load balancer then routes the return traffic back to the subnet IP address of the “Web” bridge domain.
Figure 14 illustrates the contract configuration for this design. To permit end-to-end traffic, one of the following
configurations is required:
● Two contracts (SNAT on the load balancer) — One is between the L3Out EPG “External” for the external
network and the EPG “LB” for the load balancer’s interface, and the other is between the EPG “LB” for
the load balancer’s interface and “Web” EPG.
● One contract (SNAT on the load balancer) — If the interface of the load balancer and the servers are in the
same bridge domain and there is no security requirement, they can be combined into one EPG instead of
being in different EPGs with a contract.
● Service graph with PBR for return traffic — This design uses service graph PBR on a contract between
the L3Out EPG “External” for the external network and the “Web” EPG. The EPG (called “internal service
EPG” or “shadow EPG”) for the load balancer interface is automatically created through service graph
rendering. The internal service EPG is not displayed in the GUI, and the user doesn’t need to manage it.
Figure 14.
One-arm load balancer with fabric as gateway (ACI network and contract design)
● ACI provides routing for the servers and the load balancer; it is their default gateway or routing next hop.
● PBR or SNAT is required.
● The service device can be in the same bridge domain as the servers or in a different bridge domain.
● If PBR is used to make the return traffic go back to the load balancer, a service graph is mandatory and
specific traffic is redirected to the load balancer’s internal interface.
● If SNAT is used to make the return traffic go back to the load balancer, the NATed IP must be in the load
balancer interface subnet range.
To add a route to the VIP on an ACI fabric, three options are available:
● Add a secondary IP on the bridge domain of the EPG for the load balancer’s external interface. This option
requires you to allocate the VIP subnet range in the ACI bridge domain, which might not be preferable if the
VIP is a public IP address or if multiple VIPs in the same subnet are owned by different load balancers
across different networks. (A configuration sketch of this option follows this list.)
● Add a /32 static route on the EPG for the load balancer’s external interface. This option supports /32
static route only. If you need a VIP subnet range or RHI, you need to use an L3Out to connect the load
balancer’s external interface instead of this option. This option is available on an EPG created by a user.
As of Cisco APIC Release 5.0, this option is not available in an internal service EPG created through
service graph rendering.
● Use an L3Out to add a static route or to establish dynamic routing neighborship with the load balancer.
This option requires an L3Out configuration for load balancer interface connectivity. This option supports
Route Health Injection (RHI), which requires dynamic routing to advertise the VIP from the load balancer.
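As a rough illustration of the first option, the following Python sketch adds an additional subnet to the “LB-Ext” bridge domain through the APIC REST API so that the fabric owns a route toward the VIP subnet. The APIC address, credentials, tenant and bridge domain names, and the 10.10.20.1/24 gateway are hypothetical examples, and the object usage should be verified against your APIC release.

```python
# Illustrative sketch only: the "secondary IP on the bridge domain" option above,
# expressed as an APIC REST call. Names, addresses, and credentials are examples.
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
session = requests.Session()
session.verify = False              # lab only; use proper certificates in production

# Authenticate to the APIC (standard aaaLogin call)
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Add an additional subnet to the "LB-Ext" bridge domain so the fabric can route
# toward the VIP subnet (example gateway 10.10.20.1/24).
vip_subnet = {
    "fvSubnet": {
        "attributes": {
            "ip": "10.10.20.1/24",
            "scope": "public",      # advertise outside the fabric if required
        }
    }
}
resp = session.post(f"{APIC}/api/mo/uni/tn-DemoTenant/BD-LB-Ext.json", json=vip_subnet)
resp.raise_for_status()
```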
The first two options don’t require an L3Out, hence the network design is the same as the examples already
covered. As a result, this section focuses on designs using an L3Out.
Note: This document does not cover how to create L3Out and L3Out design considerations. Refer to the
ACI Fabric L3Out Guide for details: https://www.cisco.com/c/en/us/solutions/collateral/data-center-
virtualization/application-centric-infrastructure/guide-c07-743150.html.
Figure 15.
Two-arm (inline) load balancer as gateway
Figure 16.
Two-arm (inline) load balancer with fabric as gateway
Either SNAT or PBR can be used to make the return traffic go through the load balancer. If PBR is used to
redirect return traffic (from the provider “Web” EPG to the consumer L3Out EPG “External” for an external
network) to the load balancer’s internal interface, it requires the unidirectional PBR feature that is available in
Cisco APIC Release 5.0.
Figure 18 provides an example of what this looks like. The VIP 10.10.20.100 is outside the load balancer
interface local subnet, 192.168.10.0/24. The L3Out “LB-Ext” for the load balancer interface is used to add the
route on the ACI fabric to reach VIP 10.10.20.100 through 192.168.10.254, which is the load balancer interface
IP. If RHI is enabled on the load balancer, use of dynamic routing on the L3Out “LB-Ext” is required to establish
dynamic routing peering between the ACI border leaf nodes and the load balancer.
As of APIC Release 5.0, ACI PBR can redirect traffic only to a load balancer interface that is connected to a
bridge domain, not to an interface on an L3Out. Therefore, SNAT must be enabled on the load balancer.
You can configure the devices of a redundant system to run in one of two redundancy modes: active/standby
mode and active/active mode.
Figure 19.
Active/standby Cisco Secure ADC topology
Active/active mode
With active/active mode, both devices are in an active state simultaneously; each device processes traffic for
different virtual servers (VIPs) or SNATs. If an event prevents one of the devices from processing traffic, the
other device begins processing that traffic in addition to its own. In this mode, both devices actively process
application traffic, each for a different application.
There are two methods to provide active/active configuration on the Cisco Secure ADC; one is through the use
of service groups, known as Service HA. The other option is by enabling multiple virtual instances of the ADC
on each platform; this is known as vADC.
The first diagram in Figure 20 provides an example of the Service HA, with two Cisco Secure ADCs, Secure-
ADC-1 and Secure-ADC-2. Each Secure ADC contains a service group that is active, with another service
group backing up the active service group on the alternate Secure ADC.
Figure 20.
Device groups with active/active configurations
vADCs provide more flexibility as each vADC on an appliance acts as an independent device, including its own
dedicated interfaces, VLANs, IP address space, routing domain, management, and so on. This provides
complete segregation from the underlying physical ADC and other vADCs. One of the key benefits of this is
administration. One vADC can be managed by one team, while the other vADC is managed by another.
● Any objects on a Secure ADC device that you configure for synchronization remain synchronized
between the two devices.
● Failover capability and connection mirroring are enabled on each device.
Figure 21.
Active/active Cisco Secure ADC topology
One consideration when deploying active/active mode is that an active/active architecture can handle higher
load than an active/standby architecture. The drawback is that if you lose one of the devices, the remaining
device may not be able to cope with the complete traffic load. In terms of functionality, the software module
supports active/active mode, but a single device may not be able to sustain the combined load of both
active/active devices after a failover, and there may be additional challenges when one device fails. For this
reason, even in active/active mode, the load on each device should never exceed 50 percent of the
performance of a single device.
There are two possible methods to monitor the health and session mirroring; one is to use a dedicated
interswitch link, and the other is to use network failover. Network failover leverages the data path interfaces to
provide health and session mirroring.
Interswitch links
An interswitch link is a direct connection between two Cisco Secure ADCs that is used for session mirroring
and to send health checks. Both the active and backup devices will monitor the health of the other device to
allow for fast failover without traffic interruption. When using active/active mode with vADCs, the same
interswitch link can be leveraged, and the messages for each pair of matching vADCs are uniquely identified by
an HA ID that is unique per vADC pair.
On the Secure ADC, multiple interfaces can be used to decide if the network failover should occur. Configuring
failover requires you to specify peer IP addresses and floating IPs on each device. Some of these IP addresses
enable continual, High-Availability (HA) communication among devices in the device group, while other
addresses ensure that application traffic processing continues when failover occurs.
The types of IP addresses on each Secure ADC device that can be used for network failover are:
● A local, static, ADC-IP address for VLAN “HA.” This unicast ADC-IP address is the main address that
other devices in the device group use to communicate continually with the local device to assess the
health of that device. When a device in the device group fails to receive a response from the local
device, the Secure ADC system triggers failover.
● A local management IP address. This unicast management IP address serves the same purpose as the
static ADC-IP address for VLAN “HA,” but it is only used when the local device is unreachable through
the “HA” static ADC-IP address.
In the case of using network failover for HA on the Secure ADC with ACI (Figure 22), if the network failover
traffic is carried outside of the Cisco ACI fabric (for example, when using interfaces that are connected to each
Secure ADC device directly), the Cisco ACI fabric doesn’t have to manage the failover network. If the network
failover traffic is carried within the Cisco ACI fabric, an EPG will need to be configured for the failover traffic.
Figure 22.
Failover network design option
For more information on Secure ADC high availability and session mirroring, refer to:
● HA – https://support.radware.com/app/answers/answer_view/a_id/16467/~/alteon-high-availability-mechanism
● Cisco ADC session mirroring – https://support.radware.com/app/answers/answer_view/a_id/15695/~/alteon-session-mirroring---limitations-and-recommendations
The other option to prevent endpoint learning from being disabled in the bridge domain is to enable “Rogue EP
Control”. If Rogue EP Control is enabled, Endpoint Move Dampening through Endpoint Retention Policy,
explained above, will not take effect. The APIC configuration location for “Rogue EP Control” is at System >
System Settings > Endpoint Controls > Rogue EP Control. This configuration is a fabric-wide setting and is
disabled by default.
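For reference, this fabric-wide setting can also be toggled programmatically. The sketch below is an assumption-laden example: the epControlP class and uni/infra/epCtrlP-default DN reflect common APIC usage but should be verified for your release, and the APIC address and credentials are placeholders.

```python
# Illustrative sketch only: enabling the fabric-wide "Rogue EP Control" setting via
# the APIC REST API instead of the GUI path above. Class/DN should be verified for
# your APIC release; address and credentials are placeholders.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False              # lab only

session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

rogue_ep_control = {
    "epControlP": {
        "attributes": {
            "name": "default",
            "adminSt": "enabled",   # fabric-wide setting; disabled by default
        }
    }
}
resp = session.post(f"{APIC}/api/mo/uni/infra/epCtrlP-default.json", json=rogue_ep_control)
resp.raise_for_status()
```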
For more details on ACI endpoint learning behavior and the configurations above, refer to sections “Endpoint
Retention Policy” and “Rogue EP Control” in the “ACI Fabric Endpoint Learning White Paper”:
https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-
infrastructure/white-paper-c11-739989.html.
Another scenario possible after Secure ADC failover takes place is that the new standby Secure ADC still sends
traffic using floating ADC-IPs and VIPs as source IP addresses. This will result in the ACI fabric learning the IPs
from multiple locations through the data plane. This issue can be avoided by disabling IP data-plane learning.
For more details on ACI IP data-plane learning and its use case, refer to the section “IP Data-plane Learning” in
the “ACI Fabric Endpoint Learning White Paper”: https://www.cisco.com/c/en/us/solutions/collateral/data-
center-virtualization/application-centric-infrastructure/white-paper-c11-739989.html#IPDataplaneLearning.
● To improve reliability and failover speed in “lossy” networks by minimizing Address Resolution Protocol
(ARP) table updates on servers and network devices that are in the same broadcast domain with the
Secure ADC system
● When using Policy-Based Redirect (PBR) on Cisco ACI
When configuring a virtual MAC for Secure ADC on VMware ESXi servers, you must set the virtual switch’s
Forged Transmits and Promiscuous Mode settings to Accept. By default, the Promiscuous Mode and Forged
Transmits settings are disabled. With VMM integration between the Cisco APIC and vCenter, the port-group
security settings are controlled by the APIC and cannot be changed directly on VMware vCenter or ESXi
servers. The APIC settings for the port-group security settings are available at the domain association
configuration under an EPG.
When enabled, Return to Last Hop allows the Secure ADC system to send return traffic from pools to the MAC
address that transmitted the request, even if the routing table points to a different network or interface. As a
result, the Secure ADC system can send return traffic to clients, even when there is no matching route. An
example would be when the Secure ADC system does not have a default route configured and the client is
located on a remote network. Additionally, Return to Last Hop is useful when the Secure ADC system is load
balancing transparent devices that do not modify the source IP address of the packet. If the Return to Last Hop
option is not enabled, the Secure ADC system may not return connections to the same transparent node,
resulting in asymmetric routing.
If ACI PBR is used and Return to Last Hop is enabled, “Source MAC Rewrite” might need to be enabled on ACI
PBR. Otherwise, the Secure ADC will use the original source MAC address, instead of the ACI bridge domain
MAC, as the destination MAC for return traffic, even if the next hop of the traffic should be the ACI bridge
domain MAC. See the Cisco ACI PBR white paper for more details:
https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-
infrastructure/white-paper-c11-739971.html.
IP Aging on ACI
Because a load balancer owns multiple IP addresses behind a single MAC address, it is recommended to
enable “IP Aging” on ACI. The APIC configuration location for IP Aging is at System > System Settings > IP
Aging. The default setting is disabled.
If IP Aging is disabled, an endpoint might keep unused IP addresses stuck on the same MAC address. For
example, if a VIP uses the same MAC as the ADC-IP and the VIP is later deleted, as shown in Figure 23, the ACI
fabric keeps the entry for the deleted VIP as long as traffic is received from that MAC. This is because the ACI
fabric sees all three components (the MAC, the ADC-IP, and the VIP) as one endpoint: if traffic is received from
any one of these components, the entries for all three are kept active.
Figure 23.
IP Aging policy use case
If IP Aging is enabled, the ACI fabric sends a unicast ARP packet at 75 percent of the configured endpoint
retention timer for all IP addresses that belong to the endpoint. If no response is received from that particular IP
address, it will be aged out of the endpoint table. (Note that the MAC address and responding IP address for
the endpoint will be retained.)
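As with the other fabric-wide endpoint controls, IP Aging can also be enabled through the APIC REST API. The sketch below is illustrative only: the epIpAgingP class and uni/infra/ipAgingP-default DN reflect common APIC usage and should be verified for your release, and the APIC address and credentials are placeholders.

```python
# Illustrative sketch only: enabling the fabric-wide "IP Aging" policy via the APIC
# REST API rather than the GUI path above. Class/DN should be verified for your
# APIC release; address and credentials are placeholders.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False              # lab only

session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

ip_aging = {
    "epIpAgingP": {
        "attributes": {
            "name": "default",
            "adminSt": "enabled",   # disabled by default
        }
    }
}
resp = session.post(f"{APIC}/api/mo/uni/infra/ipAgingP-default.json", json=ip_aging)
resp.raise_for_status()
```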
● Role-Based Access Control (RBAC) to create separate users for each tenant
● Network isolation for each tenant
● Security isolation for each tenant
● Allowing communication between tenants
A tenant in the ACI object model represents the highest level object. A tenant consists of networking-related
objects such as VRFs, bridge domains and subnets, and policy-related objects such as application profiles,
EPGs, and contracts, as shown in Figure 24. A tenant could be a unique customer, an operating group, a
business unit, an application, and so on.
Figure 24.
Cisco ACI tenant logical model
As of Cisco APIC Release 4.2(3), an ACI fabric supports up to 3000 tenants and 3000 VRFs. Refer to the ACI
verified scalability guide for the latest status: https://www.cisco.com/c/en/us/support/cloud-systems-
management/application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
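To make the tenant model concrete, the following Python sketch creates a tenant with a VRF, a bridge domain, and a subnet through the APIC REST API, mirroring the hierarchy in Figure 24. All names, addresses, and credentials are hypothetical examples, not values taken from this guide.

```python
# Illustrative sketch only: creating a tenant with a VRF, a bridge domain, and a
# subnet, mirroring the logical model in Figure 24. Names, addresses, and
# credentials are hypothetical examples.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False              # lab only

session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

tenant = {
    "fvTenant": {
        "attributes": {"name": "Customer-A"},
        "children": [
            {"fvCtx": {"attributes": {"name": "VRF1"}}},
            {"fvBD": {
                "attributes": {"name": "Web-BD"},
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "VRF1"}}},   # bind the BD to the VRF
                    {"fvSubnet": {"attributes": {"ip": "10.10.10.1/24"}}},  # BD gateway subnet
                ]}},
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant)
resp.raise_for_status()
```

Because each tenant defines its own VRFs and bridge domains, repeating this for another tenant yields logically isolated networks on the same fabric, which is the basis of the network isolation discussed next.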
Network isolation
Objects such as VRFs, BDs, EPGs, and contracts defined in a tenant are not visible from other tenants unless
objects are defined in a common tenant. Thus, the typical multitenant design is deployed so that each tenant
has unique VRFs and BDs (Figure 26). The result is that EPGs in different tenants can’t be in the same network
(VRF/BD), which means each tenant network is logically isolated, even though both VRFs are deployed in the
same ACI fabric.
Figure 26.
Use of unique VRFs and BDs provides network isolation
Figure 27.
Use of common VRFs/BDs still provides security isolation
EPGs in different tenants can’t have a contract unless a contract is exported from the provider tenant to the
consumer tenant, or a contract defined in a common tenant is used.
A vADC is a virtualized instance of the OS that behaves in the same manner as a traditional hardware ADC, with
the exception that, while it is bound to a specific hardware resource, the amount of resources allocated to the
vADC may vary based on the user’s or application’s resource needs. This enables you to run multiple
independent and private vADCs that vary in their processing power. Each vADC comprises a vSP (virtualized
switch processor) and a vMP (virtualized management processor), providing the vADCs with their own set of
resources, network infrastructure, and services that are completely independent of neighboring vADCs. This
enables multiple users to run vADCs and allocate resources to these vADCs without introducing any risk to the
other vADCs within the shared physical environment.
vADC domain
A vADC domain creates a strictly defined set of resources, including a routing table. Each vADC domain
contains IP address spaces, routing information, and VLANs. IP address spaces can be duplicated between
domains, allowing easy reuse of RFC 1918 private addressing for multiple customers or projects. Route
domains can be strictly isolated from one another or have explicitly controlled access between them. This
allows a common “front-end” network space to be presented to an access network but with services running
within dedicated “tenant” network spaces. System resources are explicitly dedicated for each domain, ensuring
that performance and SLA can be guaranteed. This design allows for the most efficient use of system resources
since each domain will consume only the resources it is allocated (see Figure 29).
Figure 29.
Examples of vADC domains
vADC administrator
The vADC administrator manages Layer 3 and server load balancing functionality, controlling the service and/or
application policies and performance.
User roles
A user role is a property of a Cisco Secure ADC administrative user account. For each Secure ADC user
account, you can assign a different user role to each vADC to which the user has access. This allows you to
assign multiple user roles to each user account on the system.
The tasks that a user can perform are based on the assigned user role. For example, a user with the operator
role can view settings but cannot change them. Conversely, a user with the administrator role can perform all
tasks related to the objects within the scope of that role.
The Secure ADC system offers several different user roles that you can choose from when assigning roles to a
user account. Each user role grants a different level and type of permissions to the user.
Note: You must have an administrator user role to assign user roles to user accounts.
Figure 30.
Cisco ACI and Secure ADC multitenancy mapping
Another method to achieve multitenancy, which includes appliance-based separation along with administrative
and network separation, is to use one dedicated Secure ADC device per APIC tenant.
The PBR section in the document includes a traffic flow example for the load balancer.