
Implementing Cisco

Application Centric Infrastructure


CCIEx4 #8593 & CCDE #2013::13

Course Overview

+ Course follows the blueprint for 300-620 DCACI – Implementing Cisco Application Centric Infrastructure
+ Course Topics:
+ ACI Topology & Hardware Overview
+ ACI Initialization & Fabric Discovery
+ ACI Management
+ Managing ACI Configurations
+ Understanding the ACI Object Model
Course Overview

+ Course Topics
+ Implementing ACI Fabric & Access Policies
+ Implementing ACI Tenant Policies
+ Implementing Contracts
+ Implementing Taboo Contracts
+ Implementing the VZAny EPG
+ Implementing Preferred Group Members
+ Implementing VRF Policy Enforcement
+ Implementing Layer 2 Out
+ Implementing Layer 3 Out
+ ACI and VMware Integration
+ Service Graphs Overview
+ Implementing Service Graphs in Unmanaged Mode
+ Implementing Service Graphs in Managed Mode
Implementing Cisco
Application Centric Infrastructure
Introduction
CCIEx4 #8593 & CCDE #2013::13
Course Overview

+ Course follows the blueprint for 300-620 DCACI – Implementing Cisco Application Centric Infrastructure
+ Recommended Reading - CCNP Data Center Application Centric
Infrastructure DCACI 300-620 Official Cert Guide
+ Course Topics:
+ ACI Topology & Hardware Overview
+ ACI Initialization & Fabric Discovery
+ ACI Management
+ Managing ACI Configurations
+ Understanding the ACI Object Model
Course Overview

+ Course Topics
+ Implementing ACI Fabric & Access Policies
+ Implementing ACI Tenant Policies
+ Implementing Contracts
+ Implementing Taboo Contracts
+ Implementing the VZAny EPG
+ Implementing Preferred Group Members
+ Implementing VRF Policy Enforcement
+ Implementing Layer 2 Out
+ Implementing Layer 3 Out
+ ACI and VMware Integration
+ Service Graphs Overview
+ Implementing Service Graphs in Unmanaged Mode
+ Implementing Service Graphs in Managed Mode
Implementing Cisco
Application Centric Infrastructure
ACI Topology & Hardware Overview
What is ACI?

+ Application Centric Infrastructure (ACI)


+ Cisco’s Software Defined Networking (SDN) solution for the Data Center
+ Network automation through a policy-based model
ACI Behind the Scenes

+ What really is ACI?


+ An automated VXLAN overlay tunnel system
+ Supports both layer 2 and layer 3 VXLAN gateways
+ Underlay network uses IS-IS for VTEP reachability and MP-BGP for route distribution
+ Uses a whitelist model by default
+ Leafs are VXLAN Tunnel Endpoints (VTEPs)
+ Provides VTEP to VTEP IP transport through spines
+ Details are hidden from the frontend
+ ACI’s VXLAN is meant to be plug-and-play
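Since ACI is at heart a VXLAN overlay, it helps to see how small the encapsulation actually is. A minimal sketch of the standard RFC 7348 VXLAN header with its 24-bit VNID (not ACI-specific; ACI uses an extended iVXLAN header that also carries policy information):

```python
import struct

def vxlan_header(vni):
    # RFC 7348 VXLAN header: 8 bytes total.
    # Byte 0 carries the I flag (0x08) marking a valid VNI;
    # the 24-bit VNI sits in the second word, followed by a reserved byte.
    return struct.pack("!II", 0x08 << 24, vni << 8)

def vxlan_vni(header):
    # Recover the 24-bit VNI from the second 4-byte word
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

hdr = vxlan_header(16777200)
print(len(hdr), vxlan_vni(hdr))  # 8 16777200
```

Eight bytes of header buy a 16-million-segment ID space, which is why one fabric can carry many tenants and bridge domains without VLAN-ID exhaustion.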
ACI Hardware Components

+ Application Policy Infrastructure Controller (APIC)


+ Spine switches
+ Leaf switches
What is APIC?

+ APIC is the brains of the ACI system


+ I.e. the Controller
+ APIC runs on a dedicated server or as a virtual machine
+ We program the APIC via GUI, CLI, or APIs, the APIC programs the
Spines and Leafs
+ Leafs and Spines have a CLI, but it’s read-only
How APIC Works

+ APIC runs in clusters of 3, 5, or 7 servers


+ Number of APICs depends on number of Leaf ports
+ See Verified Scalability Guide for Cisco APIC
+ Database is sharded across all available APICs
+ Think like a RAID set, but for the database
+ APIC is not in the data plane
+ If an APIC fails, traffic still forwards uninterrupted
+ If all but 1 APIC fails, the fabric configuration goes into read-only mode
Application Policy Infrastructure Controller (APIC) Hardware

+ APIC has 3 generations of hardware


+ APIC-M1 & APIC-L1 - UCS C220 M3 servers
+ APIC-M2 & APIC-L2 - UCS C220 M4 servers
+ APIC-M3 & APIC-L3 - UCS C220 M5 servers
+ More information at Cisco Application Policy Infrastructure Controller
Data Sheet
Nexus 9000 ACI Switches

+ Currently two main Nexus 9K ACI lines


+ Nexus 9500 modular ACI spines
+ Nexus 9300 fixed ACI spines & leafs
+ Nexus 9200 is NX-OS mode only - no ACI support
+ ACI and NX-OS modes are mutually exclusive
+ Essentially two unrelated firmware versions
+ Some switches support both modes, some only one
+ More info at Cisco Nexus 9000 Series Switches Compare Models
+ Low level details at Cisco Live BRKARC-3222 - Cisco Nexus 9000
Architecture
What are Spine Switches?

+ Spines connect to all Leafs in a standard topology


+ No east/west interconnect between Spines
+ Spines run IS-IS and MP-BGP to Leafs
+ Spines form the collapsed core for high speed forwarding between Leafs
+ Spines also host the endpoint database
+ Uses the Council of Oracle Protocol (COOP)
+ More on this later...
Spine Switch Hardware

+ Spine switches currently have 2 generations


+ Gen 1
+ 9336PQ Fixed Spine and 9736PQ Linecard
+ Gen 2
+ Model numbers that end in C, EX, FX, and GX
+ Cloud Scale 1/10/25/40/100/400G are shipping
+ More information at Cisco Nexus 9000 Series Switches Compare Models
What are Leaf Switches?

+ Leafs connect northbound to all Spines, and southbound to end devices


+ Servers, switches, routers, firewalls, load balancers, etc.
+ No east/west interconnect between Leafs
+ Leafs run IS-IS and MP-BGP to Spines
+ Leafs perform the VXLAN encapsulation/decapsulation between access
ports and fabric ports
+ Leafs perform most of the work in the ACI fabric
+ Spines simply forward IP packets between Leaf switches; they do not
participate in VXLAN
Leaf Switch Hardware

+ Like Spines, currently 2 generations of Leaf Switches


+ Gen 1
+ All Nexus 9300 Series ACI leaf switches whose model numbers end in
PX, TX, PQ, PX-E, and TX-E are considered first-generation leaf
switches.
+ Gen 2
+ ACI leaf switches whose model numbers end in EX, FX, FX2, and FXP
are considered second-generation leaf switches.
+ Cloud Scale 1/10/25/40/100/400G are shipping
+ More information at Cisco Nexus 9000 Series Switches Compare Models
Standard ACI Topology

+ ACI uses a Clos Fabric topology


+ Sometimes called a folded Clos Fabric, 2 Stage Fabric, or “fat-tree”
+ All Leafs uplink to all Spines
+ APICs are dual connected to Leafs
+ No xconnects between Leafs
+ No xconnects between Spines
+ Traffic flow is always Leaf - Spine - Leaf
+ Scale out bandwidth by adding more Spines
Standard ACI Topology Visualized
Class ACI Topology

[Diagram: class topology. An APIC-M2 (UCS C220 M4) is dual-homed via Ten0/Ten1 to N9K Leaf1 and Leaf2, which both uplink (E1/49) to N9K Spine1. N2K1/N2K2 fabric extenders, UCS1-FI-A/B and UCS2-FI-A/B Fabric Interconnects, and external switches N7K1 through N7K12 attach to the Leaf downlink ports.]
Implementing Cisco
Application Centric Infrastructure
ACI Initialization & Fabric Discovery
Initializing the ACI Fabric

+ Step 1: Console to the APIC to configure the CIMC IP Address


+ Step 2: Connect to the CIMC via SSH and reverse telnet to APIC
+ Step 3: Follow the initial configuration dialogue via the CLI
+ Step 4: Connect to the APIC via the GUI
+ Step 5: Commission the Leafs and Spines
Connecting to APIC via CIMC SOL

N7K1# ssh admin@192.168.0.3 vrf management


admin@192.168.0.3's password:
C220-FCH2025V10S# show sol
Enabled Baud Rate(bps) Com Port
------- --------------- --------
yes 9600 com0
C220-FCH2025V10S# scope sol
C220-FCH2025V10S /sol # set baud-rate 115200
C220-FCH2025V10S /sol *# commit
C220-FCH2025V10S /sol # exit
C220-FCH2025V10S# connect host
CISCO Serial Over LAN:
Close Network Connection to Exit

Application Policy Infrastructure Controller


apic1 login: admin
Password:
apic1#
Resetting the APIC

apic1# eraseconfig setup


Do you want to restore this APIC to factory settings? The
system will be REBOOTED. (Y/n): Y

Broadcast message from root@apic1


(unknown) at 15:40 ...

The system is going down for reboot NOW!


apic1#
Resetting the Leafs & Spines

spine1# setup-clean-config.sh
In progress
In progress
In progress
In progress
Done
spine1# reload
This command will reload the chassis, Proceed (y/n)? [n]: y
How Fabric Discovery Works

+ APICs are connected in-band to Leafs via VICs


+ APIC discovers Leaf via LLDP and assigns DHCP address
+ APIC commissions Leaf with unique Node ID
+ APIC learns about all Spines through Leaf
+ APIC commissions Spines with unique Node IDs
+ APIC learns about all Leafs through Spines
+ Remaining Leafs are commissioned with unique Node IDs
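The commissioning state shown in the GUI can also be read over the REST API. A hedged sketch of the two building blocks, the login body POSTed to /api/aaaLogin.json and a class-level query URL (the hostname is a placeholder; fabricNode is the class behind Fabric > Inventory > Fabric Membership):

```python
import json

def login_body(user, pwd):
    # Body POSTed to https://<apic>/api/aaaLogin.json; APIC answers
    # with a session token used as a cookie on subsequent requests
    return {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}

def class_query(apic, cls):
    # Class-level read: returns every instance of the class fabric-wide
    return f"https://{apic}/api/node/class/{cls}.json"

print(class_query("apic1.example.com", "fabricNode"))
print(json.dumps(login_body("admin", "secret")))
```

Feeding these into any HTTP client against a reachable APIC returns the same node list as `show fabric membership`.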
Configuring Fabric Discovery

+ Fabric > Inventory > Fabric Membership


+ Assign Node ID, Node Name, & optional Rack Name
+ Continue process for other Spines and Leafs
Implementing Cisco
Application Centric Infrastructure
ACI Out-of-Band Management
Managing APIC and Fabric Nodes

+ APIC connects to the Out-of-Band (OOB) management network via dedicated mgmt0
+ HTTPS to this address is the APIC GUI
+ SSH to this address is the APIC CLI
+ APIC also connects inband to Leafs via Tenant infra VRF overlay-1
+ APIC can be used as a jumpbox to reach the Leafs and Spines CLI
Navigating the APIC GUI

+ System
+ Reports, e.g. “faults” and “health scores”
+ Tenants
+ Tenant is a container for policies
+ Most operations work is done here
+ VRFs, Bridge Domains, Subnets, Endpoint Groups, Contracts
+ Fabric
+ Physical connectivity, Inventory, vPCs, Port Channels, Routing, VLANs, etc.
+ Fabric Policies means uplinks from Leafs to Spines (i.e. the underlay)
+ Access Policies means downlinks from Leafs to endpoints (e.g. access ports)
Navigating the APIC GUI

+ Virtual Networking
+ VMware, Hyper-V, KVM, etc. integration
+ L4 – L7 Services
+ Plugins for firewalls, load balancers, etc.
+ Called Service Graphs
+ Admin
+ AAA, firmware, config rollback/import/export, etc.
+ Operations
+ Troubleshooting tools
+ Apps
+ Apps the APIC can run on box
+ Integrations
+ APIC integration with UCSM
Viewing the Topology from GUI

+ APIC GUI can show visual view of physical topology


+ Fabric > Inventory > Topology
+ Double clicking nodes displays details
+ Similar to UCS
Viewing the Topology from CLI

apic1# bash
admin@apic1:~> show fabric membership
clients:
serial-number  node-id  node-name  model          role   ip              decommissioned  supported-model
-------------  -------  ---------  -------------  -----  --------------  --------------  ---------------
SAL2024RRMD    101      leaf1      N9K-C9372PX-E  leaf   10.0.184.95/32  no              yes
SAL2024RRQC    102      leaf2      N9K-C9372PX-E  leaf   10.0.184.93/32  no              yes
SAL2022R22Y    201      spine1     N9K-C9336PQ    spine  10.0.184.94/32  no              yes
Connecting to Nodes from CLI

apic1# bash
admin@apic1:~> attach leaf1
# Executing command: ssh leaf1

Password:
Last login: Fri Aug 5 00:01:26 2016 from 10.0.0.1
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2016, Cisco Systems, Inc. All rights reserved.
<snip>
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
leaf1#
Verifying Underlay Routing from CLI

leaf1# show ip route vrf overlay-1


IP Route Table for VRF "overlay-1"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

10.0.0.0/27, ubest/mbest: 1/0, attached, direct


*via 10.0.0.30, vlan7, [1/0], 01:17:41, direct
10.0.0.30/32, ubest/mbest: 1/0, attached
*via 10.0.0.30, vlan7, [1/0], 01:17:41, local, local
10.0.0.32/32, ubest/mbest: 2/0, attached, direct
*via 10.0.0.32, lo1023, [1/0], 00:52:03, local, local
*via 10.0.0.32, lo1023, [1/0], 00:52:03, direct
10.0.136.64/32, ubest/mbest: 1/0
*via 10.0.184.94, eth1/49.1, [115/2], 00:41:21, isis-isis_infra, L1
<snip>
APIC CLI Quick Reference

+ Enter bash shell


+ apic1# bash
+ Verify topology
+ admin@apic1:~> show fabric membership
+ SSH to fabric nodes
+ admin@apic1:~> attach leaf1
Leaf & Spine CLI

+ CLI has context sensitive help


+ Activated with <tab><tab>, not ?

leaf2# show isis <tab><tab>


adjacency dteps interface protocol statistics
database event-history internal route traffic
leaf2# show isis adjacency <tab><tab>
detail event-history system-id
ethernet internal vrf
leaf2# show isis adjacency detail vrf all <cr>
IS-IS process: isis_infra VRF:overlay-1
IS-IS adjacency database:
System ID SNPA Level State Hold Time Interface
C900.0000.0000 N/A 1 UP 00:00:53 Ethernet1/49.1
NX-OS Like CLI Supports Common Commands

+ show vrf
+ show ip route vrf overlay-1
+ show vlan
+ show vlan extended
+ show mac address-table
+ show interface
+ show interface status
+ show cdp neighbor
+ show lldp neighbor
+ show port-channel summary
+ show port-channel traffic
+ show vpc
Static Node Management Addresses

+ Leaf and Spine Nodes also support direct mgmt0 IP connectivity


+ ACI uses Tenant mgmt VRF oob for this connectivity
+ APIC assigns Leaf and Spine mgmt0 addresses under Tenants > mgmt > Node Management Addresses > Static Node Management Addresses
Implementing Cisco
Application Centric Infrastructure
Managing ACI Configurations
ACI Configuration Snapshots

+ ACI supports configuration snapshots and rollback


+ Configured under Admin > Config Rollbacks
+ Snapshots can be stored on APIC itself or sent to a remote ftp/scp/sftp
server
+ Snapshots can also be scheduled to recur automatically
ACI Configuration Import/Export

+ Configuration Import and Export is controlled under Admin > Import/Export
+ By default, a config snapshot runs every 8 hours and is saved locally to the APIC
+ Admin > Import/Export > Export Policies > Configuration >
DailyAutoBackup
+ A Remote Location defines an ftp/scp/sftp server location and
credentials for exporting snapshots to
+ An Import Policy defines if the configuration will be Merged or
Replaced, and if the operation is Atomic or Best Effort
Implementing Cisco
Application Centric Infrastructure
Understanding the ACI Object Model
The ACI Object Model

+ The Cisco ACI Policy Model Guide defines how APICs store an object
hierarchy called the Management Information Model (MIM)
+ The MIM forms a hierarchical Management Information Tree (MIT) called
the Policy Universe
+ Policy Universe contains all the objects that we can modify through CLI, GUI,
or APIs
The ACI Policy Universe Object Hierarchy Visualized
The ACI Policy Universe

+ APIC controllers
+ An APIC cluster is typically 3 controllers providing management and monitoring of
the ACI fabric
+ Tenants
+ Container for policies that allows for access control and configuration fault
isolation
+ Fabric policies
+ Fabric policies apply at switch or pod level to control protocols running on the Leaf
to Spine Fabric ports, such as NTP, IS-IS, BGP, and DNS
+ Access policies
+ Access policies apply to the southbound facing Leaf ports, and control protocols
such as CDP, LLDP, & LACP. Access policies also control what type of device
connects to the leaf, such as a server, switch, router, or appliance
The ACI Policy Universe

+ Virtual networking
+ ACI integrations with hypervisor or container environments, referred to as
VMM domains
+ Layer 4 to Layer 7 Services
+ APIC can push configurations to services such as firewalls and load
balancers and selectively steer traffic to L4–L7 appliances, referred to as a
Service Graph
+ Authentication, authorization, and accounting (AAA)
+ AAA for user privileges, Role Based Access Control (RBAC), and security
domains allow for multitenancy in ACI
Tenant Object Hierarchy Visualized
Tenant Objects

+ Outside Network
+ Connection to a router or legacy switch
+ Application Profile
+ Container for Endpoint Groups (EPGs)
+ Endpoint Group
+ Logical construct where policy enforcement occurs
+ Objects within the same EPG can talk to each other by default
+ Objects in different EPGs cannot talk to each other by default
+ Typically defines the application
+ E.g. EPG “WEB_SERVERS”
Tenant Objects

+ Bridge Domain
+ The Layer 2 forwarding construct
+ Behaves effectively like a VLAN, but technically not a VLAN
+ Think of a bridge domain as a broadcast domain
+ Subnet
+ Subnet is the distributed Anycast Layer 3 gateway
+ I.e. the default gateway for your servers
+ Subnet exists on all Leafs where its Bridge Domain and an EPG are deployed
+ VRF
+ Virtual Routing and Forwarding Instance
+ Same as in regular NX-OS and IOS
+ Previously called a Private Network
+ By default, no communication between VRFs
Tenant Objects

+ Contract
+ A Contract is the traffic policy between EPGs
+ E.g. allow access from EPG WEB_CLIENTS to EPG WEB_SERVERS at
TCP Port 80 and 443
+ Contracts have Providers and Consumers
+ Provider offers the service
+ E.g. the web server
+ Consumer uses the service
+ E.g. the web client
+ Provider/Consumer effectively defines the traffic flow direction of the policy
+ Subject
+ Subject is the container for Filters, like an ACL
+ Filter
+ An Access List entry
Access Policies Object Hierarchy Visualized

[Diagram: Access Policy object chain. VLAN Pool > Domain > Attachable Access Entity Profile > Interface Policy Group (calls Interface Policies) > Interface Selector > Interface Profile; Switch Profile contains Switch Selectors, calls the Interface Profile, and references a Switch Policy Group with its Switch Policies.]

Access Policy Objects

+ Domain
+ The link between Tenant Policies and Access Policies
+ Binds an EPG to access or virtual networking policies
+ Consumes a single VLAN pool
+ Controls what type of device connects to the Leaf ports
+ Physical/Baremetal Domain
+ Any device unmanaged by APIC
+ External Bridge Domain
+ Connection to a non-ACI Switch
+ Also known as L2out
+ External Routed Domain
+ Connection to a Router
+ Also known as L3out
+ Fibre Channel Domain
+ FCoE devices
+ Virtual Machine Manager (VMM) domain
+ A hypervisor that APIC has plugins for
+ E.g. integration with VMware DVS
Access Policy Objects

+ VLAN Pool
+ Controls which VLAN encapsulations a Tenant can use
+ VLANs can be static or dynamic
+ Dynamic VLANs are automatically assigned using VMM integration
+ Attachable Access Entity Profile (AAEP)
+ Controls which Leaf ports a Tenant can apply policies on
+ Each Leaf interface belongs to one AAEP
+ Each AAEP may contain multiple domains
Access Policy Objects

+ Interface Profile
+ Container for Interface Selectors
+ Interface Selector
+ Port or range of ports on the leaf
+ E.g. 1/1-2
+ Calls the Interface Policy Group
+ Interface Policy Group
+ Calls the AAEP and the Interface Policies
+ Interface Policies
+ Controls protocols running on the Leaf ports
+ E.g. CDP, LLDP, LACP, MCP
Access Policy Objects

+ Switch Profile
+ Container for Switch Selectors
+ Calls the Interface Profile
+ Switch Selector
+ Calls the Node ID of the Switch
+ Calls the Switch Policy Group for that Node ID
+ Switch Policy Group
+ Container for Switch Policies
+ Switch Policies
+ Policies that apply Switch wide
+ E.g. BFD Timers, CoPP, Netflow timers, etc.
Using the GUI Show Usage

+ How objects are bound together is the biggest ACI learning curve
+ The GUI allows you to trace an object to see which other objects it calls,
and which objects call it
+ Most GUI screens support this as the Show Usage option at the bottom
of the window
Using the GUI Debug View

+ Object names can be found using the debug view on the APIC GUI
+ Help and Tools > Show Debug Info
Using the API Inspector

+ Every action in the GUI is really an API call against APIC


+ Help and Tools > Show API Inspector
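Responses captured in the API Inspector share one envelope: a totalCount plus an imdata array of objects keyed by class name. A small sketch that flattens that envelope into plain attribute dicts (the sample payload is illustrative, not captured from a real APIC):

```python
# Illustrative response in the APIC "imdata" envelope format
sample = {
    "totalCount": "2",
    "imdata": [
        {"fabricNode": {"attributes": {"id": "101", "name": "leaf1", "role": "leaf"}}},
        {"fabricNode": {"attributes": {"id": "201", "name": "spine1", "role": "spine"}}},
    ],
}

def attributes(response, cls):
    # Pull the attributes dict out of each imdata object of the given class
    return [o[cls]["attributes"] for o in response["imdata"] if cls in o]

for node in attributes(sample, "fabricNode"):
    print(node["id"], node["name"], node["role"])
```

The same helper works for any class the Inspector shows, since every APIC response uses this envelope.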
Exploring the Object Hierarchy by Using Visore

+ Visore is a tool built into APIC for browsing the object tree
+ Access at https://APIC-IP/visore.html
Implementing Cisco
Application Centric Infrastructure
Implementing ACI Fabric & Access Policies
What are Fabric Policies?

+ Fabric Policies affect Fabric Ports, which are the Leaf to Spine links
+ Fabric > Fabric Policies
+ Examples of Fabric Policies
+ NTP
+ IS-IS Timers
+ BGP Route Reflection
+ SNMP
What are Access Policies?

+ Access Policies affect southbound facing ports on Leafs


+ Fabric > Access Policies
+ Examples of Access Policies
+ CDP
+ LLDP
+ LACP
+ MCP
Access Policies Object Hierarchy Visualized

[Diagram: Access Policy object chain. VLAN Pool > Domain > Attachable Access Entity Profile > Interface Policy Group (calls Interface Policies) > Interface Selector > Interface Profile; Switch Profile contains Switch Selectors, calls the Interface Profile, and references a Switch Policy Group with its Switch Policies.]

Using the Access Policy Quick Start Wizard

+ Browse to Fabric > Access Policies > Quick Start > Configure an
Interface, PC, and VPC
+ Disadvantage of the Wizard is the naming conventions
+ Can’t rename an object once it’s created
+ Good for building a skeleton config of objects to know the
interconnections
Implementing Cisco
Application Centric Infrastructure
Implementing ACI Tenant Policies
Review of Tenant Objects

+ Application Profile
+ Endpoint Group
+ Bridge Domain
+ Subnet
+ VRF
+ Contract
+ Subject
+ Filter
Tenant Object Hierarchy Visualized
Creating Tenant Objects

+ Create a new Tenant under Tenants > Add Tenant


+ There are many different wizards for creating the same types of objects;
however, the general workflow is as follows
+ Create a VRF
+ Create a Bridge Domain and associate it to the VRF
+ Create a Subnet under the Bridge Domain for your default gateway
+ Create an Application Profile
+ Create an Endpoint Group and associate it to the Bridge Domain
+ Associate the EPG to the relevant Domain
+ Deploy the EPG on the Leaf port previously initialized
+ This is where access or trunking is defined
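The workflow above maps one-to-one onto the object tree APIC stores. A hedged sketch of the equivalent REST payload built as a plain dict (fvTenant, fvCtx, fvBD, fvAp, and fvAEPg are the standard object-model class names; names like PROD are placeholders):

```python
import json

def tenant_tree(tenant, vrf, bd, gw, ap, epg):
    # Tenant > VRF > Bridge Domain (+ Subnet) > App Profile > EPG,
    # mirroring the manual workflow: the BD references the VRF via
    # fvRsCtx, and the EPG references the BD via fvRsBd
    return {"fvTenant": {"attributes": {"name": tenant}, "children": [
        {"fvCtx": {"attributes": {"name": vrf}}},
        {"fvBD": {"attributes": {"name": bd}, "children": [
            {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}},
            {"fvSubnet": {"attributes": {"ip": gw}}}]}},
        {"fvAp": {"attributes": {"name": ap}, "children": [
            {"fvAEPg": {"attributes": {"name": epg}, "children": [
                {"fvRsBd": {"attributes": {"tnFvBDName": bd}}}]}}]}}]}}

# POSTing this tree to https://<apic>/api/mo/uni.json creates the whole branch
payload = tenant_tree("PROD", "PROD_VRF", "WEB_BD", "10.0.0.1/24", "WEB_AP", "WEB_SERVERS")
print(json.dumps(payload, indent=2)[:60])
```

One POST of a nested tree replaces half a dozen wizard screens, which is why most ACI automation works at this payload level.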
Implementing Cisco
Application Centric Infrastructure
Implementing Contracts
Implementing Contracts

+ Contracts are the firewall filtering construct in ACI


+ Without a contract, communication between two EPGs is blocked
+ Traffic must be whitelisted by the contract
+ Contracts contain Subjects which contain Filters
+ Filters are reusable between contracts
+ Contracts have Providers and Consumers
+ Provider offers the service, e.g. EPG WEB_SERVERS
+ Consumer uses the service, e.g. EPG WEB_CLIENTS
+ This determines the ACL direction
+ Reverse filtering is applied from Provider to Consumer by default
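The whitelist behavior can be sketched as a pure function: traffic passes only if some contract is consumed by the source EPG, provided by the destination EPG, and carries a matching filter. A simplified model (real contract processing also involves subjects, directions, and priorities, which are omitted here):

```python
def allowed(src_epg, dst_epg, contracts, dport):
    # Same EPG: permitted by default, no contract needed
    if src_epg == dst_epg:
        return True
    # Different EPGs: some contract must be consumed by the source
    # and provided by the destination, with a filter matching the port
    for c in contracts:
        if (src_epg in c["consumers"] and dst_epg in c["providers"]
                and dport in c["ports"]):
            return True
    return False

web = {"providers": {"WEB_SERVERS"}, "consumers": {"WEB_CLIENTS"}, "ports": {80, 443}}
print(allowed("WEB_CLIENTS", "WEB_SERVERS", [web], 443))  # True
print(allowed("WEB_CLIENTS", "WEB_SERVERS", [web], 22))   # False: no filter match
print(allowed("DB_SERVERS", "WEB_SERVERS", [web], 443))   # False: DB consumes nothing
```

Note the asymmetry: swapping provider and consumer changes the answer, which is exactly why the provider/consumer assignment defines policy direction.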
Implementing Cisco
Application Centric Infrastructure
Implementing Taboo Contracts
Implementing Taboo Contracts

+ Taboo Contracts are a special type of contract that can be used to deny
traffic otherwise permitted by another contract
+ Example:
+ Regular contract says permit any, taboo says deny Telnet
+ Result is Telnet is denied and all other traffic is permitted
+ Taboo Contracts are processed before regular contracts
+ Taboo Contracts apply inbound to an EPG
+ I.e. not between two EPGs, but from Any to EPG
Implementing Cisco
Application Centric Infrastructure
Implementing the VZAny EPG
Implementing the VZAny EPG

+ To apply contracts to all endpoint groups within a VRF instance, contracts can be applied directly to the VRF instance
+ This is referred to as the vzAny EPG or just ANY EPG
+ Example:
+ Allow ANY EPG to talk to EPG WEB_SERVERS at TCP Ports 80 & 443
+ EPG WEB_SERVERS Provides contract PERMIT_WEB
+ EPG ANY Consumes contract PERMIT_WEB
Implementing Cisco
Application Centric Infrastructure
Implementing Preferred Group Members
Implementing Preferred Group Members

+ By default, ACI only allows communication between EPGs if a contract is configured between them
+ The Preferred Group allows a group of EPGs in the same VRF to
communicate fully without applying a contract
+ The Preferred Group option must be first enabled under the VRF
+ Tenant > Networking > VRFs > VRF > Policy
+ Add the EPG to the Preferred Group
+ Tenant > Application Profile > App > Application EPGs > EPG > Policy >
General
Implementing Cisco
Application Centric Infrastructure
Implementing VRF Policy Enforcement
Implementing VRF Policy Enforcement

+ By default, traffic between EPGs is dropped unless a contract permits it


+ This whitelist model is controlled by the option Policy Control
Enforcement Preference under the VRF
+ Tenant > Networking > VRFs > VRF > Policy
+ If this option is set to Unenforced, all EPGs in the VRF can talk
+ Useful for troubleshooting a reachability issue between endpoints
+ E.g. is it a contract problem or is it a routing problem?
Implementing Cisco
Application Centric Infrastructure
Implementing Layer 2 Out
ACI Layer 2 External Network Connectivity

+ In a Brownfield deployment, legacy applications may need to be migrated into ACI Fabric through the usage of Classical Ethernet
+ Classical Ethernet Switches connect to ACI Leaf Switches via
802.1Q Trunks
+ ACI Leafs do not run Spanning-Tree Protocol
+ The next question is, do you want to apply policy between the
legacy app and the ACI Fabric?
+ If no policy, Extend the EPG Out of the Fabric
+ If policy required, Extend the Bridge Domain Out of the Fabric
Extending the EPG Out of the Fabric

+ Extending the EPG Out of the Fabric is as simple as statically deploying the EPG on the port/PC/vPC connecting to the external layer 2 network
+ Devices on the inside of the fabric and outside of the fabric are in
the same EPG
+ This means by default all traffic is allowed between the inside and
outside hosts without a contract
+ The connection between ACI and the Classical Ethernet
Switch(es) should be a Physical/Baremetal Domain
Implementing Cisco
Application Centric Infrastructure
Implementing Layer 2 Out – Extending the Bridge Domain Out of the Fabric
Extending the Bridge Domain Out of the Fabric

+ Extending the Bridge Domain Out of the Fabric means the creation of a special EPG called an L2Out
+ Devices on the inside of the fabric are in the internal EPG, while
devices on the outside of the fabric are in the L2Out EPG
+ This means by default all traffic is dropped between the inside and
outside layer 2 networks
+ Contracts must be configured in order to whitelist traffic between the
EPGs
+ The connection between ACI and the Classical Ethernet Switch
should be an External Bridged Domain
+ The L2Out EPG is defined under Tenant > Networking >
External Bridged Networks > Create Bridged Outside
Implementing Cisco
Application Centric Infrastructure
Implementing Layer 3 Out
ACI Layer 3 External Network Connectivity

+ ACI supports a variety of Layer 3 connection types to the outside


+ Individual port, Port-Channel, Virtual Port-Channel
+ Routed Interface, Routed Sub-Interface, SVI
+ A Layer 3 Outside (L3Out) network configuration defines how the
ACI fabric connects to external layer 3 networks
+ L3Out supports static routing and dynamic routing with BGP,
OSPF, and EIGRP
ACI Layer 3 External Network Connectivity

+ The Leaf Switch(es) where the L3Out is configured are considered Border Leafs
+ The connection between ACI Leaf and the Router should be an
External Routed Domain
+ The L3Out EPG is defined under Tenant > Networking >
L3Outs > Create L3Out
+ L3Outs do not participate in endpoint learning
+ E.g. you can’t learn all /32 IPs of the Internet
Layer 3 Out and Endpoint Learning

+ L3Outs do not participate in endpoint learning


+ Instead, the EPG is classified based on the source subnet
defined under the L3Out External EPG
+ 0.0.0.0/0 means all sources
+ If there are multiple L3Outs, the longest match to the source
determines the EPG
ACI Endpoint Learning

+ ACI uses 3 constructs for learning and forwarding


+ Endpoint table
+ Routing Information Base (RIB)
+ ARP Table
ACI Endpoint Table

+ The Endpoint Table stores MAC addresses, /32 IPv4 Addresses, and /128 IPv6 Addresses
+ The primary lookup table used by ACI Leafs to find other
endpoints within the fabric
+ show endpoint from the Leaf CLI
What if the Endpoint is Unknown?

+ If an endpoint is unknown, the Leaf uses the Spine Proxy TEP address as the outer destination IP, and forwards the packet to a Spine
+ The Spine does a lookup in the COOP database, and forwards
traffic to the correct destination leaf VTEP
+ show coop internal info ip-db on the Spine CLI
+ This Leaf behavior is referred to as a zero-penalty forwarding decision,
because the traffic flow is always Leaf - Spine - Leaf anyway
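The lookup logic above reduces to: consult the local endpoint table first, and fall back to the spine proxy TEP when the destination is unknown. A toy model (all addresses are placeholders; the real COOP database lives on the spines):

```python
SPINE_PROXY_TEP = "10.0.0.65"  # placeholder anycast proxy address

def next_hop_tep(leaf_endpoint_table, dst_ip):
    # Known endpoint: tunnel straight to the destination leaf's VTEP.
    # Unknown endpoint: send to the spine proxy, which resolves it from
    # the COOP database - zero penalty, since the physical path is
    # Leaf - Spine - Leaf either way.
    return leaf_endpoint_table.get(dst_ip, SPINE_PROXY_TEP)

table = {"192.168.1.10": "10.0.184.95", "192.168.1.20": "10.0.184.93"}
print(next_hop_tep(table, "192.168.1.10"))  # known: direct to leaf VTEP
print(next_hop_tep(table, "192.168.1.99"))  # unknown: spine proxy
```

A single dictionary lookup with a default value captures why misses cost nothing extra in this topology.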
ACI Routing Information Base (RIB)

+ The ACI RIB, or Routing Table, is the next lookup after the
Endpoint Table
+ The routing table is still locally significant to each Leaf or Spine
+ Subnets are only deployed to a Leaf when an EPG in that Bridge
Domain is provisioned
+ Could be static or dynamic, more on this later…
+ Border Leafs are the devices that receive routes from outside
layer 3 networks, and propagate them to the rest of the fabric
+ How do we propagate routes? MP-BGP
+ Specifically VPNv4 and VPNv6 BGP like an MPLS VPN, but the
encapsulation is VXLAN instead of MPLS
Multiprotocol BGP (MP-BGP) and ACI

+ Multiprotocol BGP is the solution for distributing routing information across the fabric
+ BGP runs in Tenant infra VRF overlay-1 automatically once a
Fabric Pod Policy is configured
+ Spines are the BGP Route Reflectors
+ Leafs are the BGP RR Clients
Multiprotocol BGP (MP-BGP) and ACI

+ Pod Policy is configured under Fabric > Fabric Policies > Pods > Policy
Groups > Create Pod Policy Group
+ Edit the BGP Route Reflector Policy Default to specify the BGP
options
+ BGP ASN
+ Which Spines run as BGP RRs
+ Verifications from the CLI:
+ show bgp vpnv4 unicast summary vrf all
+ show bgp vpnv4 unicast vrf all
Implementing Cisco
Application Centric Infrastructure
Implementing Layer 3 Out with EIGRP
Implementing Cisco
Application Centric Infrastructure
Implementing Layer 3 Out with OSPF
Implementing Cisco
Application Centric Infrastructure
Implementing Layer 3 Out with BGP
Implementing Cisco
Application Centric Infrastructure
ACI and VMware Integration
APIC and VMM Integration

+ APIC can integrate with VMware vSphere and other virtualized environments in multiple ways
+ In vSphere, vCenter is the Virtual Machine Manager (VMM)
+ vCenter controls the virtual networking for vSphere ESXi Hosts in
a cluster
+ Called the Distributed Virtual Switch (DVS)
+ On the DVS, a port group is a group of ports with similar policy
requirements
+ APIC can push EPGs as port groups down to the virtual
environment
+ Result is that whitelisting now extends to the VM level
How VMM Integration Works

+ The link from the Leaf to the Hypervisor Host is in a VMM Domain
+ The VMM Domain contains the connectivity details for one or
more vCenter servers
+ E.g. IP address, login credentials
+ VMM Domain object calls a VLAN Pool which contains Dynamic
VLANs
+ An EPG calls the VMM Domain, but is not statically deployed to
the Leaf
How VMM Integration Works

+ APIC pushes the EPG to the vCenter server as a port group and
chooses a VLAN number from the dynamic pool
+ Virtual Machine joins the port group through vCenter
+ vCenter reports to APIC where the VM lives through CDP/LLDP
+ APIC deploys the EPG to the Leaf port attached to the Hypervisor
+ Depends on the Resolution and Deployment Immediacy setting
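The first step above, APIC choosing a VLAN from the dynamic pool, can be sketched as simple allocation bookkeeping (the pool range and EPG names are placeholders; the real allocator also scopes encaps per VMM domain):

```python
def allocate_vlan(pool, in_use, epg):
    # Hand each EPG the first free encap from the dynamic pool;
    # an EPG that already holds a VLAN keeps it
    if epg in in_use:
        return in_use[epg]
    for vlan in pool:
        if vlan not in in_use.values():
            in_use[epg] = vlan
            return vlan
    raise RuntimeError("dynamic VLAN pool exhausted")

pool = range(1000, 1004)  # placeholder dynamic range 1000-1003
in_use = {}
print(allocate_vlan(pool, in_use, "WEB_SERVERS"))  # first free VLAN
print(allocate_vlan(pool, in_use, "APP_SERVERS"))  # next free VLAN
print(allocate_vlan(pool, in_use, "WEB_SERVERS"))  # same VLAN again (sticky)
```

The sticky behavior matters: the port group keeps its encap for its lifetime, so VMs can vMotion between hosts without re-tagging.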
Resolution vs. Deployment Immediacy

+ Resolution Immediacy defines when the policy is downloaded to the Leaf
+ Resolution Immediacy
+ Immediate: EPG is downloaded to the leaf when the hypervisor
attaches to the VDS
+ On Demand: EPG is downloaded to the leaf when the hypervisor
attaches to the VDS and a VM is placed in the port group
+ Pre-provision: EPG is downloaded even before a hypervisor is
attached to the VDS
Resolution vs. Deployment Immediacy

+ Deployment Immediacy specifies when the policy is pushed to hardware CAM
+ Deployment Immediacy
+ Immediate: Policy is programmed to CAM as soon as it’s
downloaded to the Leaf
+ On Demand: Policy is programmed to CAM only when the first packet is
received through the data plane
Implementing Cisco
Application Centric Infrastructure
Service Graphs Overview
Service Graphs Overview

+ A Service Graph is how ACI integrates with and automates Firewalls, Load Balancers, and other Layer 4 – 7 devices
+ Service Graphs can use one of 3 management models
+ Network Policy Mode
+ Unmanaged Mode
+ Service Policy Mode
+ Managed Mode
+ Service Manager Mode
+ Hybrid Mode
Service Graphs in Network Policy Mode

+ In Network Policy Mode, or Unmanaged Mode, ACI automates the network configuration to redirect traffic to the L4-L7 Device, but does not configure the L4-L7 Device
+ This allows the Security Team, Application Team, etc. to still maintain
control of Firewall or Load Balancer policies, but not be bothered with
details of the network
Service Graphs in Service Policy Mode

+ In Service Policy Mode, or Managed Mode, ACI automates both the network
configuration and the L4-L7 Device configuration
+ Implies that ACI must have knowledge of L4-L7 Device vendor’s
implementation
+ Comes in the form of a plug-in to APIC called a Device Package
+ In Managed Mode, the Network Team is in charge of managing both the
network and the L4-L7 devices
Service Graphs in Service Manager Mode

+ In Service Manager Mode, or Hybrid Mode, ACI automates the network
configuration and minimal L4-L7 Device configuration
+ The Service Manager, which is an outside management tool, is in
charge of the specific configuration of the L4-L7 Device
+ E.g. Cisco Firepower Management Center (FMC)
L4-L7 Device Deployment Modes

+ Service Graphs support 3 device deployment modes
+ GoTo
+ GoThrough
+ One-arm
+ In GoTo mode, the default gateway of the servers is the L4-L7 device
+ Used for devices in Routed Mode
+ In GoThrough mode, the default gateway of the servers is the inside
Bridge Domain address
+ Used for devices in Transparent Mode or Bridged Mode
+ In One-arm mode, the default gateway is the server-side BD address
+ Used for single attached Load Balancers
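A quick way to keep the three modes straight is as a lookup of where the servers' default gateway sits in each case. The snippet below is only a study aid restating the bullets above, not ACI configuration.

```python
# Study aid: servers' default gateway per L4-L7 device deployment mode,
# restating the slide bullets above.
deployment_modes = {
    "GoTo":      {"device_mode": "Routed",
                  "server_gateway": "the L4-L7 device itself"},
    "GoThrough": {"device_mode": "Transparent / Bridged",
                  "server_gateway": "Bridge Domain address"},
    "One-arm":   {"device_mode": "Routed (single-attached Load Balancer)",
                  "server_gateway": "server-side BD address"},
}

for mode, detail in deployment_modes.items():
    print(f"{mode}: gateway = {detail['server_gateway']}")
```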
Service Graphs and Bridge Domain Configuration Knobs

+ ACI does not flood traffic like a traditional bridge/switch
+ Endpoints are learned in the data plane and registered to the COOP
database in the Spines
+ If traffic is received for an unknown destination, it is punted to the Spine
+ Zero-penalty forwarding decision, because the traffic flow is
always Leaf - Spine - Leaf anyway
+ Forwarding: Optimize vs. Custom on the Bridge Domain
+ For Service Graphs, what if the L4-L7 Device is a Transparent Bridge?
+ Flooding would be required to populate the device’s forwarding table
+ This is the reason to modify the Forwarding knob on the Bridge Domain
+ Set to L2 Unknown Unicast: Flood and ARP Flooding: Enabled
+ Technically Unicast Routing can be set to Disabled
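On the APIC these knobs are attributes of the Bridge Domain object (class fvBD): unkMacUcastAct toggles hardware-proxy vs. flood for L2 unknown unicast, arpFlood enables ARP flooding, and unicastRoute can disable routing. A sketch tuned for a transparent L4-L7 device follows; the BD name is hypothetical.

```python
# Sketch: a Bridge Domain tuned for a transparent (GoThrough) L4-L7 device.
# Class fvBD; the BD name is hypothetical.
bd_for_transparent_fw = {
    "fvBD": {
        "attributes": {
            "name": "FW-Inside-BD",
            "unkMacUcastAct": "flood",  # L2 Unknown Unicast: Flood (not proxy)
            "arpFlood": "yes",          # ARP Flooding: Enabled
            "unicastRoute": "no",       # Unicast Routing may be disabled
        }
    }
}
```

Flooding here lets the transparent device see unknown-unicast and ARP traffic and populate its forwarding table, which the default hardware-proxy behavior would prevent.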
Implementing Service Graphs

+ Import the device package under L4-L7 Services tab


+ Only if running in Managed or Hybrid modes
+ Create the L4-L7 Device
+ Tenant > Services > L4-L7 > Devices
+ Create the Service Graph Template
+ Tenant > Services > L4-L7 > Service Graph Templates
+ Consume the Service Graph in a Contract
+ Tenant > Application Profiles > Topology
+ Drag and Drop the L4-L7 Cloud
+ Tenant > Contracts > Standard > Subject > L4-L7 Service Graph
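The final step above, consuming the graph in a contract, corresponds to a relation object (class vzRsSubjGraphAtt) under the contract subject, whose tnVnsAbsGraphName names the Service Graph Template. A sketch with hypothetical contract, subject, and graph names:

```python
# Sketch: attach a Service Graph Template to a contract subject.
# Class vzRsSubjGraphAtt; contract/subject/graph names are hypothetical.
subject_with_graph = {
    "vzSubj": {
        "attributes": {"name": "http-subject"},
        "children": [
            # Relation from the subject to the abstract Service Graph Template
            {"vzRsSubjGraphAtt": {"attributes": {
                "tnVnsAbsGraphName": "ASA-Firewall-Graph"}}},
        ],
    }
}
# Posted under the contract, e.g. uni/tn-T1/brc-Web-to-App (hypothetical DN)
```

Once the subject carries this relation, traffic permitted by the contract is redirected through the rendered Service Graph rather than forwarded directly between the EPGs.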
Implementing Cisco
Application Centric Infrastructure
Implementing Service Graphs in Unmanaged Mode
Service Graphs in Network Policy Mode

+ In Network Policy Mode, or Unmanaged Mode, ACI automates the network
configuration to redirect traffic to the L4-L7 Device, but does not
configure the L4-L7 Device
+ This allows the Security Team, Application Team, etc. to still maintain
control of Firewall or Load Balancer policies, but not be bothered with
details of the network
Implementing Service Graphs

+ Import the device package under L4-L7 Services tab


+ Only if running in Managed or Hybrid modes
+ Create the L4-L7 Device
+ Tenant > Services > L4-L7 > Devices
+ Create the Service Graph Template
+ Tenant > Services > L4-L7 > Service Graph Templates
+ Consume the Service Graph in a Contract
+ Tenant > Application Profiles > Topology
+ Drag and Drop the L4-L7 Cloud
+ Tenant > Contracts > Standard > Subject > L4-L7 Service Graph
Implementing Cisco
Application Centric Infrastructure
Implementing Service Graphs in Managed Mode
Service Graphs in Service Policy Mode

+ In Service Policy Mode, or Managed Mode, ACI automates both the network
configuration and the L4-L7 Device configuration
+ Implies that ACI must have knowledge of L4-L7 Device vendor’s
implementation
+ Comes in the form of a plug-in to APIC called a Device Package
+ In Managed Mode, the Network Team is in charge of managing both the
network and the L4-L7 devices
Implementing Service Graphs

+ Import the device package under the L4-L7 Services tab
+ Only if running in Managed or Hybrid modes
+ Create the L4-L7 Device
+ Tenant > Services > L4-L7 > Devices
+ Create the Service Graph Template
+ Tenant > Services > L4-L7 > Service Graph Templates
+ Consume the Service Graph in a Contract
+ Tenant > Application Profiles > Topology
+ Drag and Drop the L4-L7 Cloud
+ Tenant > Contracts > Standard > Subject > L4-L7 Service Graph
Implementing Cisco
Application Centric Infrastructure
Course Conclusion
CCIEx4 #8593 & CCDE #2013::13
Recommended Reading

+ CCNP Data Center Application Centric Infrastructure DCACI 300-620
Official Cert Guide