
CCNA DC Networking Fundamentals Slides


Cisco CCNA Data Center

Data Center Networking Fundamentals

March 16, 2017


Learning@Cisco
Today’s Speakers
Ozden Karakok
Technical Leader from the Data Center Products and Technologies team in the Technical
Assistance Center (TAC). She has been with Cisco Systems for 17 years and specializes in
storage area and data center networks. She holds CCIE certifications in Routing & Switching,
SNA/IP, and Storage. A frequent speaker at Cisco and data center events, Ozden holds a degree
in computer engineering from Istanbul Bogazici University. Currently, she is focused on
application centric infrastructure (ACI) and software-defined storage (SDS). She is also the
lead for Cisco Live Europe Instructor-Led and Walk-in Self-Paced Labs. Connect with Ozden on
Twitter via @okarakok

Matt Saunders
Community Manager for Cisco Learning Network Data Center and Security
Agenda
Overview of Cisco Data Center’s Physical Infrastructure Technology

Data Center Networking Concepts

Study Materials and Next Steps

Live Q&A

Overview of Cisco Data Center’s Physical Infrastructure Technology
Data Centre Infrastructure (3 Layers)
[Diagram: three-layer data center design]
• DC Edge / WAN Edge Layer: WAN edge routers and the SAN core (SAN A / SAN B director-class switches for LAN & SAN)
• Aggregation & Services Layer: vPC+ and FabricPath at the L3/L2 boundary, plus network services
• Access Layer: End-of-Row and Top-of-Rack switching, UCS, FCoE, and the SAN edge (SAN A / SAN B)


“The number of transistors
incorporated into a chip
will approximately double
every 24 months …”

“Moore’s Law” - 1975


Moore’s Law
It’s all about the Economics
• Increased function, efficiency
• Reduced costs, power
• ~ 1.6 x increase in gates between
process nodes

The new generation of Nexus 9000 leverages 16nm FF+ (FinFET):
• BCOM 40nm - 2013
• Cisco 28nm - 2014
• BCOM 28nm - 2016
• Cisco 16FF+ - 2016
• Intel 14nm - 2016
http://en.wikipedia.org/wiki/Semiconductor_device_fabrication
SerDes: Serializer + Deserializer
• SerDes clocking increases:
  • 10.3125G (10G, 40G)
  • 25.78125G (25G/50G/100G) - 2016

Multi Lane Distribution (MLD)

• 40GE/100GE interfaces have multiple lanes (coax cables, fibers, wavelengths)
• MLD provides a simple (common) way to map 40G/100G to physical interfaces of different lane widths
• Parallel lanes: 4 x 10G (40G) shifts to 4 x 25G (100G), i.e. 40-GbE is backed by 10G SerDes and 100-GbE by 25G SerDes (see the lane-rate sketch below)
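To make the lane arithmetic above concrete, here is a minimal sketch in Python (the 64b/66b factor is the standard 10G/25G Ethernet line coding; the function name is ours, everything else follows the figures on this slide):

```python
# Minimal sketch of the MLD lane arithmetic. Raw SerDes lane rates include
# 64b/66b encoding overhead, which is why a 10.3125G lane carries 10 Gb/s of
# data and a 25.78125G lane carries 25 Gb/s.

ENCODING = 64 / 66  # 64b/66b line coding used by 10G/25G Ethernet SerDes

def port_data_rate_gbps(lanes: int, lane_rate_gbaud: float) -> float:
    """Usable data rate of a port built from identical parallel SerDes lanes."""
    return lanes * lane_rate_gbaud * ENCODING

print(port_data_rate_gbps(4, 10.3125))   # 40.0  -> 4 x 10G lanes = 40GE
print(port_data_rate_gbps(4, 25.78125))  # 100.0 -> 4 x 25G lanes = 100GE
```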

Metcalfe, Moore and ASIC Pin I/O Rates
The Switch Architectural Challenge
• The rate of change for overall network bandwidth (Metcalfe's Law) is growing faster than Moore's Law (transistor density), which in turn is faster than the rate of change for I/O from the ASIC to off-chip components (pin I/O speed, capacity from the ASIC to external components such as DRAM)
• Pressure from the disparity in these rates of change has required a new architectural balance

[Chart: Technology Impacts on Switch Designs: relative growth over time of Network Bandwidth, Transistor Density and Pin (I/O) Speed]

Relative growth (1990 = 1):

Year          1990   2000    2010     2016
Switch BW        1     67   2,667   30,000
Moore's Law      1     32   1,024    8,192
DRAM             1    5.6      32     90.5
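A quick way to sanity-check the Moore's Law row above (a minimal sketch assuming a 1990 baseline and the 24-month doubling quoted earlier in the deck):

```python
# Minimal sketch: Moore's Law growth factor relative to a 1990 baseline,
# assuming a doubling every 24 months as quoted on the earlier slide.

def moores_law_factor(start_year: int, year: int, months_per_doubling: int = 24) -> float:
    doublings = (year - start_year) * 12 / months_per_doubling
    return 2 ** doublings

for year in (1990, 2000, 2010, 2016):
    print(year, moores_law_factor(1990, year))  # 1, 32, 1024, 8192
```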
Switching Architecture Changes
Shifting of Internal Architecture

[Diagram: design shifts resulting from increasing gate density and bandwidth: from shared-bus architectures (DBUS/RBUS/EOBC), to crossbar fabrics interconnecting linecards, to meshes of Switch-on-Chip (SOC) components, across the 10/100M, 100M/1G, 1G/10G and 10G/100G generations]

Switching Architecture Changes
Consolidation of Functions onto Fewer Components

[Diagram: design shifts resulting from increasing gate density and bandwidth: linecards built from many discrete components (fabric interfaces, arbitration, distributed forwarding cards, L2/L3 forwarding engines, port ASICs) consolidate onto a handful of SOCs, while linecard density grows from 32 x 10G to 48 x 10G to 64 x 100G ports]
Switch On Chip (SOC)
It is a full multi-stage switch on an ASIC

[Diagram: SOC ASIC architecture: multiple slices (Slice 1 … Slice N) with their IO components, interconnected by a central cross-connect, plus a statistics network and global components]
Modular Nexus 9500
A Clos-Based SOC Architecture

• Leverages Switch-on-Chip (SOC) based components
• Non-blocking leaf-and-spine Clos network inside the switch
Responding to Fast Market Changes
Sharing Platforms Among Different Architectures
• Common hardware platforms for ACI and NX-OS fabric

• Sharing platform with UCS FI


• 3rd-generation FI is based on the 1st-generation 9300
• 4th-generation FI will be based on the 2nd-generation 9300EX
Optics Pluggable Multispeed Interfaces
SFP & QSFP
SFP pluggable options:
• 1G SFP
• 10G SFP+, Twinax, AOC
• 25G SFP+, Twinax, AOC

QSFP pluggable options:
• 1G SFP (via QSA)
• 10G SFP+, Twinax, AOC (via QSA)
• 25G SFP+, Twinax, AOC (via SLIC)
• 40G QSFP, Twinax, AOC
• 50G Twinax, AOC (via SLIC)
• 100G QSFP, Twinax, AOC
40G BiDi Optics Preserve Existing MM 10G Cabling

[Diagram: 10G SFP-10G-SR links use one fiber pair of the MM fiber plant with LC patch cords; QSFP-40G-SR4 needs MPO connectors and four used fiber pairs; QSFP-40G-SR-BD runs 40G over a single used fiber pair with LC patch cords, reusing the existing 10G cabling]

• Distance up to 125m with OM4


Data Center Networking
Concepts
Beyond STP, from Networks to Fabrics
History Lesson: Spanning Tree

• Spanning Tree, introduced around 1985, prevents loops
• 32 years ago, we also saw:
  • Windows 1.0
  • DNS come out of academia
  • The first Nintendo Entertainment System
• Successfully deployed for some time
• But for several years now, most DC designs have been built to work around STP
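For reference, root-bridge election in classic spanning tree comes down to the lowest bridge ID (priority, then MAC address). A minimal sketch, with made-up switch names and values:

```python
# Minimal sketch of 802.1D root-bridge election: the switch with the lowest
# bridge ID (priority first, MAC address as tie-breaker) becomes the root.
# Switch names, priorities and MAC addresses are made up for illustration.

bridges = [
    {"name": "access-1", "priority": 32768, "mac": "00:1a:2b:3c:4d:02"},
    {"name": "access-2", "priority": 32768, "mac": "00:1a:2b:3c:4d:01"},
    {"name": "agg-1",    "priority": 4096,  "mac": "00:1a:2b:3c:4d:aa"},
]

root = min(bridges, key=lambda b: (b["priority"], b["mac"]))
print(root["name"])  # agg-1: lowest priority wins, MAC breaks ties
```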
Data Center “Fabric” Journey (Standalone)

[Diagram: aggregation/access design: Layer-3 with HSRP at the aggregation layer, Spanning-Tree across the Layer-2 links down to the access switches, with bare-metal and hypervisor hosts attached at Layer-2]
Virtual Port Channel (vPC)

[Diagram: a vPC domain of two switches presenting a single port channel to a host or switch]

• vPC invented to overcome STP limitations
• Builds on IEEE standard link aggregation (802.3ad, 2000), extended across two switches; see the flow-hashing sketch below
• Not perfect, but a good workaround
• STP is still there on every link
• Human error and misconfiguration can still cause issues
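The flow-hashing sketch referenced above: a port channel appears to STP as one logical link, and traffic is spread across member links by hashing packet header fields so each flow stays on a single link. The hash inputs, hash function and link count below are illustrative only; actual NX-OS load-balancing methods are configurable and platform specific:

```python
# Conceptual sketch of port-channel load balancing: header fields are hashed
# to select one member link, so a given flow stays in order on one link while
# different flows spread across the bundle. Not the actual NX-OS algorithm.
import hashlib

def pick_member_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                     num_links: int) -> int:
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# Two flows between the same pair of hosts may land on different member links.
print(pick_member_link("10.0.0.1", "10.0.0.2", 33333, 80,  num_links=2))
print(pick_member_link("10.0.0.1", "10.0.0.2", 44444, 443, num_links=2))
```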
Virtual Port Channel (vPC) “Fabric”

[Diagram: dual-sided (back-to-back) vPC between vPC Domain 1 and vPC Domain 2, with hosts or switches attached below, forming a “mini-fabric”]

• vPC northbound & southbound
• More efficient than native STP, but STP is still running
• Another good workaround
• Configuration can become complex as switch counts grow
• vPC makes two switches look as one… but what about 4 switches?
Multi-path Fabric Based Designs - FabricPath

[Diagram: FabricPath (FP) fabric with the L3/L2 boundary at a border leaf connecting to the MAN/WAN]

• Natural migration from vPC
• MAC-in-MAC encapsulation
• Easy to turn on (Nexus 5/6/7K)
• No STP within the fabric; BPDUs don’t traverse the fabric
• Distributed L3 gateway at the edge, “VLAN anywhere” notion
• TRILL = standards-based, limited capabilities
• FabricPath = Cisco proprietary features
The Leaf / Spine Topology (Clos* Network)

[Diagram: four spines fully meshed to a row of leaves]

• Wide ECMP: Unicast or Multicast
• Uniform Reachability
• Deterministic Latency
• High Redundancy
  • On Node or Link Failure

*Clos, Charles (1953) "A study of non-blocking switching networks"


A Scale Out Architecture

More Spine – More Bandwidth – More Resiliency
More Leaf – More Ports – More Capacity

• Leaf
  • Smallest Operational Entity
• Spines
  • Wide vs. Big
• Uplinks
  • Symmetric to all Spines or Pods
• SAYG: Scale as You Grow (see the sketch below)
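The sketch referenced above: back-of-the-envelope scale-out math for a leaf/spine fabric. The port counts, link speeds and example values are illustrative assumptions, not tied to any specific Nexus model:

```python
# Minimal sketch of leaf/spine scale-out math: adding spines adds bandwidth
# (more ECMP paths, lower oversubscription); adding leaves adds edge ports.
# All values below are illustrative assumptions.

def fabric_capacity(num_spines: int, num_leaves: int, leaf_downlinks: int,
                    downlink_gbps: int, uplink_gbps: int) -> dict:
    # Each leaf has one uplink to every spine (symmetric to all spines).
    uplink_bw_per_leaf = num_spines * uplink_gbps
    downlink_bw_per_leaf = leaf_downlinks * downlink_gbps
    return {
        "ecmp_paths_between_leaves": num_spines,
        "edge_ports": num_leaves * leaf_downlinks,
        "oversubscription": downlink_bw_per_leaf / uplink_bw_per_leaf,
    }

# Example: 4 spines, 7 leaves, 48 x 10G server ports per leaf, 40G uplinks.
print(fabric_capacity(num_spines=4, num_leaves=7, leaf_downlinks=48,
                      downlink_gbps=10, uplink_gbps=40))
# {'ecmp_paths_between_leaves': 4, 'edge_ports': 336, 'oversubscription': 3.0}
```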
The Super-Spine

[Diagram: two pods (POD 1 and POD 2), each with its own spines and leaves, interconnected through a super-spine tier]
Data Center Fabric Properties

• Any Subnet, Anywhere, Rapidly
  • Any Network on Any Leaf
• Reduced Failure Domain
• Any Default Gateway on Any Leaf - Distributed
• Extensible Scale and Resiliency


Overlay Based Data Center: Fabrics

[Diagram: an overlay of VTEPs on the leaves of a leaf/spine fabric]

• Mobility
• Segmentation
• Scale
• Automated and Programmable
• Abstracted Consumption Model
• Layer-2 and Layer-3 Service
• Physical and Virtual Workloads
Overlay Based Data Center: Edge Devices

Network Overlays (VTEPs on routers/switches):
• Router/Switch End-Points
• Protocols for Resiliency/Loops
• Traditional VPNs
• VXLAN, OTV, VPLS, LISP, FP

Host Overlays (VTEPs in the hypervisors):
• Virtual End-Points only
• Single Admin Domain
• VXLAN, NVGRE, STT

Hybrid Overlays (VTEPs in both switches and hypervisors):
• Physical and Virtual
• Resiliency and Scale
• Cross-Organizations/Federation
• Open Standards

(BRKDCT-3378)
Overlay Taxonomy - Underlay

[Diagram: the underlay is the leaf/spine network itself: Layer-3 interfaces and routing peerings between spines and leaf edge devices, with LAN segments connecting hypervisors (virtual servers) and bare-metal (physical) servers to the leaves]
Overlay Taxonomy - Overlay

[Diagram: VTEPs on the leaf edge devices build tunnel encapsulations (the VNI namespace) across the underlay; LAN segments connect hypervisors (virtual servers) and bare-metal (physical) servers to the VTEPs]

VTEP: VXLAN Tunnel End-Point
VNI/VNID: VXLAN Network Identifier
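To make the VTEP/VNI terms concrete, here is a minimal sketch of the 8-byte VXLAN header a VTEP prepends to the original Ethernet frame before sending it (inside UDP, typically destination port 4789) across the underlay; the VNI value and inner frame are made up:

```python
# Minimal sketch of VXLAN encapsulation (RFC 7348): a VTEP wraps the original
# L2 frame in an 8-byte VXLAN header carrying a 24-bit VNI, transported over
# UDP between VTEP addresses in the underlay.
import struct

def vxlan_header(vni: int) -> bytes:
    flags = 0x08 << 24                  # "I" bit set: VNI field is valid
    vni_field = (vni & 0xFFFFFF) << 8   # 24-bit VNI, low 8 bits reserved
    return struct.pack("!II", flags, vni_field)

inner_frame = b"\x00" * 60              # placeholder for the original frame
packet = vxlan_header(10010) + inner_frame
print(len(vxlan_header(10010)), packet[:8].hex())  # 8-byte header, VNI 10010
```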
Application Centric Infrastructure Components
Consider the Interaction between the endpoints

[Diagram: an application profile (external network, Web, App and DB tiers connected through QoS, filter and service elements) deployed on the ACI fabric, a non-blocking, penalty-free overlay, managed by an APIC cluster]
Enter Stateless Application Policies

[Diagram: an application profile composed of EPG Web, EPG App and EPG DB, connected by filters and service chains, with QoS applied at each tier]

End Points:
• Single or Device Groups
• Virtual / Physical
• Single/Multiple Subnets
• Health Monitoring

Network & Security:
• Quality of Service (QoS)
• Contracts & Filters (TCP/UDP)
• Redirection
• SPAN & Monitoring

L4 – L7 Services:
• Firewalls
• Load Balancers
• Orchestration & Management
• Network Analysis

Stateless filtering between End Point Groups (EPGs) may eliminate the need for some firewalls within the data center. Contracts define what an EPG exposes to other application tiers and how; any communication not explicitly allowed is denied. A conceptual sketch follows.
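The conceptual sketch mentioned above: EPG-to-EPG communication modeled as an explicit allow-list with default deny. The EPG names, protocols and ports are made up, and this is not the APIC object model or API, only an illustration of contract semantics:

```python
# Conceptual sketch of ACI contract semantics: traffic between EPGs is denied
# unless a contract explicitly allows it (allow-list, default deny).
# EPG names, protocols and ports are illustrative only.

contracts = {
    # (consumer EPG, provider EPG): allowed (protocol, port) filters
    ("web", "app"): {("tcp", 8080)},
    ("app", "db"):  {("tcp", 5432)},
}

def is_allowed(src_epg: str, dst_epg: str, proto: str, port: int) -> bool:
    return (proto, port) in contracts.get((src_epg, dst_epg), set())

print(is_allowed("web", "app", "tcp", 8080))  # True: a contract permits it
print(is_allowed("web", "db",  "tcp", 5432))  # False: no contract, default deny
```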
Cisco Data Centre Networking Strategy:
Providing Choice in Automation and Programmability

Application Centric Infrastructure:
• Turnkey integrated solution with security, centralized management, compliance and scale
• Automated application-centric policy model with embedded security
• Broad and deep ecosystem

Programmable Fabric:
• VXLAN BGP EVPN, standards-based
• 3rd-party controller support
• Cisco controller (VTS) for software overlay provisioning and management across N2K-N9K

Programmable Network:
• Modern NX-OS with enhanced NX-APIs; an example NX-API call is sketched below
• DevOps toolset used for network management (Puppet, Chef, Ansible, etc.)
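The NX-API example mentioned above: a rough sketch that runs a show command through NX-API's JSON-RPC interface. The switch hostname and credentials are placeholders, and the endpoint and payload format should be verified against the NX-API documentation for your NX-OS release:

```python
# Rough sketch of calling NX-API (JSON-RPC over HTTP/S) to run a show command.
# Hostname and credentials are placeholders; check the payload format against
# the NX-API documentation for your NX-OS release before relying on it.
import requests

url = "https://nxos-switch.example.com/ins"   # placeholder switch address
payload = [{
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {"cmd": "show version", "version": 1},
    "id": 1,
}]

response = requests.post(url, json=payload,
                         headers={"content-type": "application/json-rpc"},
                         auth=("admin", "password"),  # placeholder credentials
                         verify=False)                # lab-only: skip TLS check
print(response.json())
```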
Certification and Training
Resources
Get Started Today
• Join the Cisco Learning Network Data Center
community
• Pick your preferred method of training:
• Instructor-led training: DCICN and DCICT
• CCNA Data Center Official Cert Guides
• Cisco Learning Network certification
resources (see slide 23)
• Get certified
Cisco Press CCNA Data Center Official
Certification Guides
Launch special: save 35% (plus free U.S. shipping)
CISCOPRESS.COM | Code: CCNADC35
See CISCOPRESS.COM for the latest specials
CCNA Data Center Training Courses
Acronym Version Course Name
DCICN 6.0 Introducing Cisco Data Center Networking
DCICT 6.0 Introducing Cisco Data Center Technologies

• Instructor-led training
• DCICN and DCICT
• Extensive hands-on learning: configuration, usage
• Taught by certified Cisco Learning Partners specializing in data center
• Good option for focused learning with instructor expertise
Q&A Session…
