7. ACI APIC
APRIL 7, 2021

Cisco Application Policy Infrastructure Controller

The Cisco Application Policy Infrastructure Controller (Cisco APIC) is the main architectural component of the Cisco ACI solution. It is the unified point of automation and management for the Cisco ACI fabric, policy enforcement, and health monitoring.

Cisco APIC configuration | Part number | Description

Medium | APIC-M3/M4 | APIC with medium-size CPU, hard drive, and memory configurations (up to 1200 edge ports)

Large | APIC-L3 | APIC with large CPU, hard drive, and memory configurations (more than 1200 edge ports)

Medium cluster | APIC-CLUSTER-M3/M4 | Cluster of 3 APIC-SERVER-M3 with medium-size CPU, hard drive, and memory configurations (up to 1200 edge ports)

Large cluster | APIC-CLUSTER-L3 | Cluster of 3 APIC-SERVER-L3 with large CPU, hard drive, and memory configurations (more than 1200 edge ports)

XS cluster | APIC-CLUSTER-XS | 1 APIC-M3 with medium-size CPU, hard drive, and memory, plus 2 virtual APICs. The XS cluster is only available as part of the mini ACI fabric bundle, part number ACI-C9332-VAPIC-B1

Medium (spare) | APIC-M3= | APIC with medium-size CPU, hard drive, and memory configurations (up to 1200 edge ports)

Large (spare) | APIC-L3= | APIC with large CPU, hard drive, and memory configurations (more than 1200 edge ports)

APIC Controller Hardware:

For the APIC controller, Cisco uses UCS C-Series servers. The specifications are:

UCS C-Series Server: C220 M3 (used as generation 1)
UCS C-Series Server: C220 M4 (used as generation 2)

ACI DC Sizing:

Up to 80 leaf switches: use a 3-APIC cluster
Up to 300 leaf switches: use a 5-APIC cluster
Up to 400 leaf switches: use a 7-APIC cluster

Currently, an APIC cluster uses between 3 and 7 controllers (sized N+2), but the roadmap is to scale this up to between 3 and 31.
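
As a quick illustration of the sizing rule above, here is a minimal sketch (not a Cisco tool) that returns the recommended cluster size for a given leaf count; the thresholds simply mirror the figures listed in this post.

def recommended_apic_cluster_size(leaf_count: int) -> int:
    """Return the recommended number of APICs for a given leaf-switch count.

    Thresholds follow the sizing guidance above: 3 APICs up to 80 leaves,
    5 APICs up to 300 leaves, 7 APICs up to 400 leaves.
    """
    if leaf_count <= 80:
        return 3
    if leaf_count <= 300:
        return 5
    if leaf_count <= 400:
        return 7
    raise ValueError("More than 400 leaf switches exceeds the sizing table above")


print(recommended_apic_cluster_size(120))  # -> 5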

APIC Cluster - Sharding

•The APIC's management tree is divided into DB units called shards to allow load balancing and scaling
•Shards are assigned to appliances by a static shard layout
•Each DN is mapped to a shard number (0-31)
•Data that is accessed together is placed in the same shard
•All data for an application profile maps to one particular shard
•Topology data and switch stats are sharded by switch ID
•Appliance vector (AV): vector of APICs with their address and state
•Replica vector (RV): vector of shards/replicas with state; there is a leader for each shard
•Fabric node vector (FNV): vector of switch nodes with address and state
•Check these using acidiag [avread|rvread|fnvread], and see the sketch of shard mapping below
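
To make the idea of sharding concrete, here is a minimal sketch, not the actual APIC algorithm, of how a distinguished name (DN) could be mapped deterministically to one of 32 shards; the real shard layout is static and internal to the APIC, and the DNs below are only illustrative.

import hashlib

NUM_SHARDS = 32  # shard numbers 0-31, as described above


def shard_for_dn(dn: str) -> int:
    """Map a distinguished name to a shard number (0-31).

    Illustrative hash-based mapping, not Cisco's actual layout. A stable
    hash means the same DN always lands on the same shard, so data that
    belongs together (e.g. one application profile) can be kept in a
    single shard by hashing its common prefix.
    """
    digest = hashlib.sha256(dn.encode("utf-8")).digest()
    return digest[0] % NUM_SHARDS


print(shard_for_dn("uni/tn-Prod/ap-WebApp"))    # same DN -> same shard every time
print(shard_for_dn("topology/pod-1/node-101"))  # topology data keyed by switch DN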

Some useful Controller Commands

acidiag avread

acidiag fnvread
acidiag rvread

APIC - Cluster Formation

•APICs are connected to leaf switches on the infra network
•The IP of each APIC is learned via LLDP and distributed through IS-IS
•On discovering a new APIC, the controllers start sending periodic heartbeats (HB)
•APICs that are heartbeating with each other are said to be in a cluster; see the sketch below
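
The following is a minimal, illustrative sketch (not APIC code) of the heartbeat idea described above: a controller is treated as a cluster member only while its heartbeats keep arriving within a timeout. The timeout value is an assumption for the example.

import time

HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat before a peer is considered down (illustrative)

last_heartbeat = {}  # controller ID -> timestamp of last heartbeat received


def record_heartbeat(apic_id):
    """Record that a heartbeat was just received from the given APIC."""
    last_heartbeat[apic_id] = time.monotonic()


def cluster_members():
    """Return the APIC IDs whose heartbeats are still fresh."""
    now = time.monotonic()
    return [apic_id for apic_id, seen in last_heartbeat.items()
            if now - seen <= HEARTBEAT_TIMEOUT]


record_heartbeat(1)
record_heartbeat(2)
record_heartbeat(3)
print(cluster_members())  # [1, 2, 3] while all heartbeats are fresh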

APIC Cluster Information: CLI


admin@pod2-apic1:tmp> show controller
controllers:

ID  NAME        INFRASTRUCTURE-IP  IN-BAND-MANAGEMENT-IP  OUT-OF-BAND-MANAGEMENT-IP  UP-TIME          STATUS
1   pod2-apic1  10.0.0.1           0.0.0.0                10.48.16.171               26:01:54:31.000  in-service
2   pod2-apic2  10.0.0.2           0.0.0.0                10.48.16.172               26:01:35:44.000  in-service
3   pod2-apic3  10.0.0.3           0.0.0.0                10.48.16.173               26:01:17:00.000  in-service
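
The same cluster information can also be pulled programmatically from the APIC REST API. Below is a hedged sketch using Python's requests library; the hostname and credentials are placeholders, and it assumes the standard aaaLogin endpoint and the topSystem class (whose role attribute distinguishes controllers from switches).

import requests

APIC = "https://apic.example.com"     # placeholder APIC address
USER, PASSWORD = "admin", "password"  # placeholder credentials

session = requests.Session()
session.verify = False  # lab only; use proper certificates in production

# Authenticate against the APIC REST API (aaaLogin returns a session cookie).
login_payload = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login_payload).raise_for_status()

# Query topSystem objects whose role is "controller" to list the APICs.
resp = session.get(
    f"{APIC}/api/node/class/topSystem.json",
    params={"query-target-filter": 'eq(topSystem.role,"controller")'},
)
resp.raise_for_status()

for obj in resp.json()["imdata"]:
    attrs = obj["topSystem"]["attributes"]
    print(attrs["id"], attrs["name"], attrs["address"], attrs["state"])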

Why do we need a minimum of 3 APICs?

Since leader election is distributed, a majority of replicas must agree on the leader. So the minimum number of replicas/APICs needed to continue operating through a single failure is 3.
•1 APIC: no fault tolerance; data is lost on a single failure
•2 APICs: writes become unavailable on a single failure; recovery is possible once the failed APIC is replaced
•3 APICs: if 1 APIC is lost, the other two can elect a new leader and continue writes; if 2 APICs are lost, the cluster drops to minority behavior (no writes, read only)
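
As a quick illustration of the majority rule, the sketch below (not a Cisco tool) computes the quorum size and the number of APIC failures a cluster of a given size can tolerate while still accepting writes.

def quorum(cluster_size: int) -> int:
    """Smallest number of replicas that still forms a majority."""
    return cluster_size // 2 + 1


def tolerated_failures(cluster_size: int) -> int:
    """Number of APICs that can fail while a write majority survives."""
    return cluster_size - quorum(cluster_size)


for size in (1, 2, 3, 5, 7):
    print(f"{size} APICs: quorum={quorum(size)}, "
          f"tolerated failures={tolerated_failures(size)}")

# 3 APICs: quorum=2, tolerated failures=1 -- which is why 3 is the minimum
# cluster size that keeps accepting writes after a single APIC failure.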

APIC Fabric Failure Scenarios:

All APICs in ACI work on the majority/minority principle.
If one APIC is down: the other two are still active and the APIC cluster retains a majority, so read and write operations continue to work properly. You can use a hot-standby APIC for quick manual recovery.
If two APICs are down: the cluster is in the minority and the single remaining APIC goes into read-only mode. The ACI fabric keeps working properly, but no configuration changes are possible.
If more than two APICs are down, you should immediately call TAC to merge the external configuration from backup, and restore the fabric using the command: trigger fabric recovery.
Each APIC controller has two interfaces to be connected to the fabric, and it is recommended that these two interfaces be connected to two separate leaf switches. All APICs must be connected to different leafs.

APIC Cluster Fault Tolerance

APIC Images

Link to download the image:

https://software.cisco.com/download/home/285968390/type/286278832/release/4.2(7f)
