
Contents

1. Cover Page
2. About This eBook
3. Title Page
4. Copyright Page
5. About the Authors
6. About the Technical Reviewers
7. Dedications
8. Acknowledgments
9. Contents at a Glance
10. Reader Services
11. Contents
12. Icons Used in This Book
13. Command Syntax Conventions
14. Introduction

1. Goals and Methods


2. Who Should Read This Book?
3. Strategies for Exam Preparation
4. The Companion Website for Online Content Review
5. How This Book Is Organized
6. Certification Exam Topics and This Book

15. Figure Credits


16. Chapter 1. Introduction to Cisco DevNet Associate
Certification

1. Do I Know This Already?


2. Foundation Topics
3. Why Get Certified
4. Cisco Career Certification Overview
5. Cisco DevNet Certifications
6. Cisco DevNet Overview
7. Summary

17. Chapter 2. Software Development and Design

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Software Development Lifecycle
4. Common Design Patterns
5. Linux BASH
6. Software Version Control
7. Git
8. Conducting Code Review
9. Exam Preparation Tasks
10. Review All Key Topics
11. Define Key Terms

18. Chapter 3. Introduction to Python

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Getting Started with Python
4. Understanding Python Syntax
5. Data Types and Variables
6. Input and Output
7. Flow Control with Conditionals and Loops
8. Exam Preparation Tasks
9. Review All Key Topics
10. Define Key Terms
11. Additional Resources

19. Chapter 4. Python Functions, Classes, and Modules

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Python Functions
4. Using Arguments and Parameters
5. Object-Oriented Programming and Python
6. Python Classes
7. Working with Python Modules
8. Exam Preparation Tasks
9. Review All Key Topics
10. Define Key Terms

20. Chapter 5. Working with Data in Python

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. File Input and Output
4. Parsing Data
5. Error Handling in Python
6. Test-Driven Development
7. Unit Testing
8. Exam Preparation Tasks
9. Review All Key Topics
10. Define Key Terms
11. Additional Resources

21. Chapter 6. Application Programming Interfaces (APIs)

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Application Programming Interfaces (APIs)
4. Exam Preparation Tasks
5. Review All Key Topics
6. Define Key Terms

22. Chapter 7. RESTful API Requests and Responses

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. RESTful API Fundamentals
4. REST Constraints
5. REST Tools
6. Exam Preparation Tasks
7. Review All Key Topics
8. Define Key Terms

23. Chapter 8. Cisco Enterprise Networking Management Platforms and APIs

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. What Is an SDK?
4. Cisco Meraki
5. Cisco DNA Center
6. Cisco SD-WAN
7. Exam Preparation Tasks
8. Review All Key Topics
9. Define Key Terms

24. Chapter 9. Cisco Data Center and Compute Management Platforms and APIs

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Cisco ACI
4. UCS Manager
5. Cisco UCS Director
6. Cisco Intersight
7. Exam Preparation Tasks
8. Review All Key Topics
9. Define Key Terms

25. Chapter 10. Cisco Collaboration Platforms and APIs

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Introduction to the Cisco Collaboration Portfolio
4. Webex Teams API
5. Cisco Finesse
6. Webex Meetings APIs
7. Webex Devices
8. Cisco Unified Communications Manager
9. Exam Preparation Tasks
10. Review All Key Topics
11. Define Key Terms

26. Chapter 11. Cisco Security Platforms and APIs

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Cisco’s Security Portfolio
4. Cisco Umbrella
5. Cisco Firepower
6. Cisco Advanced Malware Protection (AMP)
7. Cisco Identity Services Engine (ISE)
8. Cisco Threat Grid
9. Exam Preparation Tasks
10. Review All Key Topics
11. Define Key Terms

27. Chapter 12. Model-Driven Programmability

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. NETCONF
4. YANG
5. RESTCONF
6. Model-Driven Telemetry
7. Exam Preparation Tasks
8. Review All Key Topics
9. Define Key Terms

28. Chapter 13. Deploying Applications

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Application Deployment Models
4. NIST Definition
5. Application Deployment Options
6. Application Deployment Methods
7. Bare-Metal Application Deployment
8. Virtualized Applications
9. Cloud-Native Applications
10. Containerized Applications
11. Serverless
12. DevOps
13. What Is DevOps?
14. Putting DevOps into Practice: The Three Ways
15. DevOps Implementation
16. Docker
17. Understanding Docker
18. Docker Architecture
19. Using Docker
20. Docker Hub
21. Exam Preparation Tasks
22. Review All Key Topics
23. Define Key Terms
24. Additional Resources

29. Chapter 14. Application Security

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Identifying Potential Risks
4. Protecting Applications
5. Exam Preparation Tasks
6. Review All Key Topics
7. Define Key Terms

30. Chapter 15. Infrastructure Automation

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Controller Versus Device-Level Management
4. Infrastructure as Code
5. Continuous Integration/Continuous Delivery Pipelines
6. Automation Tools
7. Cisco Network Services Orchestrator (NSO)
8. Exam Preparation Tasks
9. Review All Key Topics
10. Define Key Terms

31. Chapter 16. Network Fundamentals

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Network Reference Models
4. Switching Concepts
5. Routing Concepts
6. Exam Preparation Tasks
7. Review All Key Topics
8. Define Key Terms

32. Chapter 17. Networking Components

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. What Are Networks?
4. Elements of Networks
5. Software-Defined Networking
6. Exam Preparation Tasks
7. Review All Key Topics
8. Define Key Terms

33. Chapter 18. IP Services

1. “Do I Know This Already?” Quiz


2. Foundation Topics
3. Common Networking Protocols
4. Network Address Translation (NAT)
5. Layer 2 Versus Layer 3 Network Diagrams
6. Troubleshooting Application Connectivity Issues
7. Exam Preparation Tasks
8. Review All Key Topics
9. Define Key Terms

34. Chapter 19. Final Preparation

1. Getting Ready
2. Tools for Final Preparation
3. Suggested Plan for Final Review/Study
4. Summary

35. Appendix A. Answers to the “Do I Know This Already?” Quiz Questions
36. Appendix B. DevNet Associate DEVASC 200-901 Official Cert
Guide Exam Updates

1. Always Get the Latest at the Book’s Product Page


2. Technical Content

37. Glossary
38. Index
39. Appendix C. Study Planner
40. Where are the companion content files? - Register
41. Inside Front Cover
42. Inside Back Cover
43. Code Snippets

About This eBook
ePUB is an open, industry-standard format for eBooks.
However, support of ePUB and its many features varies
across reading devices and applications. Use your device
or app settings to customize the presentation to your
liking. Settings that you can customize often include font,
font size, single or double column, landscape or portrait
mode, and figures that you can click or tap to enlarge.
For additional information about the settings and
features on your reading device or app, visit the device
manufacturer’s Web site.

Many titles include programming code or configuration
examples. To optimize the presentation of these
elements, view the eBook in single-column, landscape
mode and adjust the font size to the smallest setting. In
addition to presenting code and configurations in the
reflowable text format, we have included images of the
code that mimic the presentation found in the print
book; therefore, where the reflowable format may
compromise the presentation of the code listing, you will
see a “Click here to view code image” link. Click the link
to view the print-fidelity code image. To return to the
previous page viewed, click the Back button on your
device or app.
Cisco Certified DevNet
Associate DEVASC 200-901
Official Cert Guide

Chris Jackson, CCIE x2 (RS, SEC) [CCIE No. 6256]
Jason Gooley, CCIE x2 (RS, SP) [CCIE No. 38759]
Adrian Iliesiu, CCIE RS [CCIE No. 43909]
Ashutosh Malegaonkar

Cisco Press
Cisco Certified DevNet Associate DEVASC
200-901 Official Cert Guide
Chris Jackson, Jason Gooley, Adrian Iliesiu, Ashutosh
Malegaonkar

Copyright© 2021 Cisco Systems, Inc.

Published by:
Cisco Press

All rights reserved. No part of this book may be
reproduced or transmitted in any form or by any means,
electronic or mechanical, including photocopying,
recording, or by any information storage and retrieval
system, without written permission from the publisher,
except for the inclusion of brief quotations in a review.

Library of Congress Control Number: 2020937218

ISBN-13: 978-0-13-664296-1

ISBN-10: 0-13-664296-9

Warning and Disclaimer


This book is designed to provide information about the
Cisco DevNet Associate DEVASC 200-901 exam. Every
effort has been made to make this book as complete and
as accurate as possible, but no warranty or fitness is
implied.

The information is provided on an “as is” basis. The
authors, Cisco Press, and Cisco Systems, Inc. shall have
neither liability nor responsibility to any person or entity
with respect to any loss or damages arising from the
information contained in this book or from the use of the
discs or programs that may accompany it.

The opinions expressed in this book belong to the
authors and are not necessarily those of Cisco Systems,
Inc.

Trademark Acknowledgments
All terms mentioned in this book that are known to be
trademarks or service marks have been appropriately
capitalized. Cisco Press or Cisco Systems, Inc., cannot
attest to the accuracy of this information. Use of a term
in this book should not be regarded as affecting the
validity of any trademark or service mark.

Special Sales
For information about buying this title in bulk
quantities, or for special sales opportunities (which may
include electronic versions; custom cover designs; and
content particular to your business, training goals,
marketing focus, or branding interests), please contact
our corporate sales department at
corpsales@pearsoned.com or (800) 382-3419.

For government sales inquiries, please contact
governmentsales@pearsoned.com.

For questions about sales outside the U.S., please contact
intlcs@pearson.com.

Feedback Information
At Cisco Press, our goal is to create in-depth technical
books of the highest quality and value. Each book is
crafted with care and precision, undergoing rigorous
development that involves the unique expertise of
members from the professional technical community.

Readers’ feedback is a natural continuation of this
process. If you have any comments regarding how we
could improve the quality of this book, or otherwise alter
it to better suit your needs, you can contact us through
email at feedback@ciscopress.com. Please make sure to
include the book title and ISBN in your message.
We greatly appreciate your assistance.

Editor-in-Chief: Mark Taub

Alliances Manager, Cisco Press: Arezou Gol

Director, ITP Project Management: Brett Bartow

Executive Editor: James Manly

Managing Editor: Sandra Schroeder

Development Editor: Ellie Bru

Technical Editors: Bryan Byrne, John McDonough

Project Editor: Lori Lyons

Copy Editor: Catherine D. Wilson

Editorial Assistant: Cindy Teeters

Cover Designer: Chuti Prasertsith

Production Manager: Vaishnavi Venkatesan, codeMantra

Composition: codeMantra

Indexer: Ken Johnson

Proofreader: Donna Mulder

Americas Headquarters
Cisco Systems, Inc.
San Jose, CA

Asia Pacific Headquarters
Cisco Systems (USA) Pte. Ltd.
Singapore

Europe Headquarters
Cisco Systems International BV Amsterdam,
The Netherlands

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and
fax numbers are listed on the Cisco Website at
www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of
Cisco and/or its affiliates in the U.S. and other countries. To view a list of
Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third
party trademarks mentioned are the property of their respective owners.
The use of the word partner does not imply a partnership relationship
between Cisco and any other company. (1110R)
About the Authors
Chris Jackson, CCIE No. 6256 (R&S and SEC), is a
Distinguished Architect and CTO for Global Sales
Training at Cisco. Chris is focused on digital
transformation and showing customers how to leverage
the tremendous business value Cisco technologies can
provide. He is the author of Network Security Auditing
(Cisco Press, 2010), CCNA Cloud CLDADM 210-455
Official Cert Guide (Cisco Press, 2016), and various
online video courses for Cisco Press. He holds dual
CCIEs in security and routing and switching, CISA,
CISSP, ITIL v3, seven SANS certifications, and a
bachelor’s degree in business administration. Residing in
Franklin, Tennessee, Chris enjoys tinkering with
electronics, robotics, and anything else that can be
programmed to do his bidding. In addition, he is a 3rd
Degree Black Belt in Taekwondo and a rabid Star Wars
fan, and he has a ridiculous collection of Lego. His wife Piper
and three children Caleb, Sydney, and Savannah are the
true joy of his life and proof that not everything has to
plug into a wall outlet to be fun.

Jason Gooley, CCIE No. 38759 (R&S and SP), is a
very enthusiastic and spontaneous person who has more
than 20 years of experience in the industry. Currently,
Jason works as a Technical Evangelist for the Worldwide
Enterprise Networking Sales team at Cisco Systems.
Jason is very passionate about helping others in the
industry succeed. In addition to being a Cisco Press
author, Jason is a distinguished speaker at Cisco Live,
contributes to the development of the Cisco CCIE and
DevNet exams, provides training for Learning@Cisco, is
an active CCIE mentor, is a committee member for the
Cisco Continuing Education Program (CE), and is a
program committee member of the Chicago Network
Operators Group (CHI-NOG), www.chinog.org. Jason
also hosts a show called “MetalDevOps.” Jason can be
found at www.MetalDevOps.com, @MetalDevOps, and
@Jason_Gooley on all social media platforms.

Adrian Iliesiu, CCIE No. 43909 (R&S), is a network
engineer at heart with more than 15 years of professional
IT experience. Currently, Adrian works as a Technical
Leader with the Cisco DevNet Co-Creations team. During
his career, Adrian has worked in several roles, including
team leader and network, systems, and QA engineer
across multiple industries and international
organizations. When not working on innovative projects
with customers and partners, Adrian advocates the
advantages of network programmability and automation
with a focus on enterprise and data center infrastructure.
He is an established blog author, distinguished speaker
at Cisco Live, and a recipient of the coveted Cisco
Pioneer award. Adrian also appeared on Cisco
TechWiseTV, Cisco Champion podcasts, and DevNet
webinars. He holds a bachelor’s degree in Electronics
and Telecommunications from Technical University of
Cluj-Napoca and a master’s degree in
Telecommunication Networks from Politehnica
University of Bucharest.

Ashutosh Malegaonkar is a Cisco Distinguished
Engineer, a senior technical contributor, and an industry
thought leader. His experience spans across different
technology domains: ISR Platforms, Voice, Video,
Search, Video Analytics, and Cloud. Over two decades at
Cisco, he has done two startups and has won several
accolades, including the Pioneer awards. He has
delivered several keynotes and talks at Cisco Connect
and Cisco Live. He has also been a Tech Field Day
Speaker. With more than 25 years of professional
experience, he currently leads the DevNet Co-Creations
team whose mission is to co-create, innovate, and inspire
alongside our strategic customers, partners, and
developers. Ashutosh inspires those around him to
innovate, and he is continually developing creative new
ways to use software and Cisco APIs to solve real
problems for our customers. He has a deep
understanding of the breadth of Cisco products and
technologies and where they can best be applied to serve
our customers. Ashutosh has 16 approved patents and
two publications.
About the Technical Reviewers
Bryan Byrne, CCIE No. 25607 (R&S), is a Technical
Solutions Architect in Cisco’s Global Enterprise segment.
With more than 20 years of data networking experience,
his current focus is helping his customers transition from
traditional LAN/WAN deployments toward Cisco’s next-
generation Software-Defined network solutions. Prior to
joining Cisco, Bryan spent the first 13 years of his career
in an operations role with a global service provider
supporting large-scale IP DMVPN and MPLS networks.
Bryan is a multi-time Cisco Live Distinguished Speaker
covering topics on NETCONF, RESTCONF, and YANG.
He is a proud graduate of The Ohio State University and
currently lives outside Columbus, Ohio, with his wife
Lindsey and their two children Evan and Kaitlin.

John McDonough has more than 30 years of
development experience and is currently a Developer
Advocate for Cisco’s DevNet. As a Developer Advocate,
John writes code and creates DevNet Learning Labs
about how to write code. He writes blogs about writing
code and presents at Cisco Live, SXSW, AnsibleFest, and
other industry events. John focuses on Cisco’s
Computing Systems Products, Cisco UCS, and Cisco
Intersight. John’s career at Cisco has varied from
Product Engineer to Custom Application Developer,
Technical Marketing Engineer, and now a Developer
Advocate.
Dedications
Chris Jackson:

Writing a book is a solitary effort, but the real work is
shared by everyone who loves and supports you. This
book is just the latest project for which my beautiful wife
Piper has provided infinite patience, love, and understanding
as I wrote late into the evening and on weekends. She is
my rock, my light, and my greatest cheerleader. My life is
amazing because of her and her love. My children Caleb,
Sydney, and Savannah have been so forgiving of my time
commitments and allowed me to focus on delivering
something I could be proud of. Each of you are so
wonderful and the time away from you has been a
sacrifice that I do not make lightly. Now it’s time to
celebrate! Last, but certainly not least, are all my friends
and co-workers who take up the slack when my inability
to say no to new projects rears its head. They drive me to
be better, and I am fortunate to work with some of the
most professional and high-quality individuals in the
industry.

Jason Gooley:

This book is dedicated to my wife, Jamie, and my
children, Kaleigh and Jaxon. Without their support,
these books would not be possible. I can’t believe they let
me write four books in one year! To my father and
brother, thank you for always supporting me. In
addition, I want to thank my extended family and friends
for all the unwavering support over the years. It is
because of all of you that I get to do these things! Huge
thank-you to Thom Hazaert, Melody Myers, and David
Ellefson for supporting and believing in MetalDevOps!
Thank you for giving us a home at EMP Label Group,
Combat Records, and Ellefson Coffee Co.! Can’t wait to
see what the future holds for us!
Adrian Iliesiu:

I dedicate this book to my family and especially my wife,
Martina. This book wouldn’t have been possible without
her continuous support through so many challenges and
sacrifices over the years. I am grateful she agreed to let
me write this book, especially after the CCIE exam
preparation “experience.” I promise I’ll be back at doing
dishes for the foreseeable future. Special thank-you to
my parents, Grigore and Ana, for all their support and
encouragement through the years. Vă mulțumesc pentru
tot, mamă și tată! (Thank you for everything, Mom and
Dad!) Big thank-you to my sister,
and especially my grandmother, for shaping and
instilling in me a set of values that has made me the
person I am today. Thanks also to Susie Wee for her
continuous support and leadership.

Ashutosh Malegaonkar:

I want to dedicate this book to my Guru, Shri
Gondavlekar Maharaj, for giving me this opportunity and
letting me follow through with it.

I would also like to dedicate this book to my wife Medha.
She has been my strength and biggest supporter. It is
because of some of her sacrifices that I am where I am
today. Our sons, Jai and Yash, and their constant
positivity keep making me feel special in whatever I do.

I would like to thank my Mom (she would have been
proud), Dad, my brother, and my sister, for shaping me
during the years.

Last but not least, I sincerely thank Susie Wee for
believing in me and letting me be part of DevNet since
the very early days of DevNet.
Acknowledgments
Chris Jackson:

This book would not have been written if it hadn’t been
for the team of amazing people at Cisco Press; you guys
make us sound coherent, fix our silly mistakes, and
encourage us to get the project done! James, Ellie, and
Brett are the best in the industry. Thanks as well to our
tech editors, John McDonough and Bryan Byrne, for
making sure our code is tight and works.

I am so very thankful to my manager Jeff Cristee for
being an incredible mentor and supporting me in so
many ways. You are the best, and I feel blessed to work
with you. Linda Masloske, you are an amazing friend and
have been one of my biggest supporters during my 20-
year career at Cisco. I could write an entire chapter on
how much you have done for me over the years. Thank
you for everything, but most importantly for giving a kid
from Kentucky the chance to shine.

A big thanks to my SNAP crew Jodi, Deonna, Angie, and
Doug, for giving me time to work on all of my many
projects. You guys are the best and I LOVE working with
you. Virtual or live, you bring the magic.

Jason Gooley:

Big thank-you to Brett and Marianne Bartow, Ellie Bru,
and everyone else involved at Cisco Press! You are all
amazing to work with, and six books later, I’m not sure
how you put up with me! Shout out to my brother in
metal, Stuart Clark (@bigevilbeard), for letting me use
his code examples! Thanks, brother!

Adrian Iliesiu:
Huge thank-you to Casey Tong, designer in chief, for all
her help with images and graphics for this book. Big
thank-you to Ashutosh for all his support. Thanks to
Chris and Jason for allowing me to embark on this
journey with them; Ellie Bru and James Manly from
Cisco Press for editing and trying to keep the project on
track; and to John and Bryan for their feedback and
insight. I would also like to thank Mike Mackay for
believing in me when it mattered and for giving me a
chance to prove myself.

Ashutosh Malegaonkar:

Thanks to the entire Cisco DevNet team for being the
soul of the program. Adrian—we did it! A special thanks
to Susie Wee for the support and encouragement from
day one. This being the first for me, thanks to Jason and
Chris for the mentorship; Ellie Bru for keeping up with
my novice questions; and finally John McDonough and
Bryan Byrne for the excellent technical reviews.
Contents at a Glance
Introduction
Chapter 1 Introduction to Cisco DevNet Associate
Certification

Chapter 2 Software Development and Design


Chapter 3 Introduction to Python

Chapter 4 Python Functions, Classes, and Modules


Chapter 5 Working with Data in Python

Chapter 6 Application Programming Interfaces (APIs)


Chapter 7 RESTful API Requests and Responses

Chapter 8 Cisco Enterprise Networking Management
Platforms and APIs
Chapter 9 Cisco Data Center and Compute Management
Platforms and APIs
Chapter 10 Cisco Collaboration Platforms and APIs

Chapter 11 Cisco Security Platforms and APIs


Chapter 12 Model-Driven Programmability

Chapter 13 Deploying Applications


Chapter 14 Application Security

Chapter 15 Infrastructure Automation


Chapter 16 Network Fundamentals

Chapter 17 Networking Components


Chapter 18 IP Services

Chapter 19 Final Preparation


Appendix A Answers to the “Do I Know This Already?”
Quiz Questions

Appendix B DevNet Associate DEVASC 200-901 Official
Cert Guide Exam Updates
Glossary
Index
Online Elements

Appendix C Study Planner


Glossary
Reader Services
Other Features

In addition to the features in each of the core chapters,
this book has additional study resources on the
companion website, including the following:

Practice exams: The companion website contains an
exam engine that enables you to review practice exam
questions. Use these to prepare with a sample exam and
to pinpoint topics where you need more study.

Flash Cards: An online interactive application to help
you drill on Key Terms by chapter.

Glossary quizzes: The companion website contains
interactive quizzes that enable you to test yourself on
every glossary term in the book.

Video training: The companion website contains unique
test-prep videos.

To access this additional content, simply register your
product. To start the registration process, go to
www.ciscopress.com/register and log in or create an
account.* Enter the product ISBN 9780136642961 and
click Submit. After the process is complete, you will find
any available bonus content under Registered Products.

*Be sure to check the box that you would like to hear from us to receive exclusive discounts on
future editions of this product.
Contents

Introduction

Chapter 1 Introduction to Cisco DevNet Associate Certification
Do I Know This Already?
Foundation Topics
Why Get Certified
Cisco Career Certification Overview
Cisco DevNet Certifications
Cisco Certified DevNet Associate Certification (DEVASC)
Cisco Certified DevNet Professional Certification
Cisco DevNet Overview
Discover
Technologies
Community
Support
Events
DevNet Automation Exchange
Summary

Chapter 2 Software Development and Design
“Do I Know This Already?” Quiz
Foundation Topics
Software Development Lifecycle
Waterfall
Lean
Agile
Common Design Patterns
Model-View-Controller (MVC) Pattern
Observer Pattern
Linux BASH
Getting to Know BASH
Directory Navigation
cd
pwd
ls
mkdir
File Management
cp
mv
rm
touch
cat
Environment Variables
Software Version Control
Git
Understanding Git
Using Git
Cloning/Initiating Repositories
Adding and Removing Files
Committing Files
Pushing and Pulling Files
Working with Branches
Merging Branches
Handling Conflicts
Comparing Commits with diff
Conducting Code Review
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 3 Introduction to Python
“Do I Know This Already?” Quiz
Foundation Topics
Getting Started with Python
Understanding Python Syntax
Data Types and Variables
Variables
Data Types
Integers, Floating Point, and Complex Numbers
Booleans
Strings
Lists
Tuples
Dictionaries
Sets
Input and Output
Getting Input from the User
The Mighty print() Function
Flow Control with Conditionals and Loops
If Statements
For Loops
While Loops
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Additional Resources

Chapter 4 Python Functions, Classes, and Modules
“Do I Know This Already?” Quiz
Foundation Topics
Python Functions
Using Arguments and Parameters
Object-Oriented Programming and Python
Python Classes
Creating a Class
Methods
Inheritance
Working with Python Modules
Importing a Module
The Python Standard Library
Importing Your Own Modules
Useful Python Modules for Cisco Infrastructure
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 5 Working with Data in Python
“Do I Know This Already?” Quiz
Foundation Topics
File Input and Output
Parsing Data
Comma-Separated Values (CSV)
JavaScript Object Notation (JSON)
Extensible Markup Language (XML)
YAML Ain’t Markup Language (YAML)
Error Handling in Python
Test-Driven Development
Unit Testing
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Additional Resources

Chapter 6 Application Programming Interfaces (APIs)
“Do I Know This Already?” Quiz
Foundation Topics
Application Programming Interfaces (APIs)
Northbound APIs
Southbound APIs
Synchronous Versus Asynchronous APIs
Representational State Transfer (REST) APIs
RESTful API Authentication
Basic Authentication
API Keys
Custom Tokens
Simple Object Access Protocol (SOAP)
Remote-Procedure Calls (RPCs)
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 7 RESTful API Requests and Responses
“Do I Know This Already?” Quiz
Foundation Topics
RESTful API Fundamentals
API Types
API Access Types
HTTP Basics
Uniform Resource Locator (URL)
Method
REST Methods and CRUD
Deep Dive into GET and POST
HTTP Headers
Request Headers
Response Headers
Response Codes
XML
JSON
YAML
Webhooks
Tools Used When Developing with Webhooks
Sequence Diagrams
REST Constraints
Client/Server
Stateless
Cache
Uniform Interface
Layered System
Code on Demand
REST API Versioning
Pagination
Rate Limiting and Monetization
Rate Limiting on the Client Side
REST Tools
Postman
curl
HTTPie
Python Requests
REST API Debugging Tools for Developing APIs
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 8 Cisco Enterprise Networking Management Platforms and APIs
“Do I Know This Already?” Quiz
Foundation Topics
What Is an SDK?
Cisco Meraki
Cisco DNA Center
Cisco SD-WAN
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 9 Cisco Data Center and Compute Management Platforms and APIs
“Do I Know This Already?” Quiz
Foundation Topics
Cisco ACI
Building Blocks of Cisco ACI Fabric Policies
APIC REST API
UCS Manager
Cisco UCS Director
Cisco Intersight
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 10 Cisco Collaboration Platforms and APIs
“Do I Know This Already?” Quiz
Foundation Topics
Introduction to the Cisco Collaboration Portfolio
Unified Communications
Cisco Webex Teams
Cisco Unified Communications Manager (Unified CM)
Unified Contact Center
Cisco Webex
Cisco Collaboration Endpoints
API Options in the Cisco Collaboration Portfolio
Webex Teams API
API Authentication
Personal Access Tokens
Integrations
Bots
Guest Issuer
Webex Teams SDKs
Cisco Finesse
Cisco Finesse API
API Authentication
Finesse User APIs
Finesse Team APIs
Dialog APIs
Finesse Gadgets
Webex Meetings APIs
Authentication
Integration API Keys
Webex XML APIs
Creating a New Meeting
Listing All My Meetings
Setting or Modifying Meeting Attributes
Deleting a Meeting
Webex Devices
xAPI
xAPI Authentication
xAPI Session Authentication
Creating a Session
Getting the Current Device Status
Setting Device Attributes
Registering an Event Notification Webhook
Room Analytics People Presence Detector
Cisco Unified Communications Manager
Administrative XML
Cisco AXL Toolkit
Accessing the AXL SOAP API
Using the Zeep Client Library
Using the CiscoAXL SDK
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 11 Cisco Security Platforms and APIs
“Do I Know This Already?” Quiz
Foundation Topics
Cisco’s Security Portfolio
Potential Threats and Vulnerabilities
Most Common Threats
Cisco Umbrella
Understanding Umbrella
Cisco Umbrella APIs
Authentication
Cisco Firepower
Firepower Management Center APIs
Cisco Advanced Malware Protection (AMP)
Listing All Computers
Listing All Vulnerabilities
Cisco Identity Services Engine (ISE)
ISE REST APIs
ERS API Authentication
Creating an Endpoint Group
Creating an Endpoint and Adding It to a Group
Other ISE APIs
Cisco Threat Grid
Threat Grid APIs
Threat Grid API Format
API Keys
Who Am I
The Data, Sample, and IOC APIs
Feeds
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 12 Model-Driven Programmability
“Do I Know This Already?” Quiz
Foundation Topics
NETCONF
YANG
RESTCONF
Model-Driven Telemetry
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 13 Deploying Applications
“Do I Know This Already?” Quiz
Foundation Topics
Application Deployment Models
NIST Definition
Essential Characteristics
Service Models
Application Deployment Options
Private Cloud
Public Cloud
Hybrid Cloud
Community Cloud
Edge and Fog Computing
Application Deployment Methods
Bare-Metal Application Deployment
Virtualized Applications
Cloud-Native Applications
Containerized Applications
Serverless
DevOps
What Is DevOps?
Putting DevOps into Practice: The Three Ways
First Way: Systems and Flow
Second Way: Feedback Loop
Third Way: Continuous Experimentation and Learning
DevOps Implementation
Docker
Understanding Docker
Namespaces
Cgroups
Union File System
Docker Architecture
Using Docker
Working with Containers
Dockerfiles
Docker Images
Docker Hub
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Additional Resources

Chapter 14 Application Security
“Do I Know This Already?” Quiz
Foundation Topics
Identifying Potential Risks
Common Threats and Mitigations
Open Web Application Security Project
Using Nmap for Vulnerability Scanning
Basic Nmap Scan Against an IP Address or a Host
CVE Detection Using Nmap
Protecting Applications
Tiers of Securing and Protecting
Encryption Fundamentals
Public Key Encryption
Data Integrity (One-Way Hash)
Digital Signatures
Data Security
Secure Development Methods
Securing Network Devices
Firewalls
Intrusion Detection Systems (IDSs)
Intrusion Prevention Systems (IPSs)
Domain Name System (DNS)
Load Balancing
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 15 Infrastructure Automation
“Do I Know This Already?” Quiz
Foundation Topics
Controller Versus Device-Level Management
Infrastructure as Code
Continuous Integration/Continuous Delivery Pipelines
Automation Tools
Ansible
Puppet
Chef
Cisco Network Services Orchestrator (NSO)
Cisco Modeling Labs/Cisco Virtual Internet Routing Laboratory (CML/VIRL)
Python Automated Test System (pyATS)
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 16 Network Fundamentals
“Do I Know This Already?” Quiz
Foundation Topics
Network Reference Models
The OSI Model
The TCP/IP Model
Switching Concepts
Ethernet
MAC Addresses
Virtual Local-Area Networks (VLANs)
Switching
Routing Concepts
IPv4 Addresses
IPv6 Addresses
Routing
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 17 Networking Components
“Do I Know This Already?” Quiz
Foundation Topics
What Are Networks?
Elements of Networks
Hubs
Bridges
Switches
Virtual Local Area Networks (VLANs)
Routers
Routing in Software
Functions of a Router
Network Diagrams: Bringing It All Together
Software-Defined Networking
SDN Controllers
Cisco Software-Defined Networking (SDN)
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 18 IP Services
“Do I Know This Already?” Quiz
Foundation Topics
Common Networking Protocols
Dynamic Host Configuration Protocol (DHCP)
Server Discovery
Lease Offer
Lease Request
Lease Acknowledgment
Releasing
Domain Name System (DNS)
Network Address Translation (NAT)
Simple Network Management Protocol (SNMP)
Network Time Protocol (NTP)
Layer 2 Versus Layer 3 Network Diagrams
Troubleshooting Application Connectivity Issues
Exam Preparation Tasks
Review All Key Topics
Define Key Terms

Chapter 19 Final Preparation
Getting Ready
Tools for Final Preparation
Pearson Cert Practice Test Engine and Questions on the Website
Accessing the Pearson Test Prep Software Online
Accessing the Pearson Test Prep Software Offline
Customizing Your Exams
Updating Your Exams
Premium Edition
Chapter-Ending Review Tools
Suggested Plan for Final Review/Study
Summary

Appendix A Answers to the “Do I Know This Already?” Quiz Questions

Appendix B DevNet Associate DEVASC 200-901 Official Cert Guide Exam Updates

Glossary

Index

Online Elements

Appendix C Study Planner

Glossary
Icons Used in This Book
Command Syntax Conventions
The conventions used to present command syntax in this book are the same conventions used in the IOS Command Reference. The Command Reference describes these conventions as follows:

Boldface indicates commands and keywords that are entered literally as shown. In actual configuration examples and output (not general command syntax), boldface indicates commands that are manually input by the user (such as a show command).

Italic indicates arguments for which you supply actual values.

Vertical bars (|) separate alternative, mutually exclusive elements.

Square brackets ([ ]) indicate an optional element.

Braces ({ }) indicate a required choice.

Braces within brackets ([{ }]) indicate a required choice within an optional element.
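Putting these conventions together, a hypothetical command entry might be documented as follows (the keywords and argument name shown here are illustrative only, not taken from a specific platform's command reference):

```
show ip route [vrf vrf-name] {static | connected}
```

In this example, show ip route, static, and connected are keywords typed literally; vrf-name is an argument for which you supply an actual VRF name; the square brackets mark the vrf portion as optional; and the braces with a vertical bar indicate that you must choose either static or connected.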
Introduction

This book was written to help candidates improve their network programmability and automation skills, not only in preparation for the DevNet Associate DEVASC 200-901 exam but also for real-world use in any production environment.

Readers of this book can expect that the blueprint for the DevNet Associate DEVASC 200-901 exam tightly aligns with the topics contained in this book. This was by design. Candidates can follow along with the examples in this book by utilizing the tools and resources found on the DevNet website and other free utilities such as Postman and Python.

This book is targeted for all learners who are learning these topics for the first time, as well as for those who wish to enhance their network programmability and automation skillset.

Be sure to visit www.cisco.com to find the latest information on DevNet Associate DEVASC 200-901 exam requirements and to keep up to date on any new exams that are announced.

GOALS AND METHODS

The most important and somewhat obvious goal of this book is to help you pass the DevNet Associate DEVASC (200-901) exam. Additionally, the methods used in this book to help you pass the exam are designed to make you much more knowledgeable about how to do your job. While this book and the companion website together have more than enough questions to help you prepare for the actual exam, their purpose is not simply to have you memorize as many questions and answers as you possibly can.
One key methodology used in this book is to help you
discover the exam topics that you need to review in more
depth, to help you fully understand and remember those
details, and to help you prove to yourself that you have
retained your knowledge of those topics. So, this book
does not try to help you pass by memorization but helps
you truly learn and understand the topics. The DevNet
Associate exam is just one of the foundation exams in the
DevNet certification suite, and the knowledge contained
within is vitally important to consider yourself a truly
skilled network developer. This book would do you a
disservice if it didn’t attempt to help you learn the
material. To that end, the book will help you pass the
DevNet Associate exam by using the following methods:

Helping you discover which test topics you have not mastered

Providing explanations and information to fill in your knowledge gaps

Supplying exercises and scenarios that enhance your ability to recall and deduce the answers to test questions

WHO SHOULD READ THIS BOOK?

This book is intended to help candidates prepare for the DevNet Associate DEVASC 200-901 exam. Not only can this book help you pass the exam, but it can also help you learn the topics necessary to provide value to your organization as a network developer.

Passing the DevNet Associate DEVASC 200-901 exam is a milestone toward becoming a better network developer. This in turn can help you become more confident with these technologies.

STRATEGIES FOR EXAM PREPARATION

The strategy you use for the DevNet Associate exam might be slightly different than strategies used by other readers, mainly based on the skills, knowledge, and experience you have already obtained.

Regardless of the strategy you use or the background you have, this book is designed to help you get to the point where you can pass the exam in the least amount of time. However, many people like to make sure that they truly know a topic and thus read over material that they already know. Several book features will help you gain confidence that you already know some of the material and will also help you identify the topics you need to study more.

THE COMPANION WEBSITE FOR ONLINE CONTENT REVIEW

All the electronic review elements, as well as other electronic components of the book, are provided on this book’s companion website.

How to Access the Companion Website

To access the companion website, start by establishing a login at www.ciscopress.com and registering your book. To do so, simply go to www.ciscopress.com/register and enter the ISBN of the print book: 9780136642961. After you have registered your book, go to your account page and click the Registered Products tab. From there, click the Access Bonus Content link to get access to the book’s companion website.

Note that if you buy the Premium Edition eBook and Practice Test version of this book from Cisco Press, your book will automatically be registered on your account page. Simply go to your account page, click the Registered Products tab, and select Access Bonus Content to access the book’s companion website.

How to Access the Pearson Test Prep (PTP) App

You have two options for installing and using the Pearson Test Prep application: a web app and a desktop app. To use the Pearson Test Prep application, start by finding the registration code that comes with the book. You can find the code in these ways:

Print book: Look in the cardboard sleeve in the back of the book for a piece of paper with your book’s unique PTP code.

Premium Edition: If you purchase the Premium Edition eBook and Practice Test directly from the Cisco Press website, the code will be populated on your account page after purchase. Just log in at www.ciscopress.com, click Account to see details of your account, and click the Digital Purchases tab.

Amazon Kindle: For those who purchase a Kindle edition from Amazon, the access code will be supplied directly from Amazon.

Other Bookseller eBooks: Note that if you purchase an eBook version from any other source, the practice test is not included because other vendors to date have not chosen to vend the required unique access code.

Note
Do not lose the activation code; it is the only means by which you can access the QA content that comes with the book.

Once you have the access code, to find instructions about both the PTP web app and the desktop app, follow these steps:

Step 1. Open this book’s companion website, as shown earlier in this Introduction under the heading “How to Access the Companion Website.”

Step 2. Click the Practice Exams button.

Step 3. Follow the instructions listed there both for installing the desktop app and for using the web app.

If you want to use the web app only at this point, just navigate to www.pearsontestprep.com, establish a free login if you do not already have one, and register this book’s practice tests using the registration code you just found. The process should take only a couple of minutes.

Note
Amazon eBook (Kindle) customers: It is easy to miss Amazon’s email that lists your PTP access code. Soon after you purchase the Kindle eBook, Amazon should send an email. However, the email uses very generic text and makes no specific mention of PTP or practice exams. To find your code, read every email from Amazon after you purchase the book. Also do the usual checks to ensure your email arrives, such as checking your spam folder.

Note
Other eBook customers: As of the time of publication,
only the publisher and Amazon supply PTP access
codes when you purchase their eBook editions of this
book.

HOW THIS BOOK IS ORGANIZED

Although this book can be read cover to cover, it is designed to be flexible and to allow you to easily move between chapters and sections of chapters to cover just the material that you need more work with. Chapter 1 provides an overview of the Cisco career certifications and offers some strategies for how to prepare for the exams. Chapters 2 through 18, in contrast, are the core chapters and can be covered in any order. If you do intend to read them all, the order in the book is an excellent sequence to use.

The core chapters, Chapters 2 through 18, cover the following topics:

Chapter 2, “Software Development and Design”: This chapter introduces key software development methods, like Waterfall and Agile, and includes the common design patterns MVC and Observer. Software version control systems, how to use Git, and how to conduct code reviews are covered as well.

Chapter 3, “Introduction to Python”: This chapter provides an overview of Python syntax, working with various data types, getting input and producing output, and how to use conditionals and loops to control program flow.

Chapter 4, “Python Functions, Classes, and Modules”: This chapter introduces Python functions and object-oriented programming techniques. It also covers Python classes and how to work with modules to extend Python capabilities.

Chapter 5, “Working with Data in Python”: This chapter covers the various ways you can input data into your Python program, parse data, and handle errors. Finally, test-driven development is introduced, as is how to perform unit tests.

Chapter 6, “Application Programming Interfaces (APIs)”: This chapter provides a high-level overview of some common API types, REST API authentication, Simple Object Access Protocol (SOAP), and Remote-Procedure Call (RPC) protocols, as well as common examples of when and where each protocol is used.

Chapter 7, “RESTful API Requests and Responses”: This chapter presents a detailed overview of REST APIs. It discusses several aspects of REST APIs, including URLs, methods, headers, return codes, data formats, architectural constraints, and various tools used for working with REST APIs.

Chapter 8, “Cisco Enterprise Networking Management Platforms and APIs”: This chapter starts with what SDKs are and then covers Cisco enterprise networking platforms and their APIs, including examples of how to interact with the APIs. The platforms covered in this chapter are Cisco Meraki, Cisco DNA Center, and Cisco SD-WAN.

Chapter 9, “Cisco Data Center and Compute Management Platforms and APIs”: This chapter introduces key Cisco data center and compute management platforms and their associated APIs. The following platforms are covered in this chapter: Cisco ACI, Cisco UCS Manager, Cisco UCS Director, and Cisco Intersight. Examples of API consumption for all these platforms are also included.

Chapter 10, “Cisco Collaboration Platforms and APIs”: This chapter discusses in detail Cisco’s collaboration platforms and their associated APIs, along with examples. The platforms covered are Webex Teams, Cisco Finesse, Webex Meetings, Webex Devices, and Cisco Unified Communications Manager.

Chapter 11, “Cisco Security Platforms and APIs”: This chapter discusses in detail Cisco’s security platforms and their associated APIs, along with examples. The platforms covered are Cisco Firepower, Cisco Umbrella, Cisco Advanced Malware Protection (AMP), Cisco Identity Services Engine (ISE), and Cisco Threat Grid.

Chapter 12, “Model-Driven Programmability”: This chapter introduces key model-driven programmability concepts and protocols. It takes an in-depth look at YANG, YANG data models, NETCONF, RESTCONF, and model-driven telemetry.

Chapter 13, “Deploying Applications”: This chapter covers numerous application deployment models and methods. It also introduces the core concepts of DevOps and provides an introduction to Docker and how to use it.

Chapter 14, “Application Security”: This chapter introduces application security issues, methods for securing applications via modern networking components, and various tools used. It also discusses the Open Web Application Security Project (OWASP) top ten.

Chapter 15, “Infrastructure Automation”: This chapter introduces several infrastructure automation concepts, including controller versus device-level management, infrastructure as code, continuous integration/continuous delivery pipelines, and automation tools such as Ansible, Puppet, and Chef. An overview of Cisco-related products such as Cisco NSO, Cisco VIRL, and pyATS is also presented.

Chapter 16, “Network Fundamentals”: This chapter presents several key networking concepts, including network reference models and switching and routing concepts. The OSI and TCP/IP reference models, Ethernet, MAC addresses, VLANs, and IPv4 and IPv6 addressing concepts are discussed in this chapter.

Chapter 17, “Networking Components”: This chapter introduces some basic networking concepts, including network definitions, types, and elements such as hubs, switches, and routers. Further, it presents and differentiates between process, fast, and CEF switching. It also introduces software-defined networking, discussing the management, data, and control planes.

Chapter 18, “IP Services”: This chapter starts by covering several protocols and technologies that are critical to networking: DHCP, DNS, NAT, SNMP, and NTP. The chapter continues with an overview of Layer 2 versus Layer 3 network diagrams and ends with a look at how to troubleshoot application connectivity issues.

CERTIFICATION EXAM TOPICS AND THIS BOOK

The questions for each certification exam are a closely guarded secret. However, we do know which topics you must know to successfully complete this exam. Cisco publishes them as an exam blueprint for the DevNet Associate DEVASC 200-901 exam. Table I-1 lists each exam topic listed in the blueprint along with a reference to the book chapter that covers the topic. These are the same topics you should be proficient in when working with network programmability and automation in the real world.

Table I-1 DEVASC Exam 200-901 Topics and Chapter References

DEVASC 200-901 Exam Topic | Chapter(s) in Which Topic Is Covered
1.0 Software Development and Design | 2
1.1 Compare data formats (XML, JSON, YAML) | 2
1.2 Describe parsing of common data formats (XML, JSON, YAML) to Python data structures | 5
1.3 Describe the concepts of test-driven development | 5
1.4 Compare software development methods (Agile, Lean, Waterfall) | 2
1.5 Explain the benefits of organizing code into methods/functions, classes, and modules | 4
1.6 Identify the advantages of common design patterns (MVC and Observer) | 2
1.7 Explain the advantages of version control | 2
1.8 Utilize common version control operations with Git: | 2
1.8.a Clone | 2
1.8.b Add/remove | 2
1.8.c Commit | 2
1.8.d Push/pull | 2
1.8.e Branch | 2
1.8.f Merge and handling conflicts | 2
1.8.g diff | 2
2.0 Understanding and Using APIs
2.1 Construct a REST API request to accomplish a task given API documentation | 6
2.2 Describe common usage patterns related to webhooks | 6, 7
2.3 Identify the constraints when consuming APIs | 6
2.4 Explain common HTTP response codes associated with REST APIs | 6
2.5 Troubleshoot a problem given the HTTP response code, request, and API documentation | 6
2.6 Identify the parts of an HTTP response (response code, headers, body) | 6
2.7 Utilize common API authentication mechanisms: basic, custom token, and API keys | 6
2.8 Compare common API styles (REST, RPC, synchronous, and asynchronous) | 6
2.9 Construct a Python script that calls a REST API using the requests library | 7
3.0 Cisco Platforms and Development
3.1 Construct a Python script that uses a Cisco SDK given SDK documentation | 8, 9
3.2 Describe the capabilities of Cisco network management platforms and APIs (Meraki, Cisco DNA Center, ACI, Cisco SD-WAN, and NSO) | 8, 9, 15
3.3 Describe the capabilities of Cisco compute management platforms and APIs (UCS Manager, UCS Director, and Intersight) | 9
3.4 Describe the capabilities of Cisco collaboration platforms and APIs (Webex Teams, Webex devices, Cisco Unified Communication Manager including AXL and UDS interfaces, and Finesse) | 10
3.5 Describe the capabilities of Cisco security platforms and APIs (Firepower, Umbrella, AMP, ISE, and ThreatGrid) | 11
3.6 Describe the device level APIs and dynamic interfaces for IOS XE and NX-OS | 12
3.7 Identify the appropriate DevNet resource for a given scenario (Sandbox, Code Exchange, support, forums, Learning Labs, and API documentation) | 7, 8, 9, 10, 11, 12
3.8 Apply concepts of model driven programmability (YANG, RESTCONF, and NETCONF) in a Cisco environment | 12
3.9 Construct code to perform a specific operation based on a set of requirements and given API reference documentation such as these: | 8, 9, 15
3.9.a Obtain a list of network devices by using Meraki, Cisco DNA Center, ACI, Cisco SD-WAN, or NSO | 8, 9, 15
3.9.b Manage spaces, participants, and messages in Webex Teams | 10
3.9.c Obtain a list of clients/hosts seen on a network using Meraki or Cisco DNA Center | 8
4.0 Application Deployment and Security | 13
4.1 Describe benefits of edge computing | 13
4.2 Identify attributes of different application deployment models (private cloud, public cloud, hybrid cloud, and edge) | 13
4.3 Identify the attributes of these application deployment types: | 13
4.3.a Virtual machines | 13
4.3.b Bare metal | 13
4.3.c Containers | 13
4.4 Describe components for a CI/CD pipeline in application deployments | 13, 15
4.5 Construct a Python unit test | 5
4.6 Interpret contents of a Dockerfile | 13
4.7 Utilize Docker images in local developer environment | 13
4.8 Identify application security issues related to secret protection, encryption (storage and transport), and data handling | 14
4.9 Explain how firewall, DNS, load balancers, and reverse proxy in application deployment | 14
4.10 Describe top OWASP threats (such as XSS, SQL injections, and CSRF) | 14
4.11 Utilize Bash commands (file management, directory navigation, and environmental variables) | 2
4.12 Identify the principles of DevOps practices | 13
5.0 Infrastructure and Automation | 15
5.1 Describe the value of model driven programmability for infrastructure automation | 12
5.2 Compare controller-level to device-level management | 15
5.3 Describe the use and roles of network simulation and test tools (such as VIRL and pyATS) | 15
5.4 Describe the components and benefits of CI/CD pipeline in infrastructure automation | 13, 15
5.5 Describe principles of infrastructure as code | 15
5.6 Describe the capabilities of automation tools such as Ansible, Puppet, Chef, and Cisco NSO | 15
5.7 Identify the workflow being automated by a Python script that uses Cisco APIs including ACI, Meraki, Cisco DNA Center, or RESTCONF | 8, 9
5.8 Identify the workflow being automated by an Ansible playbook (management packages, user management related to services, basic service configuration, and start/stop) | 15
5.9 Identify the workflow being automated by a bash script (such as file management, app install, user management, directory navigation) | 2
5.10 Interpret the results of a RESTCONF or NETCONF query | 12
5.11 Interpret basic YANG models | 12
5.12 Interpret a unified diff | 2
5.13 Describe the principles and benefits of a code review process | 2
5.14 Interpret sequence diagram that includes API calls | 7
6.0 Network Fundamentals
6.1 Describe the purpose and usage of MAC addresses and VLANs | 16
6.2 Describe the purpose and usage of IP addresses, routes, subnet mask/prefix, and gateways | 16
6.3 Describe the function of common networking components (such as switches, routers, firewalls, and load balancers) | 16, 17
6.4 Interpret a basic network topology diagram with elements such as switches, routers, firewalls, load balancers, and port values | 16, 17
6.5 Describe the function of management, data, and control planes in a network device | 17
6.6 Describe the functionality of these IP services: DHCP, DNS, NAT, SNMP, NTP | 18
6.7 Recognize common protocol port values (such as, SSH, Telnet, HTTP, HTTPS, and NETCONF) | 12
6.8 Identify cause of application connectivity issues (NAT problem, Transport Port blocked, proxy, and VPN) | 18
6.9 Explain the impacts of network constraints on applications | 18
Each version of the exam can have topics that emphasize different functions or features, and some topics can be rather broad and generalized. The goal of this book is to provide the most comprehensive coverage to ensure that you are well prepared for the exam. Although some chapters might not address specific exam topics, they provide a foundation that is necessary for a clear understanding of important topics. Your short-term goal might be to pass this exam, but your long-term goal should be to become a qualified network developer.

It is also important to understand that this book is a “static” reference, whereas the exam topics are dynamic. Cisco can and does change the topics covered on certification exams often.

This exam guide should not be your only reference when preparing for the certification exam. You can find a wealth of information available at Cisco.com that covers each topic in great detail. If you think that you need more detailed information on a specific topic, read the Cisco documentation that focuses on that topic.

Note that as automation technologies continue to
develop, Cisco reserves the right to change the exam
topics without notice. Although you can refer to the list
of exam topics in Table I-1, always check Cisco.com to
verify the actual list of topics to ensure that you are
prepared before taking the exam. You can view the
current exam topics on any current Cisco certification
exam by visiting the Cisco.com website, choosing Menu,
then Training & Events, and then selecting from the
Certifications list. Note also that, if needed, Cisco Press
might post additional preparatory content on the web
page associated with this book at
http://www.ciscopress.com/title/9780136642961. It’s a
good idea to check the website a couple of weeks before
taking your exam to be sure that you have up-to-date
content.
Figure Credits

Page No | Selection Title | Attribution

Cover | Cover image | Cisco Brand Exchange, Cisco Systems, Inc.

157 | “YAML is a human-friendly data serialization standard for all programming languages.” | YAML Ain’t Markup Language, YAML

165 | Figure 7-12: Postman: HTTP GET from the Postman Echo Server | Screenshot of Postman: HTTP GET from the Postman Echo Server ©2020 Postman, Inc.

166 | Figure 7-13: Postman: HTTP POST to the Postman Echo Server | Screenshot of Postman: HTTP POST to the Postman Echo Server ©2020 Postman, Inc.

166 | Figure 7-14: Postman Collection | Screenshot of Postman Collection ©2020 Postman, Inc.

167 | Figure 7-15: Postman Automatic Code Generation | Screenshot of Postman Automatic Code Generation ©2020 Postman, Inc.

189 | Figure 8-4: Output of the Python Script from Example 8-4 | Screenshot of Output of the Python Script from Example 8-4 ©2020 Postman, Inc.

201 | Figure 8-10: Output of the Python Script from Example 8-7 | Screenshot of Output of the Python Script from Example 8-7 ©2020 Postman, Inc.

211 | Figure 8-15: Output of the Python Script from Example 8-10 | Screenshot of Output of the Python Script from Example 8-10 ©2020 Postman, Inc.

370 | Figure 12-6: Getting the REST API Root Resource | Screenshot of Getting the REST API root resource ©2020 Postman, Inc.

370 | Figure 12-7: Top-Level Resource Available in RESTCONF | Screenshot of Top level resource available in RESTCONF ©2020 Postman, Inc.

371 | Figure 12-8: Getting Interface Statistics with RESTCONF | Screenshot of Getting interface statistics with RESTCONF ©2020 Postman, Inc.

377 | Figure 13-1: NIST Cloud Computing Definition | Source: NIST Cloud Computing Definitions

378 | Figure 13-2: Cloud Service Models | Source: NIST Cloud Computing Service Models

379 | Figure 13-3: Private Cloud | Source: NIST Special Publication 800-146 (May 2012)

380 | Figure 13-4: Public Cloud | Source: NIST Special Publication 800-146 (May 2012)

380 | Figure 13-5: Hybrid Cloud | Source: NIST Special Publication 800-146 (May 2012)

381 | Figure 13-6: Community Cloud | Source: NIST Special Publication 800-146 (May 2012)

395 | Figure 13-20: XebiaLabs Periodic Table of DevOps Tools | Source: https://xebialabs.com/periodic-table-of-devops-tools/

400 | Figure 13-25: Docker Architecture | Source: https://docs.docker.com/introduction/understanding-docker/

416 | Figure 13-30: Kitematic | Screenshot of Kitematic © 2020 Docker Inc

422 | Figure 14-1: NIST Cybersecurity Framework | NIST, CYBERSECURITY FRAMEWORK. U.S. Department of Commerce

512 | “Information system(s) implemented with a collection of interconnected components. Such components may include routers, hubs, cabling, telecommunications controllers, key distribution centers, and technical control devices” | NIST, COMPUTER SECURITY RESOURCE CENTER. NIST SP 800-53 Rev. 4 under Network (CNSSI 4009) CNSSI 4009-2015 (NIST SP 800-53 Rev. 4). U.S. Department of Commerce

529 | “an emerging architecture that is dynamic, manageable, cost-effective, and adaptable, making it ideal for the high-bandwidth, dynamic nature of applications. This architecture decouples the network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services.” | Open Networking Foundation, Copyright © 2020
Chapter 1

Introduction to Cisco DevNet Associate Certification
This chapter covers the following topics:

Why Get Certified: This section covers the benefits and advantages
of becoming Cisco certified.

Cisco Career Certification Overview: This section provides a high-
level overview of the Cisco career certification portfolio.

Cisco DevNet Certifications: This section covers various aspects of
the Cisco Certified DevNet Associate, Professional, and Specialist
certifications and how they fit into the overall Cisco career certification
portfolio.

Cisco DevNet Overview: This section provides an overview of
DevNet, discusses the value DevNet provides to the industry, and
covers the resources available and how to best leverage them.

DO I KNOW THIS ALREADY?

The “Do I Know This Already?” quiz allows you to assess
whether you should read this entire chapter thoroughly.
If you are in doubt about your answers to these questions
or your own assessment of your knowledge of the topics,
read the entire chapter. Table 1-1 lists the major headings
in this chapter and their corresponding “Do I Know This
Already?” quiz questions. You can find the answers in
Appendix A, “Answers to the ‘Do I Know This Already?’
Quiz Questions.”

Table 1-1 “Do I Know This Already?” Foundation
Topics Section-to-Question Mapping

Foundation Topics Section | Questions

Why Get Certified | 2

Cisco Career Certification Overview | 1, 4

Cisco DevNet Certifications | 3

Cisco DevNet Overview | 5

Caution
The goal of self-assessment is to gauge your mastery of
the topics in this chapter. If you do not know the
answer to a question or are only partially sure of the
answer, you should mark that question as wrong for
purposes of self-assessment. Giving yourself credit for
an answer that you correctly guess skews your self-
assessment results and might provide you with a false
sense of security.

1. Which of the following are levels of accreditation for
Cisco certification? (Choose three.)
1. Associate
2. Entry
3. Authority
4. Expert
5. Maestro

2. What are some benefits certification provides for
candidates? (Choose three.)
1. Highlights skills to employer
2. Increases confidence
3. Makes candidate appear smarter than peers
4. Reduces workload
5. Improves credibility

3. What type of exams are necessary to obtain DevNet
Professional certification? (Choose two.)
1. Technology Core exam
2. Lab exam
3. CCT
4. Expert-level written exam
5. Concentration exam
4. True or false: In the new certification model, only a
single exam is required to become CCNA certified.
1. True
2. False

5. Which of the following are part of DevNet? (Choose
all that apply.)
1. Community
2. Technologies
3. Events
4. Cisco Automation Platform
5. Support

FOUNDATION TOPICS
WHY GET CERTIFIED
The IT industry is constantly changing and evolving. As
time goes on, an ever-increasing number of technologies
are putting strain on networks. New paradigms are
formed as others fall out of favor. New advances are
being developed and adopted in the networking realm.
These advances provide faster innovation and the ability
to adopt relevant technologies in a simplified way. We
therefore need more intelligence and the capability to
leverage the data from connected and distributed
environments such as the campus, branch, data center,
and WAN. Data is being used in interesting and more
powerful ways than ever in the past. The following are
some of the advances driving these outcomes:

Artificial intelligence (AI)

Machine learning (ML)

Cloud services

Virtualization

Internet of Things (IoT)

The influx of these technologies is putting strain on IT
operations staff, who are required to do more robust
planning, find relevant use cases, and provide detailed
adoption journey materials for easy consumption. All
these requirements are becoming critical to success.
Another area of importance is the deployment and day-
to-day operations of these technologies as well as how
they fit within the network environment. Some of these
technologies tend to disrupt typical operations and
present challenges in terms of how they will be
consumed by the business. Some advances in technology
are being adopted to reduce cost of operations as well as
complexity. It can be said that every network, to some
degree, has inherent complexity. Having tools to help
manage this burden is becoming a necessity.

Many in the industry are striving for automation to
handle networks as they become more and more
complicated. Businesses are often forced to operate with
lean IT staffs and flat or shrinking budgets; they must
struggle to find ways to increase the output of what the
network can do for the business. Another driver for the
adoption of these technologies is improving the overall
user experience within the environment. Users often
need the flexibility and capability to access any business-
critical application from anywhere in the network and
want to have an exceptional experience. In addition to
trying to improve user experience, operations staff
seek ways to simplify the operations of the network.
There are many inherent risks associated with manually
configuring networks. One risk is not being able to move
fast enough when deploying new applications or services
to the network. In addition, misconfigurations can cause
outages or suboptimal network performance, which can
impact business operations and potentially cause
financial repercussions. Finally, there is risk in that a
business relies on its network for business-critical
services but those services might not be available due to
the IT operations staff not being able to scale to keep up
with the demands of the business.

According to a 2016 Cisco Technical Assistance Center
(TAC) survey, 95% of Cisco customers are performing
configuration and deployment tasks manually in their
networks. The survey also found that 70% of TAC cases
created are related to misconfigurations. This means that
typos or improperly used commands are the culprits in a
majority of issues in the network environment. Dealing
with such issues is where automation shines. Automation
makes it possible to signify the intent of a change that
needs to be made, such as deploying quality of service
across the network, and then having the network
configure it properly and automatically. Automation can
configure services or features with great speed and is a
tremendous value to a business. Simplifying operations
while reducing human error can ultimately reduce risk
and potentially lower complexity.
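As a minimal sketch of the idea, an intent-based change can be expressed as a single declarative request rather than as per-device commands. The payload shape and the apply_intent helper below are invented purely for illustration; they do not represent any real controller API:

```python
import json

def build_qos_intent(profile, scope):
    """Describe WHAT we want (a QoS profile across a scope),
    not HOW each individual device must be configured."""
    return {
        "intent": "apply-qos",
        "profile": profile,   # e.g., a voice-priority template
        "scope": scope,       # e.g., "campus" or "branch"
    }

def apply_intent(intent, dry_run=True):
    """In a real deployment this would POST the intent to a
    controller's REST API; here we only render the request body."""
    body = json.dumps(intent, sort_keys=True)
    if dry_run:
        return f"DRY RUN: would POST {body}"
    raise NotImplementedError("no controller in this example")

print(apply_intent(build_qos_intent("voice-priority", "campus")))
```

The point of the sketch is the division of labor: the operator states the desired outcome once, and translating it into correct per-device configuration becomes the controller's job.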

As a simple analogy, think of an automobile. The reason
most people use an automobile is to meet a specific
desired outcome: to get from point A to point B. An
automobile is operated as a holistic system, not as a
collection of parts that make up that system. For
example, the dashboard provides the user all the
necessary information about how the vehicle is operating
and the current state of the vehicle. To use an
automobile, a driver must take certain operational steps,
such as putting it in gear and then using the system to
get from point A to point B. Figure 1-1 illustrates this
analogy.

Figure 1-1 Automobile as a System

We can think of networks as systems much as we think of
automobiles as systems. For over 30 years, the industry
has thought of a network as a collection of devices such
as routers, switches, and wireless components. The shift
in mindset to look at a network as a holistic system is a
more recent concept that stems from the advent of
network controllers, which split role and functionality
from one another. This is often referred to as separating
the control plane from the data plane. At a high level, the
control plane is where all the instructions on a device live
(for example, the routing protocols that exchange routing
updates). The data plane is where all the user or data
traffic flows (for example, the traffic between users on a
network). Having a controller that sits on top of the rest
of the devices makes it possible to operate the network as
a whole from a centralized management point—much
like operating an automobile from the driver’s seat rather
than trying to manage the automobile from all the pieces
and components of which it is composed. To put this in
more familiar terms, think of the command-line
interface (CLI). The CLI was not designed to make
massive-scale configuration changes to multiple devices
at the same time. Traditional methods of managing and
maintaining the network aren’t sufficient to keep up with
the pace and demands of the networks of today.
Operations staff need to be able to move faster and
simplify all the operations and configurations that have
traditionally gone into networking. Software-defined
networking (SDN) and controller capabilities are
becoming areas of focus in the industry, and they are
evolving to a point where they can address the challenges
faced by IT operations teams. Controllers offer the ability
to manage a network as a system, which means the policy
management can be automated and abstracted. They
provide the capability of supporting dynamic policy
changes rather than requiring manual policy changes
and device-by-device configurations when something
requires a change within the environment.
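The scaling argument above can be sketched in a few lines of Python. Both functions are deliberately simplified stand-ins for the two operational models (no real SSH sessions or controller APIs are involved):

```python
# Device-by-device model: the operator repeats an imperative
# change on every device, and each touch is a chance for a typo.
def configure_device_by_device(devices, commands):
    changes = 0
    for device in devices:
        for command in commands:
            # in real life: open an SSH session and type the command
            changes += 1
    return changes

# Controller model: the operator states one policy and the
# controller fans it out, so the human touches one interface.
def configure_via_controller(devices, policy):
    # in real life: one API call; the controller reaches each device
    return 1  # one human-driven change, regardless of device count

devices = [f"switch-{i}" for i in range(1, 101)]
print(configure_device_by_device(devices, ["cmd-a", "cmd-b"]))
print(configure_via_controller(devices, "qos-voice-priority"))
```

With 100 switches and two commands each, the first model requires 200 manual touches; the second requires one declarative change no matter how many devices are in scope.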

It is important from a career and skills perspective to
adapt to the changes within the industry. Keeping on top
of new skillsets is critical to maintaining relevance in the
industry or job market. Becoming Cisco certified helps
with this for multiple reasons, including the following:
Highlighting skills to employers

Highlighting skills to industry peers

Providing value to employers

Providing credibility

Providing a baseline of understanding

Building confidence

Enabling career advancement

Increasing salary

When pursuing certification, it is imperative to
understand why getting certified is beneficial in the first
place. Many people pursue certifications as a way to
break into the job market. Having certifications can be a
great differentiator in terms of skillset and discipline.
Pursuing a certification may also be valuable from a
continuing education perspective and to bolster the
candidate’s current job role. Pursuing certifications can
also help candidates evolve their skillsets to keep up with
the ever-changing advancements in the technology
industry. As mentioned earlier in this chapter, new
network operations techniques are rapidly becoming the
norm. Automation and programmability are at the center
of this paradigm shift. Becoming certified not only helps
embrace this type of momentum but also prepares the
candidate for the world of tomorrow.

CISCO CAREER CERTIFICATION OVERVIEW
Cisco has evolved the way it offers certifications. In the
past, there were many separate tracks for each discipline,
such as Routing and Switching, Collaboration, Service
Provider, Data Center, and Security. Although there are
still separate disciplines, the number of disciplines has
been greatly reduced, and the process candidates go
through to achieve those certifications has changed
significantly. The traditional path for Routing and
Switching was Cisco Certified Network Associate
(CCNA), Cisco Certified Network Professional (CCNP),
and then Cisco Certified Internetwork Expert (CCIE). In
the past, in order to become CCNP certified, a candidate
had to have previously completed the CCNA
certification and be in current certified status prior to
completing the CCNP. In addition, for the CCIE, a
written qualification exam had to be completed prior to
attempting the CCIE lab exam. However, having CCNA
or CCNP certification was not necessary in order to
pursue the CCIE certification. Today, the certification
process has been greatly simplified. As mentioned
previously, Cisco has evolved the certification structure
and the prerequisites for each track.

There are five levels of accreditation for Cisco career
certifications. The following sections cover how the
process has evolved and ultimately created the ability for
candidates to “choose their own destiny” when it comes
to what certifications and skillsets they want to pursue.
Figure 1-2 shows the pyramid hierarchy used to describe
the five levels of accreditation in Cisco certifications:

Architect

Expert

Professional

Associate

Entry
Figure 1-2 Cisco Career Certification Levels of
Accreditation

Each level of accreditation has multiple certifications and
exams that pertain to that specific level. The higher the
level, the more skill and rigorous hands-on experience
are required. For example, the CCIE lab exam is
experience based, and completing it requires hands-on
expertise. In addition to the five tiers of the pyramid,
there are also specialist certifications that candidates can
achieve in specific technologies to showcase their
knowledge and base level of understanding. (These
specialist certifications are covered later in this chapter.)
Simplifying the certification portfolio reduced the
number of certifications available in general and also
improved the process of achieving these certifications.
Table 1-2 lists some of the certifications and tracks that
were available in the past, as they relate to the five-level
pyramid. You can see that there were a tremendous
number of options available for each track.

Table 1-2 Cisco Career Certification Tracks Prior to
Restructuring

Entry: Cisco Certified Entry Networking Technician (CCENT); Cisco Certified Technician (CCT)

Associate: Cisco Certified Design Associate (CCDA); CCNA Cloud; CCNA Collaboration; CCNA Cyber Ops; CCNA Data Center; CCNA Industrial; CCNA Routing and Switching; CCNA Security; CCNA Service Provider; CCNA Wireless

Professional: Cisco Certified Design Professional (CCDP); CCNP Cloud; CCNP Collaboration; CCNP Data Center; CCNP Routing and Switching; CCNP Security; CCNP Service Provider; CCNP Wireless

Expert: Cisco Certified Design Expert (CCDE); CCIE Collaboration; CCIE Data Center; CCIE Routing and Switching; CCIE Security; CCIE Service Provider; CCIE Wireless

Architect: Cisco Certified Architect (CCAr)

Table 1-3 shows the new and simplified certification
portfolio and how the certifications now fit into each
level of the pyramid. You can see that there has been a
significant reduction in the number of available
certifications, and there is now a succinct path to follow
through these certifications.

Table 1-3 Cisco Career Certification Tracks After
Restructuring

Entry: Cisco Certified Technician (CCT)

Associate: DevNet Associate; CCNA

Professional: DevNet Professional; CCNP Enterprise; CCNP Collaboration; CCNP Data Center; CCNP Security; CCNP Service Provider

Expert: Cisco Certified Design Expert (CCDE); DevNet Expert (TBA); CCIE Enterprise Infrastructure; CCIE Enterprise Wireless; CCIE Collaboration; CCIE Data Center; CCIE Security; CCIE Service Provider

Architect: Cisco Certified Architect (CCAr)

As changes were being made in the certification
portfolio, some certifications were completely removed.
Table 1-3 shows that there is now only a single CCNA
certification. Prior to this change, there were nine CCNA
certifications, and multiple exams had to be completed in
order to become a CCNA in any of the tracks. Now with
the new CCNA, a candidate need pass only a single exam
to become CCNA certified. An additional certification
that was removed was the CCENT. Now that the CCNA is
a broader exam and covers many introductory-level
topics, the CCENT topics have been absorbed into the
new CCNA. Furthermore, the CCDA and CCDP
certifications were retired as that design information has
been incorporated into other certifications within each
track, and separate certifications are no longer required
for the Associate and Professional levels of design
knowledge.

The CCNP has changed significantly as well. Previously,
for example, the CCNP Routing and Switching certification
consisted of three exams:

300-101 ROUTE

300-115 SWITCH
300-135 TSHOOT

A candidate would have to successfully pass all three of
these exams as well as the CCNA in the same track in
order to become CCNP Routing and Switching certified.
Today, only two exams are required in order to become
CCNP Enterprise certified. Candidates can now start
wherever they want; there are no prerequisites, and a
candidate can start earning any level of certification—
even Associate, Specialist, Professional, or Expert level
certification. For the CCNP Enterprise certification, the
first exam is the 350-401 ENCOR exam, which covers
core technologies in enterprise networks, including the
following:

Dual-stack (IPv4 and IPv6) architecture

Virtualization

Infrastructure

Network assurance

Security

Automation

Once the ENCOR exam is completed, a concentration
exam must be taken. This is perhaps the most important
and fundamental change made to the CCNP. The
available concentration exams include a variety of
different technology specialties and allow candidates to
build their own certification (or “choose their own
destiny”). Each CCNP track has its own core exam and
concentrations.

Cisco made a number of changes to the specialist
certifications, which allow candidates to get certified in
specific areas of expertise. For example, a candidate who
is proficient at Cisco Firepower can pursue a specialist
certification for Firepower. The specialist certifications
are important because candidates, especially consultants,
often have to use many different technologies in many
different customer environments. Specialist
certifications can help show a candidate’s ability to work
on a variety of projects. They also help build credibility
on a plethora of different platforms and technologies.
For example, a candidate looking to focus on routing and
Cisco SD-WAN could take the CCNP 350-401 ENCOR
exam and then take the 300-415 ENSDWI concentration
exam to become a CCNP with an SD-WAN specialty. In
essence, the concentration exams are the new specialist
exams, and a candidate can simply take a single
specialist exam and become certified in that technology
(for example, the 300-710 SNCF exam for certification in
network security and Firepower).

Table 1-4 lists and describes the different types of CCNP
concentration and specialist exams currently available.

Table 1-4 CCNP Core and Concentration Exams

Track | Description | Exam

Enterprise | Cisco Certified Specialist–Enterprise Core | 350-401 ENCOR

Enterprise | Cisco Certified Specialist–Enterprise Advanced Infrastructure Implementation | 300-410 ENARSI

Enterprise | Cisco Certified Specialist–Enterprise SD-WAN Implementation | 300-415 ENSDWI

Enterprise | Cisco Certified Specialist–Enterprise Design | 300-420 ENSLD

Enterprise | Cisco Certified Specialist–Enterprise Wireless Design | 300-425 ENWLSD

Enterprise | Cisco Certified Specialist–Enterprise Wireless Implementation | 300-430 ENWLSI

Data Center | Cisco Certified Specialist–Data Center Core | 350-601 DCCOR

Data Center | Cisco Certified Specialist–Data Center Design | 300-610 DCID

Data Center | Cisco Certified Specialist–Data Center Operations | 300-615 DCIT

Data Center | Cisco Certified Specialist–Data Center ACI Implementation | 300-620 DCACI

Data Center | Cisco Certified Specialist–Data Center SAN Implementation | 300-625 DCSAN

Security | Cisco Certified Specialist–Security Core | 350-701 SCOR

Security | Cisco Certified Specialist–Network Security Firepower | 300-710 SNCF

Security | Cisco Certified Specialist–Network Security VPN Implementation | 300-730 SVPN

Security | Cisco Certified Specialist–Email Content Security | 300-720 SESA

Security | Cisco Certified Specialist–Web Content Security | 300-725 SWSA

Security | Cisco Certified Specialist–Security Identity Management Implementation | 300-715 SISE

Service Provider | Cisco Certified Specialist–Service Provider Core | 350-501 SPCOR

Service Provider | Cisco Certified Specialist–Service Provider Advanced Routing Implementation | 300-510 SPRI

Service Provider | Cisco Certified Specialist–Service Provider VPN Services Implementation | 300-515 SPVI

Collaboration | Cisco Certified Specialist–Collaboration Core | 350-801 CLCOR

Collaboration | Cisco Certified Specialist–Collaboration Applications Implementation | 300-810 CLICA

Collaboration | Cisco Certified Specialist–Collaboration Call Control & Mobility Implementation | 300-815 CLACCM

Collaboration | Cisco Certified Specialist–Collaboration Cloud & Edge Implementation | 300-820 CLCEI

DevNet, Enterprise | Cisco Certified DevNet Specialist–Enterprise Automation and Programmability | 300-435 ENAUTO

DevNet, Data Center | Cisco Certified DevNet Specialist–Data Center Automation and Programmability | 300-635 DCAUTO

DevNet, Security | Cisco Certified DevNet Specialist–Security Automation and Programmability | 300-735 SAUTO

DevNet, Service Provider | Cisco Certified DevNet Specialist–Service Provider Automation and Programmability | 300-535 SPAUTO

DevNet, Collaboration | Cisco Certified DevNet Specialist–Collaboration Automation and Programmability | 300-835 CLAUTO

DevNet | Cisco Certified DevNet Specialist–Core | 350-901 DEVCOR

DevNet | Cisco Certified DevNet Specialist–DevOps | 300-910 DEVOPS

DevNet | Cisco Certified DevNet Specialist–IoT | 300-915 DEVIOT

DevNet | Cisco Certified DevNet Specialist–Webex | 300-920 DEVWBX

Note
The exams listed in Table 1-4 were available at the time
of publication. Please visit
http://www.cisco.com/go/certifications to keep up on
all the latest available certifications and associated
tracks.

In addition to the Associate- and Professional-level
certifications, the Cisco certified specialist certifications
have changed as well. Previously, in some cases multiple
exams had to be completed to become certified as a
specialist in a specific topic or discipline. With the new
changes, however, candidates can take and complete any
one of the specialist exams mentioned in Table 1-4 to
become certified in that technology area. For example, a
candidate who is proficient with Cisco Identity Services
Engine (ISE) could pursue a specialist certification for
security identity management implementation by taking
the 300-715 SISE exam.

Another major change to the certification program
involves the flagship CCIE program.
The CCIE Routing and Switching certification and the
CCIE Wireless certification have both been rebranded as
CCIE Enterprise certifications: CCIE Routing and
Switching became CCIE Enterprise Infrastructure, and
CCIE Wireless became CCIE Enterprise Wireless. The
goal of this change was to align the certifications with the
current technologies that candidates are seeing in their
work environments as well as the industry trends that
are changing the way networking is being consumed and
managed. As mentioned earlier in this chapter, software-
defined networking, automation, programmability, IoT,
and other trends are drastically changing the approach
network operations teams are taking to networking in
general. The business outcomes and use case–driven
adoption of these new technologies are shaping the
industry, as are the approaches vendors are taking to
building and designing their products. User experience
as well as adoption are now critical and are top-of-mind
priority topics for many customers. Cisco therefore
wanted to align its career certification portfolio with
what candidates and the industry are seeing in their
networking environments. For all other Expert-level
certifications, there are currently only the following
specialties:

Cisco Certified Design Expert (CCDE)

CCIE Enterprise Infrastructure

CCIE Enterprise Wireless

CCIE Collaboration

CCIE Data Center

CCIE Security

CCIE Service Provider

CISCO DEVNET CERTIFICATIONS

The following sections provide an overview of the new
Cisco DevNet certifications. They also explain the
skillsets necessary to achieve these new certifications. As
you will see, there are many different options available
for candidates to pursue.

Note
The DevNet Expert certification will be announced in
the future. Please visit
http://www.cisco.com/go/certifications to keep up on
all the latest available certifications and associated
tracks.

Cisco Certified DevNet Associate Certification
(DEVASC)
Considering everything covered up to this point in the
chapter and the main focus of this book, this section
covers the Cisco DevNet Associate certification at a high
level. Although there was previously a very broad and
robust Cisco career certification portfolio that was long
established and well known, it had a gap—and that gap
was becoming more and more noticeable with the
changes that were happening in the industry, such as the
need for automation in the network environment across
all tracks and areas of the network, ranging from
enterprise and data center networks to large-scale
service provider networks. Today, all areas of the
business must work together, and it is important to
remove the silos that once compartmentalized different
departments. Applications are being instantiated at
speeds that have never been seen before. Furthermore,
with user experience becoming the benchmark for how a
business measures success, it is paramount to deploy
applications, network services, and security in an agile,
consistent, and repeatable manner. Much like the CCNA,
the DevNet Associate certification requires only a single
exam. The DevNet Associate certification covers multiple
knowledge domains, as shown in Figure 1-3.

Figure 1-3 Cisco DevNet Associate Knowledge
Domains

It is recommended that candidates attempting the Cisco
DevNet Associate exam have at least one year of
experience developing and maintaining applications
built on top of Cisco platforms. In addition, they must
have hands-on experience with programming languages
such as Python. This certification was designed for early-
in-career developers and for experienced network
engineers looking to expand their skillsets to include
software and automation practices. It is important to
note that the line is blurring between network engineers
and developers. The two skillsets are slowly merging, and
candidates are becoming “network developers.” Having a
certification like the DevNet Associate can open doors for
candidates to approach new job roles that didn’t
necessarily exist in the past, including the following:

Junior or entry-level DevOps engineer

Cloud developer

Automation engineer

When pursuing any certification, a candidate should
remember the reason the certification is important in the
first place. Certifications can help expand a candidate’s
current skillset as well as ensure a baseline level of
knowledge around specific topics. The Cisco DevNet
Associate certification can help businesses find
individuals who possess a certain level of
programmability or automation skills. It gives businesses
a clear way to determine the base level of skills when
looking at hiring a candidate and ensure that new hires
have the necessary relevant skills. The upcoming
chapters of this book align directly with the DevNet
Associate blueprint. This book covers the topics
necessary to build a foundational level of understanding
for candidates to feel comfortable in pursuing the Cisco
DevNet Associate certification.

Cisco Certified DevNet Professional Certification

The next certification in the path after Cisco DevNet
Associate would be the Cisco DevNet Professional. This
more robust certification requires a more advanced
skillset. Figure 1-4 illustrates some of the high-level
requirements for this certification and their associated
topic domains.
Figure 1-4 Cisco DevNet Professional Knowledge
Domains

It is recommended that candidates attempting the Cisco
DevNet Professional exam have a minimum of three to
five years of experience designing and implementing
applications built on top of Cisco platforms. It is also
critical that they have hands-on experience with
programming languages such as Python. This
certification was designed for experienced network
engineers looking to expand their capabilities and
include software and automation on their resume. It is
also designed for developers moving into automation
and DevOps roles as well as for solution architects who
leverage the Cisco ecosystem. Infrastructure developers
designing hardened production environments will also
benefit from the Cisco DevNet Professional certification.
Because the DevNet Professional provides many avenues
for a candidate to create a unique journey, it is one of the
most eagerly anticipated certifications and will be
integral to aligning candidates’ skillsets with their daily
job tasks.

Table 1-5 lists the DevNet concentration and specialist
exams currently available.

Table 1-5 DevNet Concentration and Specialist Exams

Track | Description | Specialist Exam
DevNet, Enterprise | Cisco Certified DevNet Specialist–Enterprise Automation and Programmability | 300-435 ENAUTO
DevNet, Data Center | Cisco Certified DevNet Specialist–Data Center Automation and Programmability | 300-635 DCAUTO
DevNet, Security | Cisco Certified DevNet Specialist–Security Automation and Programmability | 300-735 SAUTO
DevNet, Service Provider | Cisco Certified DevNet Specialist–Service Provider Automation and Programmability | 300-535 SPAUTO
DevNet, Collaboration | Cisco Certified DevNet Specialist–Collaboration Automation and Programmability | 300-835 CLAUTO
DevNet Core | Cisco Certified DevNet Specialist–Core | 350-901 DEVCOR
DevNet DevOps | Cisco Certified DevNet Specialist–DevOps | 300-910 DEVOPS
DevNet IoT | Cisco Certified DevNet Specialist–IoT | 300-915 DEVIOT
DevNet Webex | Cisco Certified DevNet Specialist–Webex | 300-920 DEVWBX
You might notice that some of the specializations listed
in Table 1-5 were listed earlier in this chapter for the
CCNP exams as well. This is because they can be used for
both the CCNP exams and the DevNet Professional
exams. In addition, the DevNet Specialist exams can be
taken independently for Cisco Certified DevNet
Specialist certification. This is similar to the CCNP
concentration exams covered earlier. Figure 1-5
illustrates the entire Cisco career certification structure.
As you can see, regardless of what track a candidate
decides to pursue—whether it’s a Specialist or a
Professional level—it is possible to choose a variety of
DevNet skills as part of the journey.

Figure 1-5 Cisco Career Certification Overview

Note
The DevNet Expert certification is a planned offering
that was not available as this book went to press.

CISCO DEVNET OVERVIEW


This section looks at the tools and resources available for
Cisco DevNet certification candidates. These tools help
candidates to learn, practice, and share ideas as well as
experience.

The examples and tools discussed in this chapter are all
available to use and practice at
http://developer.cisco.com, which is the home for Cisco
DevNet. This site provides a single place for network
operators to go when looking to enhance or increase
their skills with APIs, coding, Python, or even controller
concepts. DevNet makes it easy to find learning labs and
content to help build or solidify current knowledge in
network programmability. Whether a candidate is just
getting started or a seasoned programmatic professional,
DevNet is the place to be! This section provides a high-
level overview of DevNet. It describes the different
sections of DevNet, some of the labs available, and other
content that is available. Figure 1-6 shows the DevNet
main page.

Figure 1-6 DevNet Main Page

Across the top of the main DevNet page, you can see the
following menu options:

Discover

Technologies

Community

Support

Events

The following sections cover these menu options
individually.

Discover
The Discover page shows the different offerings that
DevNet has available. This page includes the subsection
Learning Tracks; the learning tracks on this page guide
you through various technologies and
associated API labs. Some of the available labs are
Programming the Cisco Digital Network Architecture
(DNA), ACI Programmability, Getting Started with Cisco
WebEx Teams APIs, and Introduction to DevNet.

When you choose a learning lab and start a module,
DevNet tracks all your progress and allows you to go
away and then come back and continue where you left
off. This is an excellent feature if you are continuing your
education over the course of multiple days or weeks.
Being able to keep track of your progress means you can
easily see what you have already learned and also
determine what might be the next logical step in your
learning journey.

Technologies
The Technologies page allows you to pick relevant
content based on the technology you want to study and
dive directly into the associated labs and training for that
technology. Figure 1-7 shows some of the networking
content that is currently available in DevNet.

Figure 1-7 DevNet Technologies Page

Note
Available labs may differ from those shown in this
chapter’s figures. Please visit
http://developer.cisco.com to see the latest content
available and to interact with the current learning labs.

Community
Perhaps one of the most important sections of DevNet is
the Community page, where you have access to many
different people at various stages of learning. You can
find DevNet ambassadors and evangelists to help at
various stages of your learning journey. The Community
page puts the latest events and news at your fingertips.
This is also the place to read blogs, sign up for developer
forums, and follow DevNet on all major social media
platforms. This is the safe zone for asking any questions,
regardless of how simple or complex they might seem.
Everyone has to start somewhere. The DevNet
Community page is the place to start for all things Cisco
and network programmability. Figure 1-8 shows some of
the options currently available on the Community page.

Figure 1-8 DevNet Community Page

Support
On the DevNet Support page you can post questions and
get answers from some of the best in the industry.
Technology-focused professionals are available to answer
questions from both technical and theoretical
perspectives. You can ask questions about specific labs or
overarching technologies, such as Python or YANG
models. You can also open a case with the DevNet
Support team, and your questions will be tracked and
answered in a minimal amount of time. This is a great
place to ask the Support team questions and to tap into
the expertise of the Support team engineers. Figure 1-9
shows the DevNet Support page, where you can open a
case. Being familiar with the options available from a
support perspective is key to understanding the types of
information the engineers can help provide.

Figure 1-9 DevNet Support Page

Events
The Events page, shown in Figure 1-10, provides a list of
all events that have happened in the past and will be
happening in the future. This is where you can find the
upcoming DevNet Express events as well as any
conferences where DevNet will be present or
participating. Be sure to bookmark this page if you plan
on attending any live events. DevNet Express is a one- to
three-day event led by Cisco developers for both
customers and partners. Attending one of these events
can help you with peer learning and confidence as well as
with honing your development skills.

Figure 1-10 DevNet Events Page


Note
Keep in mind that the schedule shown in Figure 1-10
will differ from the schedule you see when you read
this chapter.

DevNet gives customers the opportunity to learn modern
development and engineering skills and also get hands-
on experience with them in a safe environment. DevNet
Express offers foundational development skills training
to expose attendees to the latest languages and tools
available. Once the foundational skills have been
covered, specific learning tracks or technology-specific
modules are then covered so attendees can apply their
newly learned skills to working with APIs on Cisco
products. These events are guided, which helps ensure
that attendees have the support they need to get started
in the world of APIs and programmability.

DevNet Automation Exchange


DevNet Automation Exchange makes code available for
consumption. This code is based on consumable use
cases; that is, use case–specific solutions have been
uploaded by various developers and are designed to
accomplish particular business outcomes. For example,
one solution may contain the steps to fully automate the
provisioning of devices in Cisco DNA Center, and
another may make it possible to deploy a fabric; the use
case for both solutions might be to
increase the speed of onboarding new site locations or
improve the user experience for mobile users moving
from one area of a campus network to another,
regardless of whether they are connected via wire or
wirelessly. The use cases in the DevNet Automation
Exchange are divided into three categories:

Walk

Run

Fly
Figure 1-11 shows the landing page for the DevNet
Automation Exchange. You can see that you can view the
use case library as well as share any use cases that you
have created.

Figure 1-11 DevNet Automation Exchange

When searching the use case library, you can search
using the Walk, Run, or Fly categories as well as by type
of use case. In addition, you can find use cases based on
the automation lifecycle stage or the place in the
network, such as data center, campus, or collaboration.
Finally, you can simply choose the product for which you
want to find use cases, such as IOS XE, Cisco DNA
Center, or ACI (see Figure 1-12).

Figure 1-12 DevNet Automation Exchange Use Case Library

The Walk category allows you to gain visibility and
insights into your network. You can find various projects
involving gathering telemetry and insight data in a read-
only fashion. These projects can provide auditing
capabilities to ensure the network’s security and
compliance. Because the projects are read-only,
gathering the information has minimal risk of impacting
a network negatively. You could, for example, use
programmability to do a side-by-side configuration
comparison to see what has changed in the configuration
on a device. Using tools like this would be the next step
past using the DevNet sandboxes to write code in a
production environment.

The Run category in Automation Exchange is where
read/write actions start taking place in the network environment,
such as when a network operations team begins to
activate policies and signify intent across different
network domains. These types of projects can also allow
for self-service network operations and ensure
compliance with security policies and operational
standards. Automation tools are key to ensuring
consistency and simplicity in day-to-day operations.

Finally, the Fly category is for proactively managing
applications, users, and devices by leveraging a DevOps
workflow. With such projects, you can deploy
applications using continuous integration and delivery
(CI/CD) pipelines while at the same time configuring the
network and keeping consistent application policies. By
combining machine learning capabilities with
automation, a business can shift from a reactive
application development approach to a more holistic
proactive approach—which lowers risk and increases
agility. Each of the Automation Exchange use cases
adheres to the automation lifecycle, which consists of
Day 0–2 operations. Table 1-6 lists the functions of the
automation lifecycle.

Table 1-6 Automation Lifecycle

Day | Function | Description
0 | Install | Bringing devices into an initial operational state
1 | Configure | Applying configurations to devices
2 | Optimize | Implementing dynamic services, optimizing network behavior, and troubleshooting issues
N | Manage | Ensuring consistent and continuous operation of the network, with reduced risk and human error

SUMMARY
This chapter provides a high-level overview of Cisco’s
career certifications and how candidates can choose their
own destiny by picking the areas where they want to
build experience and become certified. This chapter
describes Cisco’s new specialist exams, which focus on
many different technologies, such as Firepower, SD-
WAN, and IoT. This chapter also discusses some of the
benefits of becoming certified, from career advancement
to building confidence to commanding a higher salary in
the workplace. This chapter also details at a high level
the components of Cisco DevNet, the value of the DevNet
community, and DevNet events such as Cisco DevNet
Express. Finally, this chapter introduces DevNet tools
such as DevNet Automation Exchange and DevNet
learning labs.
Chapter 2

Software Development and Design
This chapter covers the following topics:

Software Development Lifecycle: This section covers the Software
Development Lifecycle (SDLC) and some of the most popular SDLC
models, including Waterfall, Agile, and Lean.

Common Design Patterns: This section covers common software
design patterns, including the Model-View-Controller (MVC) and
Observer design patterns.

Linux BASH: This section covers key aspects of the Linux BASH shell
and how to use it.

Software Version Control: This section covers the use of version
control systems in software development.

Git: This section discusses the use of the Git version control system.

Conducting Code Review: This section discusses using peer review
to check the quality of software.

Are you a software developer? This has become an
existential question for traditional network engineers, as
programming and automation have become more
pervasive in the industry. Professional programmers
have picked up software integration with infrastructure
gear as a new service or capability they can add to their
applications. Traditional infrastructure engineers are
being expected to know how to use APIs and automation
tool sets to achieve more agility and speed in IT
operations. The bottom line is that we are all being asked
to pick up new skills to accomplish the business goals
that keep us relevant and employed. This chapter
discusses a few of the fundamental principles and tools
that modern software development requires. You will
learn about Agile, Lean, and common software design
patterns that are used to enable a whole new operational
model. In addition, you will see the importance of
version control and how to use Git to collaborate with
others and share your work with the world. These core
concepts are essential to understanding the influence of
software development methodologies as they pertain to
infrastructure automation.

“DO I KNOW THIS ALREADY?” QUIZ


The “Do I Know This Already?” quiz allows you to assess
whether you should read this entire chapter thoroughly
or jump to the “Exam Preparation Tasks” section. If you
are in doubt about your answers to these questions or
your own assessment of your knowledge of the topics,
read the entire chapter. Table 2-1 lists the major
headings in this chapter and their corresponding “Do I
Know This Already?” quiz questions. You can find the
answers in Appendix A, “Answers to the ‘Do I Know This
Already?’ Quiz Questions.”

Table 2-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section | Questions
Software Development Lifecycle | 1, 2
Common Design Patterns | 3, 4
Linux BASH | 5, 6
Software Version Control | 7
Git | 8–10
Conducting Code Review | 11


Caution
The goal of self-assessment is to gauge your mastery of
the topics in this chapter. If you do not know the
answer to a question or are only partially sure of the
answer, you should mark that question as wrong for
purposes of self-assessment. Giving yourself credit for
an answer that you correctly guess skews your self-
assessment results and might provide you with a false
sense of security.

1. What is Waterfall?
1. A description of how blame flows from management on failed
software projects
2. A type of SDLC
3. A serial approach to software development that relies on a fixed
scope
4. All of the above

2. What is Agile?
1. A form of project management for Lean
2. An implementation of Lean for software development
3. A strategy for passing the CCNA DevNet exam
4. A key benefit of automation in infrastructure

3. The Model-View-Controller pattern is often used in which of the following applications? (Choose three.)
1. Web applications with graphical interfaces
2. Client/server applications with multiple client types
3. PowerShell scripts
4. Django

4. Which of the following are true of the Observer pattern? (Choose two.)
1. It is a publisher/subscriber pattern.
2. It is a multicast pattern.
3. It is used for configuration management applications and event
handling.
4. It is not commonly used in infrastructure application design.

5. What does BASH stand for?
1. Born Again Shell
2. Basic Shell
3. Bourne Again Shell
4. None of the above
6. Which of the following is the best option for
displaying your current environment variables?
1. env | cat >env.txt
2. env | more
3. export $ENV | cat
4. echo env

7. Which of the following is true of software version control?
1. It is a naming convention for software releases.
2. It is also known as source code management.
3. It is the same thing as BitKeeper.
4. None of the above are true.

8. Who created Git?
1. Junio Hamano
2. Marc Andreessen
3. John Chambers
4. Linus Torvalds

9. What are the three main structures tracked by Git?
1. Index, head, and local repo
2. Local workspace, index, and local repository
3. Remote repository, head, and local index
4. None of the above

10. What command do you use to add the specific filename file.py to the Git index?
1. git add .
2. git index file.py
3. git index add .
4. git add file.py

11. Which is one of the key benefits of conducting a code review?
1. It helps you create higher-quality software
2. You can find weak programmers who need more training
3. It can be used to identify defects in software that are obvious
4. None of the above

FOUNDATION TOPICS
SOFTWARE DEVELOPMENT
LIFECYCLE
Anyone can program. Once you learn a programming
language’s syntax, it’s just a matter of slapping it all
together to make your application do what you want it to
do, right? The reality is, software needs to be built using
a structure to give it sustainability, manageability, and
coherency. You may have heard the phrase “cowboy
coding” to refer to an unstructured software project,
where there is little formal design work, and the
programmer just sort of “shoots from the hip” and slaps
code in with little or no forethought. This is a path that
leads straight to late-night support calls and constant
bug scrubbing. Heaven forbid if you inherit a ball of
spaghetti like this and you are asked to try to fix, extend,
or modernize it. You will more than likely be updating
your resume or packing your parachute for a quick
escape.

To prevent problems from slapdash approaches such as
cowboy coding, disciplines such as architecture and
construction establish rules and standards that govern
the process of building. In the world of software, the
Software Development Lifecycle (SDLC) provides sanity
by providing guidance on building sustainable software
packages. SDLC lays out a plan for building, fixing,
replacing, and making alterations to software.

As shown in Figure 2-1, these are the stages of the SDLC:

Stage 1—Planning: Identify the current use case or problem the
software is intended to solve. Get input from stakeholders, end users,
and experts to determine what success looks like. This stage is also
known as requirements analysis.

Stage 2—Defining: This stage involves analyzing the functional
specifications of the software—basically defining what the software is
supposed to do.

Stage 3—Designing: In this phase, you turn the software
specifications into a design specification. This is a critical stage, as
stakeholders need to be in agreement in order to build the software
appropriately; if they aren't, users won't be happy, and the project will
not be successful.

Stage 4—Building: Once the software design specification is
complete, the programmers get to work on making it a reality. If the
previous stages are completed successfully, this stage is often
considered the easy part.

Stage 5—Testing: Does the software work as expected? In this stage,
the programmers check for bugs and defects. The software is
continually examined and tested until it successfully meets the original
software specifications.

Stage 6—Deployment: During this stage, the software is put into
production for the end users to put it through its paces. Deployment is
often initially done in a limited way to do any final tweaking or detect
any missed bugs. Once the user has accepted the software and it is in
full production, this stage morphs into maintenance, where bug fixes
and software tweaks or smaller changes are made at the request of the
business user.

Figure 2-1 Software Development Lifecycle

Note
ISO/IEC 12207 is the international standard for
software lifecycle processes, and there are numerous
organizations around the globe that use it for
certification of their software development efforts. It is
compatible with any SDLC models and augments them
from a quality and process assurance standpoint. It
does not, however, replace your chosen SDLC model.

There are quite a few SDLC models that further refine
the generic process just described. They all use the same
core concepts but vary in terms of implementation and
utility for different projects and teams. The following are
some of the most popular SDLC models:

Waterfall

Lean

Agile

Iterative model

Spiral model

V model

Big Bang model

Prototyping models

Luckily, you don’t need to know all of these for the 200-
901 DevNet Associate DEVASC exam. The following
sections cover the ones you should know most about:
Waterfall, Lean, and Agile.

Waterfall

Back in the 1950s, when large companies started to
purchase large mainframe computers to crunch
numbers, no one really knew how to run an IT
organization. It really wasn’t anything that had been
done before, and for the most part, computers were only
really understood by an elite group of scientists.
Programming a mainframe required structure and a
process. This caused a problem for businesses looking to
tap into the capabilities of these new systems since there
wasn’t a well-known method to create business
applications. So they looked around at other industries
for guidance.
The construction industry was booming at the time. It
followed a rigid process in which
every step along the way was dependent on the
completion of the previous step in the process. If you
want to end up with a building that stays standing and
meets the original design, you can’t start construction
until you have a plan and analyze the requirements for
the building. This thought process mapped nicely to
software development, and the complexity of designing
and constructing a building was similar to that of
creating software applications. Waterfall, which is based
on the construction industry approach, became one of
the most popular SDLC approaches.

As illustrated in Figure 2-2, Waterfall is a serial approach
to software development that is divided into phases:

Requirements/analysis: Software features and functionality needs
are cataloged and assessed to determine the necessary capabilities of
the software.

Design: The software architecture is defined and documented.

Coding: Software coding begins, based on the previously determined
design.

Testing: The completed code is tested for quality and customer
acceptance.

Maintenance: Bug fixes and patches are applied.


Figure 2-2 Waterfall

While this approach has worked successfully over the
years, a number of shortcomings in this approach have
become apparent. First, the serial nature of
Waterfall, while easy to understand, means that the
scope of a software project is fixed at the design phase. In
construction, making changes to the first floor of a
building after you have begun the fifth floor is extremely
difficult—and may even be impossible unless you knock
down the building and start from scratch. In essence, the
Waterfall approach does not handle change well at all.
When you finally get to the coding phase of the
application development process, you might learn that
the feature you are building isn’t needed anymore or
discover a new way of accomplishing a design goal;
however, you cannot deviate from the predetermined
architecture without redoing the analysis and design.
Unfortunately, it is often more painful to start over than
to keep building. It is similar to being stuck building a
bridge over a river that no one needs anymore.

The second aspect of Waterfall that is challenging is that
value is not achieved until the end of the whole process.
We write software to automate some business function
or capability—and value is only realized when the
software is in production and producing results. With the
Waterfall approach, even if you are halfway done with a
project, you still have no usable code or value to show to
the business. Figure 2-3 shows this concept.
Figure 2-3 The Value Problem of Waterfall

The third aspect of Waterfall that is challenging is
quality. As mentioned earlier, time is the enemy when it
comes to delivering value. If we had unlimited time, we
could create perfect software every time, but we simply
don’t live in that world. When software developers run
out of time, testing often suffers or is sacrificed in the
name of getting the project out the door.

The three challenges for Waterfall led to the
development of a new way of creating software that was
faster, better, and more adaptive to a rapidly changing
environment.

Lean

After World War II, Japan was in desperate need of
rebuilding. Most of Japan’s production capabilities had
been destroyed, including those in the auto industry.
When Japan tackled this rebuilding, it didn’t concentrate
on only the buildings and infrastructure; it looked at
ways to do things differently. Out of this effort, the
Toyota Production System (TPS) was born. Created by
Taiichi Ohno and Sakichi Toyoda (founder of Toyota),
this management and manufacturing process focuses on
the following important concepts:

Elimination of waste: If something doesn't add value to the final
product, get rid of it. There is no room for wasted work.

Just-in-time: Don't build something until the customer is ready to
buy it. Excess inventory wastes resources.

Continuous improvement (Kaizen): Always improve your
processes with lessons learned and communication.

While these concepts seem glaringly obvious and
practical, TPS was the first implementation of these
principles as a management philosophy. TPS was the
start of the more generalized Lean manufacturing
approach that was introduced to the Western world in
1991 through a book written by Womack, Jones, and
Roos, The Machine That Changed the World. This book
was based on a five-year study MIT conducted on TPS,
and it has been credited with bringing Lean concepts and
processes beyond the auto industry.

Why spend this time talking about moldy old
management books? Lean led to Agile software
development, which has served as a lightning rod of
change for IT operations.

Agile

Agile is an application of Lean principles to software
development. With Agile, all the lessons learned in
optimizing manufacturing processes have been applied
to the discipline of creating software. In 2001, 17
software developers converged on the Snowbird resort in
Utah to discuss new lightweight development methods.
Tired of missing deadlines, endless documentation, and
the inflexibility of existing software development
practices, these Agile pioneers created the “Manifesto for
Agile Software Development,” which codifies the guiding
principles for Agile development practices. The following
12 principles are the core of the Agile Manifesto:

Customer satisfaction is provided through early and continuous
delivery of valuable software.

Changing requirements, even in late development, are welcome.

Working software is delivered frequently (in weeks rather than
months).

The process depends on close, daily cooperation between business
stakeholders and developers.

Projects are built around motivated individuals, who should be trusted.

Face-to-face conversation is the best form of communication (co-
location).

Working software is the principal measure of progress.

Sustainable development requires being able to maintain a constant
pace.

Continuous attention is paid to technical excellence and good design.

Simplicity—the art of maximizing the amount of work not done—is
essential.

The best architectures, requirements, and designs emerge from self-
organizing teams.

A team regularly reflects on how to become more effective and adjusts
accordingly.

These core tenets were the main spark of the Agile
movement. Mary Poppendieck and Tom Poppendieck
wrote Lean Software Development: An Agile Toolkit in
2003, based on the principles of the Agile Manifesto and
their many years of experience developing software. This
book is still considered one of the best on the practical
uses of Agile.

Developing software through Agile results in very
different output than the slow serial manner used with
Waterfall. With Waterfall, a project is not “finished” and
deployable until the very end. With Agile, the time frame
is changed: Agile uses smaller time increments (often 2
weeks), or “sprints,” that encompass the full process of
analysis, design, code, and test but on a much smaller
aspect of an application. The goal is to finish a feature or
capability for each sprint, resulting in a potentially
shippable incremental piece of software. Therefore, with
Agile, if you are 40% finished with a project, you have
100% usable code. Figure 2-4 shows how this process
looks on a timeline.

Figure 2-4 Agile Development Practices

By leveraging Agile, you can keep adding value
immediately and nimbly adapt to change. If a new
capability is needed in the software, or if a feature that
was planned is determined to no longer be necessary, the
project can pivot quickly and make those adjustments.

COMMON DESIGN PATTERNS


When creating software, you will often run into the same
problem over and over again. You don’t want to reinvent
the wheel each time you need a rolling thing to make
something move. In software engineering, many
common design paradigms have already been created,
and you can reuse them in your software project. These
design patterns make you faster and provide tried-and-
true solutions that have been tested and refined. The
following sections introduce a couple of design patterns
that are really useful for network automation projects:
the Model-View-Controller (MVC) and Observer
patterns. While there are many more that you may be
interested in learning about, these are the ones you will
most likely see on the 200-901 DevNet Associate
DEVASC exam.

Model-View-Controller (MVC) Pattern

The Model-View-Controller (MVC) pattern was one of
the first design patterns to leverage the separation of
concerns (SoC) principle. The SoC principle is used to
decouple an application’s interdependencies and
functions from its other parts. The goal is to make the
various layers of the application—such as data access,
business logic, and presentation (to the end user)—
modular. This modularity makes the application easier to
construct and maintain while also allowing the flexibility
to make changes or additions to business logic. It also
provides a natural organization structure for a program
that anyone can follow for collaborative development. If
you have used a web-based application, more than likely
the app was constructed using an MVC pattern.

Note
Numerous web frameworks use MVC concepts across
many programming languages. Angular, Express, and
Backbone are all written in JavaScript. Django and
Flask are two very popular examples written in Python.

The classical MVC pattern has three main parts:

Model: The model is responsible for retrieving and manipulating data. It is often tied to some type of database but could be data from a simple file. It conducts all data operations, such as select, insert, update, and delete operations. The model receives instructions from the controller.

View: The view is what the end users see on the devices they are using
to interact with the program. It could be a web page or text from the
command line. The power of the view is that it can be tailored to any
device and any representation without changing any of the business
logic of the model. The view communicates with the controller by
sending data or receiving output from the model through the controller.
The view’s primary function is to render data.

Controller: The controller is the intermediary between what the user sees and the backend logic that manipulates the data. The role of the controller is to receive requests from the user via the view and pass those requests on to the model and its underlying data store.

Figure 2-5 shows the interactions between components of the MVC pattern.

Figure 2-5 MVC Pattern Interactions
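To make the three roles concrete, here is a minimal Python sketch of the pattern. The device names, statuses, and class methods are invented for illustration; they are not from the book or any particular framework:

```python
class Model:
    """Retrieves and manipulates the data (here, a simple in-memory store)."""
    def __init__(self):
        self._devices = {"sw1": "up", "sw2": "down"}

    def get(self, name):
        return self._devices.get(name, "unknown")

    def update(self, name, status):
        self._devices[name] = status


class View:
    """Renders data for the end user; knows nothing about how it is stored."""
    def render(self, name, status):
        return f"Device {name} is {status}"


class Controller:
    """Receives user requests and mediates between the model and the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def show(self, name):
        # Ask the model for data, then hand it to the view for rendering
        return self.view.render(name, self.model.get(name))

    def set_status(self, name, status):
        self.model.update(name, status)


controller = Controller(Model(), View())
print(controller.show("sw1"))  # -> Device sw1 is up
```

Because the view only renders what the controller hands it, you could swap a web page view for a command-line view without touching the model or its data operations.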

Observer Pattern

The Observer pattern was created to address the problem of sharing information from one object with many other objects. This type of pattern describes a very useful behavior for distributed systems that need to share configuration information or details on changes as they happen. The Observer pattern is actually very simple and consists of only two logical components (see Figure 2-6):

Subject: The subject refers to the object state being observed—in other
words, the data that is to be synchronized. The subject has a
registration process that allows other components of an application or
even remote systems to subscribe to the process. Once registered, a
subscriber is sent an update notification whenever there is a change in
the subject’s data so that the remote systems can synchronize.

Observer: The observer is the component that registers with the subject so that the subject is aware of the observer and knows how to communicate with it. The only function of the observer is to synchronize its data with the subject when called. The key thing to understand about the observer is that it does not use a polling process, which can be very inefficient with a large number of observers registered to a subject. Updates are push only.

Figure 2-6 Observer Pattern

The Observer pattern is often used to handle communications between the model and the view in the MVC pattern. Say, for example, that you have two different views available to an end user. One view provides a bar graph, and the other provides a scatter plot. Both use the same data source from the model. When that data changes or is updated, the two views need to be updated. This is a perfect job for the Observer pattern.
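The bar graph/scatter plot scenario can be sketched in a few lines of Python (a minimal illustration with invented names, not code from the book). Note that the subject pushes updates to registered observers; the observers never poll:

```python
class Subject:
    """Holds the state being observed and pushes changes to subscribers."""
    def __init__(self):
        self._observers = []
        self._data = {}

    def register(self, observer):
        # Registration makes the subject aware of the observer
        self._observers.append(observer)

    def update(self, key, value):
        self._data[key] = value
        # Push-only notification: every registered observer is synchronized
        for observer in self._observers:
            observer.sync(dict(self._data))


class Observer:
    """Keeps a local copy of the subject's data, updated only when notified."""
    def __init__(self, name):
        self.name = name
        self.data = {}

    def sync(self, data):
        self.data = data


bar_graph = Observer("bar graph")
scatter_plot = Observer("scatter plot")
subject = Subject()
subject.register(bar_graph)
subject.register(scatter_plot)

subject.update("cpu", 85)
print(bar_graph.data, scatter_plot.data)  # -> {'cpu': 85} {'cpu': 85}
```

A single change to the subject's data synchronizes both views in one step, which is exactly the model-to-view behavior described above.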

LINUX BASH
Knowing how to use Linux BASH is a necessary skill for
working with open-source technologies as well as many
of the tools you need to be proficient with to be
successful in the development world. Linux has taken
over the development world, and even Microsoft has
jumped into the game by providing the Windows
Subsystem for Linux for Windows 10 Pro. For the
DEVASC exam, you need to know how to use BASH and
be familiar with some of the key commands.
Getting to Know BASH

BASH is a shell, and a shell is simply a layer between a user and the internal workings of an operating system. A user can use the shell to input commands that the operating system will interpret and perform. Before graphical user interfaces (GUIs) became common, the shell reigned supreme, and those who knew its intricacies were revered as tech wizards. Today that skill is still in high demand, and without it you will find yourself struggling, as the GUI simply doesn’t offer many of the powerful operations available through the shell.

While there are many shells you can use, BASH, which
stands for Bourne Again Shell, is one of the most
popular. It has been around since 1989 and is the default
shell on most Linux operating systems. Until recently, it
was also the default for the Mac operating system, but
Apple has replaced BASH with Z shell. The commands
and syntax you will learn with BASH are transferable to
Z shell, as it was built to maintain compatibility with
BASH.

BASH is not only a shell for command processing using standard Linux operating system commands. It can also read and interpret scripts for automation. These capabilities are beyond the scope of what you need to know for the DEVASC exam but would be worth looking into as you continue your journey, as these automation scripts are where BASH really shines. Like all other UNIX shells, BASH supports features such as piping (feeding the output of one command in as the input of another command), variables, evaluation of conditions (with if statements), and iteration (repeated processing of commands in loops). You also have a command history as part of the shell, where you can use the arrow keys to cycle through and edit previous commands.

UNIX platforms such as Linux and OS X have built-in documentation for each command the operating system uses. To access help for any command, type man (short for manual) and then the command you are curious about. The output gives you a synopsis of the command, any optional flags, and required attributes. Example 2-1 shows the man page for the man command.

Example 2-1 Example of the Manual Page for the man Command


$ man man

man(1)                                                    man(1)

NAME
       man - format and display the on-line manual pages

SYNOPSIS
       man [-acdfFhkKtwW] [--path] [-m system] [-p string] [-C config_file]
       [-M pathlist] [-P pager] [-B browser] [-H htmlpager] [-S section_list]
       [section] name ...

DESCRIPTION
       man formats and displays the on-line manual pages. If you specify sec-
       tion, man only looks in that section of the manual. name is normally
       the name of the manual page, which is typically the name of a command,
       function, or file. However, if name contains a slash (/) then man
       interprets it as a file specification, so that you can do man ./foo.5
       or even man /cd/foo/bar.1.gz.

       See below for a description of where man looks for the manual page
       files.
<output cut for brevity>

Not every command is intended to be run with user-level privileges. You can temporarily upgrade your privileges by prepending sudo to a command that needs higher-level access. You will often be prompted for a password to verify that you have the right to use sudo. You need to be careful when using sudo, as the whole idea of reduced privileges is to increase security and prevent average users from running commands that are dangerous. Use sudo only when required, such as when you need to kick off an update for your Linux distribution, which you do by using the apt-get update command:

$ sudo apt-get update

As mentioned earlier, one of the most powerful features of BASH is something called piping. This feature allows you to string together commands. For example, the cat command displays the contents of a file to the screen. What if a file contains too much to fit on a single screen? The cat command will happily spew every character of the file at the screen until it reaches the end, regardless of whether you could keep up with it. To address this, you can pipe the output of cat to the more command to stream the content from cat to more, which gives you a prompt to continue one page at a time. To use the piping functionality, you use the pipe character (|) between commands, as shown in Example 2-2.

Example 2-2 Output of the cat Command Piped to the more Command

$ cat weather.py | more

import json
import urllib.request
from pprint import pprint

def get_local_weather():

    weather_base_url = 'http://forecast.weather.gov/MapClick.php?FcstType=json&'

    places = {
        'Austin': ['30.3074624', '-98.0335911'],
        'Portland': ['45.542094', '-122.9346037'],
        'NYC': ['40.7053111', '-74.258188']
    }

    for place in places:
        latitude, longitude = places[place][0], places[place][1]
        weather_url = weather_base_url + "lat=" + latitude + "&lon=" + longitude
        # Show the URL we use to get the weather data. (Paste this URL into your browser!)
        # print("Getting the current weather for", place, "at", weather_url, ":")

        page_response = urllib.request.urlopen(weather_url).read()
<output cut for brevity>

Directory Navigation
A UNIX-based file system has a directory tree structure.
The top of the tree is called the root (as it’s an upside-
down tree), and you use the forward slash (/) to refer to
root. From root you have numerous directories, and
under each directory you can have more directories or
files. Figure 2-7 shows a UNIX directory structure.
Figure 2-7 UNIX Directory Structure

Whenever you call a file, you have to supply its path. Everything you execute in UNIX is in relation to root. To execute a file in the directory you are in, you can use ./filename.sh, where the leading . is simply an alias for the current directory.

In addition to the root file system, each user has a home directory that the user controls and that stores the user’s individual files and applications. The full path to the home directory often looks something like /home/username on Linux and /Users/username on Mac OS X, but you can also use the tilde shortcut (~/) to reference the home directory.

The following sections describe some of the commands most commonly used to interact with the BASH shell and provide examples of their options and use.

cd
The cd command is used to change directories and move
around the file system. You can use it as follows:

$ cd /                   Changes directory to the root directory

$ cd /home/username      Changes directory to the /home/username directory

$ cd test                Changes directory to the test folder

$ cd ..                  Moves up one directory

pwd
If you ever get lost while navigating around the file
system, you can use the pwd command to print out your
current working directory path. You can use it as follows:

$ pwd                    Prints your current working directory

ls
Once you have navigated to a directory, you probably
want to know what is in it. The ls command gives you a
list of the current directory. If you execute it without any
parameters, it just displays whatever is in the directory.
It doesn’t show any hidden files (such as configuration
files). Anything that starts with a . does not show up in a
standard directory listing, and you need to use the -a flag
to see all files, including hidden files. By using the -l flag,
you can see permissions and the user and group that own
the file or directory. You can also use the wildcard * to
list specific filename values; for example, to find any files
with test as a part of the name, you can use ls *test*,
which would match both 1test and test1. You can use the
ls command as follows:

$ ls                     Lists files and directories in the current working directory

$ ls -a                  Lists everything in the current directory, including hidden files

$ ls /home/username      Lists everything in the /home/username directory

$ ls -l                  Lists permissions and user and group ownership

$ ls -F                  Displays files and directories and denotes which are which

mkdir
To create a directory, you use the mkdir command. If
you are in your home directory or in another directory
where you have the appropriate permissions, you can use
this command without sudo. You can use the mkdir
command as follows:

$ mkdir test                   Makes a new directory called test in the current working directory if you have permission

$ mkdir /home/username/test    Makes a new directory called test at /home/username/test

File Management
Working with files is easy with BASH. There are just a
few commands that you will use often, and they are
described in the following sections.

cp
The purpose of the cp command is to copy a file or folder
someplace. It does not delete the source file but instead
makes an identical duplicate. When editing configuration
files or making changes that you may want to roll back,
you can use the cp command to create a copy as a sort of
backup. The command requires two parameters: the name of the file you want to copy and the destination where you want to copy it to, including the name of the copy. When copying the contents of a folder, you need to use the -r, or recursive, flag. You can use the cp command as follows:

$ cp sydney.txt sydney2.txt    Copies a file called sydney.txt from the current directory and names the copy sydney2.txt

$ cp /home/username/sydney.txt ~/sydney2.txt
                               Copies a file as described above but using the full path and the home directory path

$ cp -r folder folder.old      Copies a folder

mv
The mv command allows you to move a file or folder
from one directory to another, and it is also used to
rename files or folders from the command line, as BASH
does not have a dedicated renaming function. The mv
command takes a source and destination, just as cp
does. You can use the -i flag to create an interactive
option prompt when moving files that exist at the
destination. The -f flag forces the move and overwrites
any files at the destination. Wildcards also work to select
multiple source files or directories. You can use the mv
command as follows:

$ mv caleb.txt calebfinal.txt  Renames a file called caleb.txt to calebfinal.txt

$ mv /home/username/caleb.txt ~/calebfinal.txt
                               Renames a file as described above but using full paths

$ mv -i * /home/username/new/  Moves all files and directories in the current folder to a directory called new

rm
To delete a file or directory, you use the rm command. If
the item you are deleting is a file or an empty directory,
you just need to supply the name and press Enter. On the
other hand, if you try to delete a directory that has files
in it, rm tells you that the directory is not empty. In that
case, you can use the -rf flag to force the deletion. You
can use the rm command as follows:

$ rm test.txt                  Deletes the file test.txt in the current working directory

$ rm -rf test                  Forces the deletion of the folder test and everything in it

touch
The touch command is used to create a file and/or
change the timestamps on a file’s access without opening
it. This command is often used when a developer wants
to create a file but doesn’t want to put any content in it.
You can use the touch command as follows:

$ touch emptyfile.txt          Creates an empty file named emptyfile.txt

$ touch file{1..20}.txt        Bulk creates files from file1.txt to file20.txt
cat
The cat (which stands for concatenate) command allows
you to view or create files and also pipe to other
commands. It’s one of the most useful commands in
UNIX when it comes to working with files. You can use
the cat command as follows:

$ cat file1.txt                Displays the contents of file1.txt

$ cat file1.txt | more         Displays the contents of file1.txt and pipes the output to more to add page breaks

$ cat >file2.txt               Sends a user’s typed or copied content from the command line to file2.txt

Environment Variables
BASH environment variables contain information about
the current session. Environment variables are available
in all operating systems and are typically set when you
open your terminal from a configuration file associated
with your login. You set these variables with similar
syntax to how you set them when programming. You do
not often use these variables directly, but the programs
and applications you launch do. You can view all of your
currently set environment variables by entering the env
command. Since there can be more entries than you have
room to display on a single terminal page, you can pipe
the results to the more command to pause between
pages:

$ env | more                   Shows all environment variables with page breaks
If you execute this command, you are likely to notice a
lot of keywords with the = sign tied to values. One
environment variable that you use every time you
execute a command is the PATH variable. This is where
your shell looks for executable files. If you add a new
command and can’t execute it, more than likely the place
where the command was copied is not listed in your
PATH. To view any variable value, you can use the echo
command and the variable you want to view. You also
need to tell BASH that it’s a variable by using the $ in
front of it. Here’s an example:


$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/VMware Fusion.app/Contents/Public:/opt/X11/bin
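To see how a launched program (rather than the shell itself) consumes these variables, here is a short Python sketch; it reads PATH from the inherited environment the same way any child process would (illustrative only, not from the book):

```python
import os

# Programs launched from the shell inherit its environment variables.
# In Python they are exposed through the os.environ mapping; os.pathsep
# is ":" on Linux/Mac and ";" on Windows.
path_dirs = os.environ.get("PATH", "").split(os.pathsep)

print("Directories searched for executables:")
for directory in path_dirs:
    print(" ", directory)
```

This is the same list of directories the shell walks through when it resolves a command name, which is why an executable outside these directories cannot be run without its full path.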

To add a new value to the PATH variable, you can’t just type $PATH=/new_directory because the operating system reads the environment variables only when the terminal session starts. To inform Linux that an environment variable needs to be updated, you use the export command. This command allows you to append your additional path to BASH, and the change exists for the duration of the session. Make sure you add the : or ; separator in front of your new value, depending on your operating system. The following example is for a Linux-style OS:


$ export PATH=$PATH:/Home/chrijack/bin

When you end your terminal session, the changes you made are not saved. To retain the changes, you need to write the path statement to your .bashrc (or .zshrc if using Z shell) profile settings. Anything written there will be available anytime you launch a terminal. You can add the previous command to the end of .bashrc with your favorite text editor, or use the following command:


$ echo "export PATH=$PATH:/Home/chrijack/bin" >> .bashrc

This addition becomes active only after you close your current session or force it to reload the variable. The source command can be used to reload the variables from the hidden configuration file .bashrc:

$ source ~/.bashrc

The following command does the same thing as the previous one because . is also an alias for the source command:

$ . ~/.bashrc

There are many more tricks you can uncover with BASH.
You will get plenty of chances to use it as you study for
the DEVASC exam.

SOFTWARE VERSION CONTROL

The term version control is used to describe the process of saving various copies of a file or set of files in order to track changes made to those files. This description highlights how incredibly useful version control is to the world of programming. Software version control (SVC) typically involves a database that stores current and historical versions of source code to allow multiple people or teams to work on it at the same time. If a mistake is made and you want to go back a revision (or to any previous version), SVC is the answer. You may also hear it called revision control or source control, but it all falls under the topic of software configuration management. Once you have used an SVC, you will quickly become a believer in its power and won’t want to go back to using manual processes again.

If you have a team of developers working on a project, all writing new code or changing previously written code, losing those files could set you back weeks or months. Version control can protect your code by allowing changes to be checked in (through a process known as a code commit) to a hierarchical tree structure of folders with files in them. You often don’t know what a developer might need to change or has changed, but the version control system does. Each check-in is tagged with who made the change and what the person changed within the code. Instead of using inefficient techniques such as file locking, a version control system handles concurrent check-ins, allowing two programmers to commit code at the same time.

Another aspect of a version control system is the ability to branch and merge code built independently. This is very useful if you are writing code on a part of an application that could conflict with another part written by another team. By creating a branch, you effectively create a separate work stream that has its own history and does not impact the main “trunk” of the code base. Once the code is written and any conflicts are resolved, the code from the branch can be merged back into the main trunk. Many application developers use this technique for new features or application revisions.

Can you write software without a version control system? Sure, but why would you? A lot of version control software options are available, many of them free, and it is good practice to always use a version control system to store your code. Git is one of the most commonly used version control systems today, and the 200-901 DevNet Associate DEVASC exam will test your knowledge of it, so the next section covers how to use Git.

GIT
If you are working with version control software, chances
are it is Git. A staggering number of companies use Git,
which is free and open source. In 2005, Linus Torvalds
(the father of Linux) created Git as an alternative to the
SCM system BitKeeper, when the original owner of
BitKeeper decided to stop allowing free use of the system
for Linux kernel development. With no existing open-
source options that would meet his needs, Torvalds
created a distributed version control system and named
it Git. Git was created to be fast and scalable, with a
distributed workflow that could support the huge
number of contributors to the Linux kernel. His creation
was turned over to Junio Hamano in 2006, and it has
become the most widely used source management
system in the world.

Note
GitHub is not Git. GitHub is a cloud-based social
networking platform for programmers that allows
anyone to share and contribute to software projects
(open source or private). While GitHub uses Git as its
version control system underneath the graphical front
end, it is not directly tied to the Git open-source
project (much as a Linux distribution, such as Ubuntu
or Fedora, uses the Linux kernel but is independently
developed from it).

Understanding Git
Git is a distributed version control system built with
scalability in mind. It uses a multi-tree structure, and if
you look closely at the design, you see that it looks a lot
like a file system. (Linus Torvalds is an operating system
creator after all.) Git keeps track of three main
structures, or trees (see Figure 2-8):

Local workspace: This is where you store source code files, binaries,
images, documentation, and whatever else you need.

Staging area: This is an intermediary storage area for items to be synchronized (changes and new items).

Head, or local repository: This is where you store all committed items.

Figure 2-8 Git Tree Structure

Another very important concept with Git is the file lifecycle. Each file that you add to your working directory has a status attributed to it. This status determines how Git handles the file. Figure 2-9 shows the Git file status lifecycle, which includes the following statuses:

Untracked: When you first create a file in a directory that Git is managing, it is given an untracked status. Git sees this file but does not perform any type of version control operations on it. For all intents and purposes, the file is invisible to the rest of the world. Some files, such as those containing settings or passwords or temporary files, may be stored in the working directory, but you may not want to include them in version control. If you want Git to start tracking a file, you have to explicitly tell it to do so with the git add command; once you do this, the status of the file changes to tracked.

Unmodified: A tracked file in Git is included as part of the repository, and changes are watched. This status means Git is watching for any file changes that are made, but it doesn’t see any yet.

Modified: Whenever you add some code or make a change to the file,
Git changes the status of the file to modified. Modified status is where
Git sees that you are working on the file but you are not finished. You
have to tell Git that you are ready to add a changed (modified) file to
the index or staging area by issuing the git add command again.

Staged: Once a changed file is added to the index, Git needs to be able
to bundle up your changes and update the local repository. This process
is called staging and is accomplished through git commit. At this
point, your file status is moved back to the tracked status, and it stays
there until you make changes to the file in the future and kick off the
whole process once again.
Figure 2-9 Git File Status Lifecycle

If at any point you want to see the status of a file from your repository, you can use the extremely useful command git status to learn the status of each file in your local directory.

You can pull files and populate your working directory for a project that already exists by making a clone. Once you have done this, your working directory will be an exact match of what is stored in the repository. When you make changes to any source code or files, you can add your changes to the index, where they will sit in staging, waiting for you to finish all your changes or additions. The next step is to perform a commit and package up the changes for submission (or pushing) to the remote repository (usually a server somewhere local or on the Internet). This high-level process uses numerous commands that are covered in the next section. If you understand Git’s tree structure, figuring out what command you need is simple. Figure 2-10 shows the basic Git workflow.
Figure 2-10 Git Workflow

Using Git

Git may not come natively with your operating system. If you are running a Linux variation, you probably already have it. For Mac and Windows, you need to install it. You can go to the main distribution website (https://git-scm.com) and download builds for your operating system directly. You can install the command-line version of Git and start using and practicing the commands discussed in this section. There are also GUI-based Git clients, but for the purposes of the DEVASC exam, you should focus your efforts on the command line. Git commands come in two different flavors. The standard user-friendly commands are called “porcelain,” and the more complicated commands that manipulate Git’s inner workings are called “plumbing.” At its core, Git is a content-addressable file system. The version control system part was layered on top to make it easier to use. For the DEVASC exam, you need to know your way around Git at a functional level (by using the porcelain). There is a significant amount of manipulation you can do with Git at the plumbing level. Most of the plumbing commands and tools are not ones you will be using on a regular basis and are not covered on the exam.

Cloning/Initiating Repositories
Git operates on a number of processes that enable it to
do its magic. The first of these processes involves
defining a local repository by using either git clone or
git init. The git clone command has the following
syntax:


git clone (url to repository) (directory to clone to)

If there is an existing repository you are planning to start working on, like one from GitHub that you like, you use git clone. This command duplicates an existing Git project from the URL provided into your current directory, with the name of the repository as the directory name. You can also specify a different name with a command-line option. Example 2-3 shows an example of cloning a repository and listing the files in the newly created local repository.

Example 2-3 Cloning a GitHub Repository


#git clone https://github.com/CiscoDevNet/pyats-coding-101.git
Cloning into 'pyats-coding-101'...
remote: Enumerating objects: 71, done.
remote: Total 71 (delta 0), reused 0 (delta 0), pack-reused 71
Unpacking objects: 100% (71/71), done.
#cd pyats-coding-101
#pyats-coding-101 git:(master) ls
COPYRIGHT            coding-102-parsers
LICENSE              coding-103-yaml
README.md            coding-201-advanced-parsers
coding-101-python

To create a completely new repository, you need to create a directory. The git init command has the following syntax:

git init (directory name)

Luckily, git init can be supplied with a directory name as an option to do this all in one command:


#git init newrepo
Initialized empty Git repository in /Users/chrijack/Documents/GitHub/newrepo/.git/

#newrepo git:(master)

What you just created is an empty repository, and you need to add some files to it. By using the touch command, you can create an empty file. The following example shows how to view the new file in the repository with the directory listing (ls) command:


#newrepo git:(master) touch newfile
#newrepo git:(master) ls
newfile

Once the file is added, Git sees that there is something new, but it doesn’t do anything with it at this point. If you type git status, you can see that Git identified the new file, but you have to issue another command to add it to the index for Git to perform version control on it. Here’s an example:


# git status
On branch master

No commits yet

Untracked files:
  (use "git add <file>..." to include in what will be committed)

        newfile

nothing added to commit but untracked files present (use "git add" to track)

Git is helpful and tells you that it sees the new file, but
you need to do something else to enable version control
and let Git know to start tracking it.

Adding and Removing Files


When you are finished making changes to files, you can
add them to the index. Git knows to then start tracking
changes for the files you identified. You can use the
following commands to add files to an index:

git add . or -A: Adds everything in the entire local workspace.

git add (filename): Adds a single file.

The git add command adds all new or deleted files and
directories to the index. Why select an individual file
instead of everything with the . or -A option? It comes
down to being specific about what you are changing and
adding to the index. If you accidentally make a change to
another file and commit everything, you might
unintentionally make a change to your code and then
have to do a rollback. Being specific is always safest. You
can use the following commands to add the file newfile to
the Git index (in a process known as staging):


# git add newfile
# git status
On branch master

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)

        new file:   newfile

Removing files and directories from Git is not as simple as just deleting them from the directory itself. If you just use file system commands to remove files, you may create headaches, as the index can become confused. You can remove files and directories from Git, but it requires an extra step of adding the file deletion to the index (which sounds counterintuitive, right?). The best way is to use the git rm command, which has the following syntax:


git rm (-r) (-f) (folder/file.py)

This command removes a file or directory and syncs it with the index in one step. If you want to remove a directory that is not empty or has subdirectories, you can use the -r option to remove recursively. In addition, if you add a file to Git and then decide that you want to remove it, you need to use the -f option to force removal from the index. This is required only if you haven’t committed the changes to the local repository. Here is an example:

# touch removeme.py
# git add .
# ls
newfile removeme.py
# git rm -f removeme.py
rm 'removeme.py'

git mv is the command you use to move or rename a file, directory, or symbolic link. It has the following syntax:

git mv (-f) (source) (destination)

For this command you supply a source argument and a
destination argument to indicate which file or directory
you want to change and where you want to move it.
(Moving in this case is considered the same as
renaming.) Keep in mind that when you use this
command, it also updates the index at the same time, so
there is no need to issue git add to add the change to
Git. You can use the -f argument if you are trying to
overwrite an existing file or directory where the same
target exists. The following example shows how to
change a filename in the same directory:


# ls
oldfile.py
# git mv oldfile.py newfile.py
# ls
newfile.py

Committing Files
When you commit a file, you move it from the index or
staging area to the local copy of the repository. Git
doesn’t send entire updates; it sends just changes. The
commit command is used to bundle up those changes to
be synchronized with the local repository. The command
is simple, but you can specify a lot of options and tweaks.
In its simplest form, you just need to type git commit.
This command has the following syntax:


git commit [-a] [-m] <"your commit message">

The -a option tells Git to add any changes you make to
your files to the index. It's a quick shortcut instead of
using git add -A, but it works only for files that have
been added at some point before in their history; new
files need to be explicitly added to Git tracking. For every
commit, you will need to enter some text about what
changed. If you omit the -m option, Git automatically
launches a text editor (such as vi, which is the default on
Linux and Mac) to allow you to type in the text for your
commit message. This is an opportunity to describe the
changes you made so others know what you did. It’s
tempting to type in something silly like “update” or “new
change for a quick commit,” but don’t fall into that trap.
Think about the rest of your team. Here is an example of
the commit command in action:


# git commit -a -m "bug fix 21324 and 23421"
[master e1fec3d] bug fix 21324 and 23421
1 file changed, 0 insertions(+), 0 deletions(-)
delete mode 100644 newfile

Note
As a good practice, use the first 50 characters of the
commit message as a title for the commit followed by a
blank line and a more detailed explanation of the
commit. This title can be used throughout Git to
automate notifications such as sending an email
update on a new commit with the title as the subject
line and the detailed message as the body.
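
The title-plus-body practice can be followed directly on the command line by passing two -m options; Git treats the first as the title (subject) and the second as the body. The following is a sketch in a throwaway repository (the file name, message text, and identity are hypothetical):

```shell
# Work in a throwaway repository so the example is self-contained.
cd "$(mktemp -d)"
git init -q .
git config user.email "dev@example.com"   # hypothetical identity
git config user.name "Dev Example"

echo "print('hello')" > app.py
git add app.py

# The first -m becomes the short title; the second -m becomes the body.
git commit -m "Fix bug 21324: initialize the index" \
           -m "The index was not initialized before the first add, which forced a rollback."

git log -1 --pretty='%s'   # prints only the title line of the last commit
```

Tools that consume Git history, such as email notifications, often use the subject (%s) line by itself, which is why keeping it to roughly 50 characters pays off.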

Pushing and Pulling Files


Up until this point in the chapter, you have seen how Git
operates on your local computer. Many people use Git in
just this way, as a local version control system to track
documents and files. Its real power, however, is in its
distributed architecture, which enables teams from
around the globe to come together and collaborate on
projects.
In order to allow Git to use a remote repository, you have
to configure Git with some information so that it can find
it. When you use the command git clone on a
repository, Git automatically adds the remote repository
connection information via the URL entered with the
clone command.
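
You can see this behavior without touching the network by cloning from a local path (the directory names below are made up). After the clone, Git has already recorded the source as a remote named origin:

```shell
# Create a source repository that stands in for a server-side repo.
cd "$(mktemp -d)"
git init -q upstream
git -C upstream config user.email "dev@example.com"   # hypothetical identity
git -C upstream config user.name "Dev Example"
git -C upstream commit -q --allow-empty -m "first commit"

# clone records the source location as the remote 'origin' automatically.
git clone -q upstream workingcopy
git -C workingcopy remote -v   # lists origin with the path used in the clone
```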

When using the git init command, however, you need to
make sure that you enter the information to find the
remote location for the server with the git remote add
command, which has the following syntax:

git remote add (name) (url)

git remote -v can be used to show which remote
repository is configured. The following example shows
how to add a remote repository and then display what is
configured:


# git remote add origin https://github.com/chrijack/devnetccna.git
# git remote -v
origin  https://github.com/chrijack/devnetccna.git (fetch)
origin  https://github.com/chrijack/devnetccna.git (push)

What if you make a mistake or want to remove remote
tracking of your repository? This can easily be done with
the git remote rm command, which has the following
syntax:

git remote rm (name)

Here is an example of this command in action:


# git remote rm origin
# git remote -v

In order for your code to be shared with the rest of your
team or with the rest of the world, you have to tell Git to
sync your local repository to the remote repository (on a
shared server or service like GitHub). The command git
push, which has the following syntax, is useful in this
case:


git push (remotename) (branchname)

This command needs a remote name, which is an alias
used to identify the remote repository. It is common to
use the name origin, which is the default if a different
name is not supplied. In addition, you can reference a
branch name with git push in order to store your files in
a separately tracked branch from the main repository.
(You can think of this as a repository within a
repository.) The sole purpose of the git push command
is to transfer your files and any updates to your Git
server. The following is an example of the git push
command in use:


# git push origin master
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 210 bytes | 210.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://github.com/chrijack/devnetccna.git
 * [new branch]      master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.

The command git pull syncs any changes that are on the
remote repository and brings your local repository up to
the same level as the remote one. It has the following
syntax:


git pull (remotename) (branchname)

Whenever you begin to work with Git, one of the first
commands you want to issue is pull so you can get the
latest code from the remote repository and work with the
latest version of code from the master repository. git
pull does two things: fetches the latest version of the
remote master repository and merges it into the local
repository. If there are conflicts, they are handled just as
they would be if you issued the git merge command,
which is covered shortly. Example 2-4 shows an example
of using the git pull command.

Example 2-4 git pull Command


# git pull origin master
remote: Enumerating objects: 9, done.
remote: Counting objects: 100% (9/9), done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 1), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
From https://github.com/chrijack/devnetccna
 * branch            master     -> FETCH_HEAD
   8eb16e3..40aaf1a  master     -> origin/master
Updating 8eb16e3..40aaf1a
Fast-forward
 2README.md   |   3 +++
 Picture1.png | Bin 0 -> 83650 bytes
 Picture2.jpg | Bin 0 -> 25895 bytes
 Picture3.png | Bin 0 -> 44064 bytes
 4 files changed, 3 insertions(+)
 create mode 100644 2README.md
 create mode 100644 Picture1.png
 create mode 100644 Picture2.jpg
 create mode 100644 Picture3.png

Working with Branches


Branches are an important workflow in software
development. Say you want to add a new feature to your
software or want to fix a bug. You can create a branch in
order to add a separate development workspace for your
project and prevent changes from destabilizing the main
project (the master branch in Git). Remember that Git
keeps a running history of every commit you make. This
history (called a snapshot in Git terminology) details all
the changes to the software over time and ensures the
integrity of this record by applying an SHA-1 hash. This
hash is a 40-character string that is tied to each and
every commit. Example 2-5 shows an example of three
commits with a hash, displayed using the git log
command.

Example 2-5 git log Command Output


#git log

commit 40aaf1af65ae7226311a01209b62ddf7f4ef88c2
(HEAD -> master, origin/master)
Author: Chris Jackson <chrijack@cisco.com>
Date: Sat Oct 19 00:00:34 2019 -0500

Add files via upload

commit 1a9db03479a69209bf722b21d8ec50f94d727e7d
Author: Chris Jackson <chrijack@cisco.com>
Date: Fri Oct 18 23:59:55 2019 -0500

Rename README.md to 2README.md

commit 8eb16e3b9122182592815fa1cc029493967c3bca
Author: Chris Jackson <chrijack@me.com>
Date: Fri Oct 18 20:03:32 2019 -0500

first commit
Notice that the first entry is the current commit state, as
it is referenced by HEAD. The other entries show the
chronological history of the commits. Figure 2-11 shows a
visual representation of this simple three-step commit
history; for brevity, only the first four characters of each
hash are shown.

Figure 2-11 Git Commit History

To add a Git branch, you simply issue the git branch
command and supply the new branch with a name, using
the following syntax:


git branch (-d) <branchname> [commit]

You can alternatively specify a commit identified by a tag
or commit hash if you want to access a previous commit
from the branch history. By default, Git selects the latest
commit. In addition, you can delete a branch when you
no longer need it by using the -d argument. The
following example shows how to create a branch and
display the current branches with the git branch
command with no argument:

# git branch newfeature
# git branch
* master
  newfeature

The * next to master shows that the branch you are
currently in is still master, but you now have a new
branch named newfeature. Git simply creates a pointer
to the latest commit and uses that commit as the start for
the new branch. Figure 2-12 shows a visual
representation of this change.

Figure 2-12 Adding a Branch

In order to move to the new branch and change your
working directory, you have to use the git checkout
command, which has the following syntax:


git checkout [-b] (branchname or commit)

The -b argument is useful for combining the git branch
command with the checkout function and saves a bit of
typing by creating the branch and checking it out
(switching to it) all at the same time. This example
moves HEAD on your local machine to the new branch, as
shown in Figure 2-13:


#git checkout newfeature

Switched to branch 'newfeature'


Figure 2-13 Changing HEAD to a New Branch

Now you have a separate workspace where you can build
your feature. At this point, you will want to perform a git
push to sync your changes to the remote repository.
When the work is finished on the branch, you can merge
it back into the main code base and then delete the
branch by using the command git branch -d
(branchname).
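
The whole lifecycle described above can be sketched end to end in a throwaway repository (the branch and file names are made up, and the push step is commented out because it requires a configured remote):

```shell
cd "$(mktemp -d)"
git init -q .
git config user.email "dev@example.com"   # hypothetical identity
git config user.name "Dev Example"
git commit -q --allow-empty -m "first commit"
base=$(git symbolic-ref --short HEAD)     # the default branch, 'master' on most installs

# -b creates the branch and checks it out in one step.
git checkout -q -b newfeature
echo "feature code" > feature.py
git add feature.py
git commit -q -m "Add feature skeleton"
# git push origin newfeature              # would sync the branch to a remote

# Merge the finished work back into the base branch, then delete the branch.
git checkout -q "$base"
git merge -q newfeature
git branch -d newfeature
```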

Merging Branches
The merge process in Git is used to handle the combining
of multiple branches into one. The git merge command
is used to make this easier on the user and provide a
simple way to manage the changes. It has the following
syntax:


git merge (branch to merge with current)

To understand the merge process, it helps to take a look
at your two branches. Figure 2-14 shows all of the
commits that have taken place as part of the feature
build. In addition, you can see other commits that have
also occurred on the master branch.
Figure 2-14 Two Branches with Commits

In order to get the two branches merged, Git has to
compare all the changes that have occurred in the two
branches. You have a text file that exists in both the
master branch and the newfeature branch, and for
simplicity’s sake, there are just a couple of lines of text.
Figure 2-15 shows the master branch text file.

Figure 2-15 Master Branch Text File

In the newfeature branch, this text file has been modified
with some new feature code. Figure 2-16 shows a simple
change made to the text file.
Figure 2-16 Changes to the Text File in the
newfeature Branch

On the newfeature branch, you can issue the following
commands to add the changes to the index and then
commit the change:


#git add .
#git commit -a -m "new feature"

Now the branch is synced with the new changes, and you
can switch back to the master branch with the following
command:


#git checkout master
Switched to branch 'master'

From the master branch, you can then issue the git
merge command and identify the branch to merge with
(in this case, the newfeature branch):



# git merge newfeature
Updating 77f786a..dd6bce5
Fast-forward
text1 | 1 +
1 file changed, 1 insertion(+)

In this very simple example, no changes were made to
the master branch, and Git was able to automatically
merge the two branches and create a new combined
commit that had the new content from the newfeature
branch. Notice that the output above says “Fast-
forward”; this refers to updating past the changes in the
branch, which is much like fast-forwarding through the
boring parts of a movie. At this point, you can delete the
branch newfeature, as the code in it has been moved to
master. Figure 2-17 illustrates how this is done: Git
creates a new commit that has two sources.

Figure 2-17 Git Merge Between Two Branches

Handling Conflicts
Merging branches is a very useful capability, but what
happens if the same file is edited by two different
developers? You can have a conflict in terms of which
change takes precedence. Git attempts to handle merging
automatically, but where there is conflict, Git relies on
human intervention to decide what to keep. In the
previous example, if there had been changes made to
text1 in both the master branch and the newfeature
branch, you would have seen the following message after
using the command git merge:


#git merge newfeature
Auto-merging text1
CONFLICT (content): Merge conflict in text1
Automatic merge failed; fix conflicts and then
commit the result.

In addition, text1 would look as shown in Figure 2-18
(which shows the conflicting merge).

Figure 2-18 Git Conflicting Merge
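
As a sketch of what that figure shows, Git fences off the conflicting region of text1 with markers, placing the current branch's version between <<<<<<< and ======= and the incoming branch's version between ======= and >>>>>>>:

```text
line 1
line 2
<<<<<<< HEAD
line 3
=======
new feature code
>>>>>>> newfeature
```

You resolve the conflict by deleting the three marker lines and keeping whichever version (or both) you want.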

Git shows you that "line 3" was added to text1 on the
master branch and "new feature code" was added to text1
on the newfeature branch. Git is letting you delete one or
keep both. You can simply edit the file, remove the parts
that Git added to highlight the differences, and save the
file. Then you can use git add to index your changes and
git commit to save to the local repository, as in the
following example:



#git add .
#git commit -m "merge conflict fixed"
[master fe8f42d] merge conflict fixed

Comparing Commits with diff


The diff command is one of the most powerful Git tools.
It allows you to compare files and text to see which you
want to use if there are multiple options. The ability to
compare specific commits in Git makes it easier to know
what to keep and what to discard between two similar
versions of code.

The diff command takes two sets of inputs and outputs
the differences or changes between them. This is its
syntax:


git diff [--stat] [branchname or commit]

git diff looks at the history of your commits, individual
files, branches, and other Git resources. It's a very useful
tool for troubleshooting issues as well as comparing code
between commits. It has a lot of options and command-
line parameters, which makes it a bit of a Swiss Army
knife in terms of functionality. One of the most useful
functions of diff is to be able to see the differences
between the three Git tree structures. The following are
variations of the git diff command that you can use:

git diff: This command highlights the differences between your
working directory and the index (that is, what isn't yet staged).

git diff --cached: This command shows any changes between the
index and your last commit.

git diff HEAD: This command shows the differences between your
most recent commit and your current working directory. It is very
useful for seeing what will happen with your next commit.
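
All three variations can be seen in one short session (a sketch in a throwaway repository; the file name follows the text1 example used earlier):

```shell
cd "$(mktemp -d)"
git init -q .
git config user.email "dev@example.com"   # hypothetical identity
git config user.name "Dev Example"
printf 'line 1\n' > text1
git add text1
git commit -q -m "first commit"

printf 'line 2\n' >> text1   # change only the working directory copy
git diff                      # working directory vs. index: shows +line 2
git add text1                 # stage the change
git diff                      # now empty; the change is staged
git diff --cached             # index vs. last commit: shows +line 2
git diff HEAD                 # last commit vs. working directory: shows +line 2
```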

The following is an example of executing git diff --cached
after text2 is added to the index:

#git diff --cached
diff --git a/text2 b/text2
new file mode 100644
index 0000000..b9997e5
--- /dev/null
+++ b/text2
@@ -0,0 +1 @@
+new bit of code

git diff identified the new file addition and shows the
a/b comparison. Since this is a new file, there is nothing
to compare it with, so you see --- /dev/null as the a
comparison. In the b comparison, you see +++ b/text2,
which shows the addition of the new file, followed by
stats on what was different. Since there was no file
before, you see -0,0 and +1. (The + and - simply denote
which of the two versions you are comparing. It is not
actually a -0, which would be impossible.) The last line
shows the text that was added to the new file. This is a
very simple example with one line of code. In a big file,
you might see a significant amount of text.

Another very useful capability of git diff is to compare
branches. By using git diff (branchname), you can see
the differences between a file in the branch you are
currently in and one that you supply as an argument. The
following compares text1 between the branches master
and newfeature, where you can see that line 3 is present
on newfeature branch’s text1 file:


#git diff newfeature text1
diff --git a/text1 b/text1
index 45c2489..ba0a07d 100644
--- a/text1
+++ b/text1
@@ -1,3 +1,4 @@
line 1
line 2
+line 3
new feature code

This section has covered quite a bit of the syntax and
commands that you will see on a regular basis when
working with Git. You need to spend some time working
with these commands on your local machine and become
familiar with them so you will be ready for the 200-901
DevNet Associate DEVASC exam. Make sure you are
using resources such as Git documentation on any
command you don’t understand or for which you want to
get deeper insight. All of these commands have a
tremendous amount of depth for you to explore.

CONDUCTING CODE REVIEW

Every good author needs an editor. This book wouldn't
have been even half as understandable if it hadn't been
for the fact that we had other people check our work for
comprehension and technical accuracy. Why should code
you write be treated any differently? The intent behind a
code review process is to take good code and make it
better by showing it to others and having them critique it
and look for potential errors. When you develop
software, the vast majority of your time is spent by
yourself—just you and the keyboard. Sometimes when
you are this close to a software project, you miss errors
or use ineffective coding techniques; a simple code
review can quickly uncover such issues.

Beyond the aspects mentioned above, why should you
conduct code reviews? The following are a few common
benefits of code review:
It helps you create higher-quality software.

It enables your team to be more cohesive and deliver software projects
on time.

It can help you find more defects and inefficient code that unit tests and
functional tests might miss, making your software more reliable.

There are many ways to conduct code reviews. Some
organizations use specialized applications such as Gerrit,
and others conduct reviews as if they were professors
grading college papers. Whatever process you use, the
following are some good practices to help make your
code review effective:

Use a code review checklist that includes organization-specific practices
(naming conventions, security, class structures, and so on) and any
areas that need special consideration. The goal is to have a repeatable
process that is followed by everyone.

Review the code, not the person who wrote it. Avoid being robotic and
harsh so you don't hurt people's feelings and discourage them. The goal
is better code, not disgruntled employees.

Keep in mind that code review is a gift. No one is calling your baby ugly.
Check your ego at the door and listen; the feedback you receive will
make you a better coder in the long run.

Make sure the changes recommended are committed back into the code
base. You should also share findings back to the organization so that
everyone can learn from mistakes and improve their techniques.

EXAM PREPARATION TASKS


As mentioned in the section “How to Use This Book” in
the Introduction, you have a couple of choices for exam
preparation: the exercises here, Chapter 19, “Final
Preparation,” and the exam simulation questions on the
companion website.

REVIEW ALL KEY TOPICS


Review the most important topics in this chapter, noted
with the Key Topic icon in the outer margin of the page.
Table 2-2 lists these key topics and the page number on
which each is found.
Table 2-2 Key Topics

Key Topic Element    Description    Page Number

Paragraph Waterfall 27

Paragraph Lean 28

Paragraph Agile 29

Paragraph Model-View-Controller (MVC) pattern 30

Section Observer Pattern 31

Paragraph Getting to Know BASH 32

Paragraph Software version control 38

Section Using Git 42

Section Conducting Code Review 55

DEFINE KEY TERMS


Define the following key terms from this chapter and
check your answers in the glossary:

Software Development Lifecycle (SDLC)


Waterfall
Lean
Agile
Model-View-Controller (MVC)
Observer
software version control
Git
GitHub
repository
staging
index
local workspace
Chapter 3

Introduction to Python
This chapter covers the following topics:
Getting Started with Python: This section covers what you need to
know when using Python on your local machine.

Understanding Python Syntax: This section describes the basic
Python syntax and command structure.

Data Types and Variables: This section describes the various types
of data you need to interact with when coding.

Input and Output: This section describes how to get input from a
user and print out results to the terminal.

Flow Control with Conditionals and Loops: This section
discusses adding logic to your code with conditionals and loops.

Python is an easy language that anyone can learn
quickly. It has become the de facto language for
custom infrastructure automation. Thanks to its
English-like command structure, readability, and
simple programming syntax, you will find that you can
accomplish your goals more quickly with Python than
with languages such as C or Java. While you are not
expected to be an expert in Python for the 200-901
DevNet Associate DEVASC exam, you do need to be
fluent enough to understand what is going on in a
sample of Python code. You also need to be able to
construct Python code by using samples from DevNet
and GitHub to interact with Cisco products. While the
next few chapters are not intended to replace a deep
dive into Python programming, they serve as a starting
point for success on the exam.

“DO I KNOW THIS ALREADY?” QUIZ


The “Do I Know This Already?” quiz allows you to assess
whether you should read this entire chapter thoroughly
or jump to the “Exam Preparation Tasks” section. If you
are in doubt about your answers to these questions or
your own assessment of your knowledge of the topics,
read the entire chapter. Table 3-1 lists the major
headings in this chapter and their corresponding “Do I
Know This Already?” quiz questions. You can find the
answers in Appendix A, “Answers to the ‘Do I Know This
Already?’ Quiz Questions.”

Table 3-1 "Do I Know This Already?" Section-to-Question Mapping

Foundation Topics SectionQuestions

Getting Started with Python 1, 2

Understanding Python Syntax 3, 4

Data Types and Variables 5, 6

Input and Output 7, 8

Flow Control with Conditionals and Loops 9, 10

Caution
The goal of self-assessment is to gauge your mastery of
the topics in this chapter. If you do not know the
answer to a question or are only partially sure of the
answer, you should mark that question as wrong for
purposes of self-assessment. Giving yourself credit for
an answer that you correctly guess skews your self-
assessment results and might provide you with a false
sense of security.
1. What is the appropriate way to create a virtual
environment for Python 3?
1. python3 -virtual myvenv
2. python3 virtual myvenv
3. python3 -m vrt myvenv
4. python3 -m venv myvenv

2. What command is used to install Python modules from PyPI?
1. pip load packagename
2. pip install packagename
3. python3 -m pip install packagename
4. python3 -t pip install packagename

3. What is the standard for indentation in Python?
1. One space for each block of code
2. Four spaces for each block of code
3. One tab for each block of code
4. One tab and one space per block of code

4. How are comments in Python denoted?


1. // on each line you want to make a comment
2. # for single-line comments, or triple quotation marks encompassing multiline comments
3. /* comment */
4. @$ comment %@

5. Which of the following are mutable data types? (Choose two.)
1. Lists
2. Dictionary
3. Integers
4. Tuples

6. Which of the following would create a dictionary?


1. a= ("name","chris","age",45)
2. a= dict()
3. a= [name, chris, age, 45]
4. a= {"name":"chris", "age": 45}

7. What data type does the input() function create when assigned to a variable?
1. List
2. Raw
3. String
4. An auto typed one

8. Which print statement is valid for Python 3?


1. print 'hello world'
2. print('hello world')
3. print(hello, world)
4. print("'hello world'")

9. How do if statements operate?


1. If evaluates a variable against a condition to determine whether the
condition is true.
2. If uses Boolean operators.
3. An if statement needs to end with :.
4. All of the above are correct.

10. Which statements are true about the range() function? (Choose two.)
1. The range() function iterates by one, starting at 0, up to but not
including the number specified.
2. The range() function iterates by one, starting at 1, up to the
number specified.
3. A range() function cannot count down, only up.
4. A range() function can count up or down, based on a positive or
negative step value.

FOUNDATION TOPICS
GETTING STARTED WITH PYTHON
People from many engineering backgrounds are looking
to integrate programming into their infrastructure.
Maybe you are a hardcore computer science major and
have been programming in multiple languages for years.
You might be an infrastructure engineer who is strong in
the ways of Cisco IOS and looking for new ways to
operate in a diverse environment. You might even be a
server expert who is familiar with Ansible or Terraform
automation and is being asked to bring that automation
knowledge to the networking team. Regardless of your
background or experience level, the DevNet certifications
are designed to help you build the competency needed to
be successful and give you a chance to prove what you
have learned. If you haven’t coded in years, or if the
language that you currently program in isn’t one that is
very popular for infrastructure automation, where do you
start? Python.
The Python language has become the most popular
language in infrastructure automation because it is super
easy to pick up and doesn’t have all of the crazy syntax
and structure that you see in languages like Java or C.
It’s based on the English language and is not only
readable but extendable and powerful enough to be the
one language you can count on to be able to get things
accomplished in your day-to-day life. Those repetitive
tasks that suck your productivity dry can be automated
with just a few lines of code. Plus, due to the popularity
of Python, millions of sample scripts provided by users
like you as well as engineers at Cisco are free on GitHub
for you to use and modify.

The software company TIOBE has published a list of the
most popular programming languages each year for the
past 10 years, and Python has consistently made the list.
As of December 2019, it was ranked number 3. In
addition, a significant number of job postings reference
Python programming skills as a requirement for
successful candidates. Python is used in AI, machine
learning, big data, robotics, security, penetration testing,
and many other disciplines that are being transformed
by automation. Needless to say, learning Python has
become a differentiator for skilled engineers, and it is
part of the core tool set for DevOps and cloud
operational models.

The 200-901 DevNet Associate DEVASC exam is not a
Python test per se. You will not be asked to answer
esoteric questions about techniques and syntax that only
a Python wizard would know. The exam ensures that you
are competent enough with Python to know how to
interact with Cisco hardware through APIs and to use
Cisco software development kits and frameworks such as
pyATS and Genie. It is a very good idea to continue
learning Python and spend some time in either online
courses or self-study via books focused on the Python
language itself; you should also spend lots of time
working through examples on DevNet at
developer.cisco.com. This chapter and the several that
follow provide a crash course in functional Python to get
you going with the basics you need for success on the
exam.

Many UNIX-based operating systems, such as Mac and
Linux, already have Python installed, but with Windows,
you need to install it yourself. This used to be a hassle,
but now you can even install Python from the Windows
Store. On a Mac, the default version of Python is 2.7, and
you should update it to the more current 3.8 version.
One of the easiest ways is to head over to python.org and
download the latest variant from the source. The
installation is fast, and there are many tutorials on the
Internet that walk you through the process.

Note
Why would a Mac have such an old version of Python?
Well, that’s a question for Apple to answer, but from a
community standpoint, the move to version 3
historically was slow to happen because many of the
Python extensions (modules) were not updated to the
newer version. If you run across code for a 2.x version,
you will find differences in syntax and commands (also
known as the Python standard library) that will
prevent that code from running under 3.x. Python is
not backward compatible without modifications. In
addition, many Python programs require additional
modules that are installed to add functionality to
Python that aren’t available in the standard library. If
you have a program that was written for a specific
module version, but you have the latest version of
Python installed on your machine, the program might
not work properly. You will learn more about common
Python modules and how to use them in Chapter 4,
“Python Functions, Classes, and Modules.”
The use of Python 3 has grown dramatically as support
for the 2.x version ended in January 2020. The 3.x
version came out in 2008 and is the one that you should
be using today. Of course, this version issue is still a
problem even within the 3.x train of Python and the
corresponding modules you may want to use. To address
this compatibility conundrum across different versions
and modules, Python virtual environments have been
created. Such an environment allows you to install a
specific version of Python and packages to a separate
directory structure. This way, you can ensure that the
right modules are loaded, and your applications don’t
break when you upgrade your base Python installation.
As of Python 3.3, there is native support for these virtual
environments built into the Python distribution. You can
code in Python without using virtual environments, but
the minute you update modules or Python itself, you run
the risk of breaking your apps. A virtual environment
allows you to lock in the components and modules you
use for your app into a single “package,” which is a good
practice for building Python apps.

To use virtual environments, you launch Python 3 with
the -m argument to run the venv module. You need to
supply a name for your virtual environment, which will
also become the directory name that will include all the
parts of your virtual environment. Next, you need to
activate the virtual environment by using the source
command in Linux or Mac, as shown in this example. On
Windows, you will need to run the activate batch file.

Click here to view code image

# MacOS or Linux
python3 -m venv myvenv
source myvenv/bin/activate

# Windows
C:\py -3 -m venv myvenv
C:\myvenv\Scripts\activate.bat

At this point, you will see your virtual environment name


in parentheses at the beginning of your command
prompt:

(myvenv)$

This indicates that you are running Python in your


virtual environment. If you close the terminal, you have
to reactivate the virtual environment if you want to run
the code or add modules via pip (the Python module
package manager).

To turn off the virtual environment, just type deactivate


at the command prompt, and your normal command
prompt returns, indicating that you are using your local
system Python setup and not the virtual environment.

To install new modules for Python, you use pip, which


pulls modules down from the PyPI repository. The
command to load new modules is as follows:

pip install packagename

where packagename is the name of the package or
module you want to install. You can go to pypi.org and
search for interesting modules and check to see what
others are using. There are more than 200,000 projects
in the PyPI repository, so you are sure to find quite a few
useful modules to experiment with. These modules
extend Python functionality and contribute to its
flexibility and heavy use today.

The pip command also offers a search function that
allows you to query the package index:

pip search "search value"

The output includes the name, version, and a brief
description of what the package does.

To install a specific version of a package, you can specify
a version number or a minimum version so that you can
get recent bug fixes:

pip install package==1.1.1   To install a specific version
pip install package>=1.0     To install a version greater than or equal to 1.0

When you download sample code, if there are package
dependencies, there is usually a readme file that lists
these requirements.

Using a requirements.txt file included with your code is
another essential good practice. Such a file makes it
simpler to get your Python environment ready to go as
quickly as possible. If you have a requirements.txt file
included with your code, it will give pip a set of packages
that need to be installed, and you can issue this one
command to get them loaded:

pip install -r requirements.txt

The requirements.txt file is just a list that maps Python
package names to versions. Example 3-1 shows what it
looks like.

Example 3-1 Contents of requirements.txt

ansible==2.6.3
black==19.3b0
flake8==3.7.7
genie==19.0.1
ipython==6.5.0
napalm==2.4.0
ncclient==0.6.3
netmiko==2.3.3
pyang==1.7.5
pyats==19.0
PyYAML==5.1
requests==2.21.0
urllib3==1.24.1
virlutils==0.8.4
xmltodict==0.12.0

If you are building your own code and want to save the
current modules configured in your virtual environment,
you can use the freeze command and have it
automatically populate the requirements.txt file:


pip freeze > requirements.txt

UNDERSTANDING PYTHON SYNTAX

The word syntax is often used to describe structure in a
language, and in the case of a programming language, it
is used in much the same way. Some programming
languages are very strict about how you code, which can
make it challenging to get something written. While
Python is a looser language than some, it does have rules
that should be followed to keep your code not only
readable but functional. Keep in mind that Python was
built as a language to enhance code readability and was
named after Monty Python (the British comedy troupe)
because the original architects of Python wanted to keep
it fun and uncluttered. Python is best understood
through its core philosophy (The Zen of Python):

Beautiful is better than ugly.

Explicit is better than implicit.

Simple is better than complex.

Complex is better than complicated.

Readability counts.
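These principles ship with the interpreter itself: importing the built-in this module prints the full poem, which is a quick way to read the rest of it.

```python
# Importing the built-in "this" module prints the complete Zen of Python.
import this
```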
Python is a scripted language, which means that any text
editor can become a Python programming environment.
Your choice of editor is completely up to your own
preferences. The easiest way to get started with some
Python code is to just enter python3 at the command
prompt (assuming that Python 3 is installed, of course)
and use the interactive interpreter:

$ python3
>>> print("Savannah rules!")
Savannah rules!

For doing simple repetitive tasks, the interpreter is a
quick interface to Python. For any program that you
want to do a lot of editing, an editor is your best bet.
Atom and Visual Studio Code are two editors that are
very popular among Python programmers. Any modern
editor will do, but a strong ecosystem of plug-ins can
certainly make your life easier when interacting with
GitHub and creating more complex applications. Figure
3-1 shows the Atom editor in action.

Figure 3-1 Atom Text Editor


One aspect in which Python is different from other
languages is that within Python code, whitespace
matters. This can seem really weird and frustrating if you
are coming from another language, such as Java or C,
that uses curly braces or start/stop keywords; instead,
Python uses indentation to separate blocks of code. This
whitespace is not used just to make your code readable;
rather, Python will not work without it. Here is an
example of a simple loop that highlights why whitespace
is so important:

>>> for kids in ["Caleb", "Sydney", "Savannah"]:
... print("Clean your room,", kids, "!")
  File "<stdin>", line 2
    print("Clean your room,", kids, "!")
    ^
IndentationError: expected an indented block

This code will generate a syntax error the minute you try
to run it. Python is expecting to see indentation on the
line after the :. If you insert four spaces before the
print() statement, the code works:

>>> for kids in ["Caleb", "Sydney", "Savannah"]:
...     print("Clean your room,", kids, "!")
...
Clean your room, Caleb !
Clean your room, Sydney !
Clean your room, Savannah !

Python allows you to indent with spaces or tabs. Python
2 lets you mix tabs and spaces in the same block, which
can lead to really weird issues that you need to
troubleshoot, but Python 3 will return a syntax error if
you mix them. The standard for Python from the PEP 8
style guide is to use four spaces of indentation before
each block of code. Why four spaces? Won't one space
work? Yes, it will, but your code blocks will be hard to
align, and you will end up having to do extra work.

The alignment issue becomes especially important when
you nest loops and conditional statements, as each loop
needs to correspond to another block of code, indented
using spaces. Many text editors allow you to view
whitespace, and some even give you a visual indication of
what is in a code block. Figure 3-2 shows this in Atom.

Figure 3-2 Spaces and Code Blocks in Atom
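The nesting described above can be sketched in a few lines; the names and chores here are hypothetical, and each nested block is indented four more spaces, per PEP 8:

```python
# Each nested block is indented four additional spaces (PEP 8 style).
chores = {"Caleb": "trash", "Sydney": "dishes"}
messages = []
for kid in chores:
    if chores[kid] == "trash":
        messages.append(kid + " takes out the trash")
    else:
        messages.append(kid + " does the " + chores[kid])
print(messages)
```

If any of the indented lines drifted to a different indentation level, Python would raise an IndentationError instead of running the loop.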

Comments in Python are created by entering # or a
string of three quotation marks (either single or double
quotation marks). One very important good practice
when coding is to write a description of what is
happening in code that is not obvious. You probably will
not want to write a comment for a simple print
statement, but describing the output of a nested function
would be useful for anyone who needs to make additions
to your code in the future or to remind yourself why you
did what you did during that late-night caffeine-fueled
coding session. The # is used to comment out a single
line so the Python interpreter ignores it. Here is an
example:

#get input from user in numeric format

The triple quote method is used to comment multiple
lines and can be helpful when you want to provide a bit
more context than what can fit on a single line of text.
Here is an example:

''' This is
line 2
and line 3'''

DATA TYPES AND VARIABLES

Data and variables are like the fuel and the fuel tank for a
program. You can insert various types of data into a
variable, and Python supports many data types natively.
Python can also be expanded with modules to support
even more types. A variable is really just a label that
maps to a Python object stored somewhere in memory.
Without variables, your programs would not be able to
easily identify these objects, and your code would be a
mess of memory locations.

Variables
Assigning a variable in Python is very straightforward.
Python auto types a variable, and you can reassign that
same variable to another value of a different type in the
future. (Try doing that in C!) You just need to remember
the rules for variable names:

A variable name must start with a letter or the underscore character.

A variable name cannot start with a number.

A variable name can consist only of alphanumeric characters and
underscores (A–Z, 0–9, and _).

A variable name is case sensitive (so Value and value are two different
variable names).

To assign a variable, you just set the variable name equal
to the value you want, as shown in these examples:

Pip = "cat"              Variable assigned to a string
Age = 9                  Variable assigned to an integer
Chill = True             Variable assigned to a Boolean
Variable1 = Variable2    Variable assigned to another variable
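Because Python types variables automatically, the same name can be re-bound to objects of different types over its lifetime; the built-in type() function confirms this. A minimal sketch:

```python
# The same variable name can point to objects of different types over time.
pet = "cat"          # starts as a str
print(type(pet))
pet = 9              # re-bound to an int
print(type(pet))
pet = True           # re-bound again, now a bool
print(type(pet))
```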

Data Types
Everything in Python is an object, and depending on the
type of object, there are certain characteristics that you
must be aware of when trying to determine the correct
action you can perform on them. In Python, whenever
you create an object, you are assigning that object an ID
that Python uses to recall what is being stored. This
mechanism is used to point to the memory location of
the object in question and allows you to perform actions
on it, such as printing its value. When the object is
created, it is assigned a type that does not change. This
type is tied to the object and determines whether it is a
string, an integer, or another class.

Within these types, you are allowed to either change the
object (mutable) or are not allowed to change the object
(immutable) after it has been created. This doesn't mean
that variables are not able to be changed; it means that
most of the basic data types are not able to be modified
but need to be replaced (or assigned, in Python speak)
with another value. You can't just add a character at the
end of a string value, for example. You have to instead
reassign the whole string if you want to change it. This
mutable/immutable concept will make more sense as
you interact with various data types in programs. Python
treats these two types of objects differently, and each has
nuances that you must work around as you build your
Python programs. To make it simple, think of immutable
objects as ones that you want to stay the same, such as
constants. Mutable objects, on the other hand, are
objects that you will be adding elements to and
subtracting from on a regular basis.
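One way to see this difference is with the built-in id() function, which returns the identity of the object a variable currently points to. A small sketch:

```python
# Strings are immutable: "changing" one actually creates a new object.
name = "DevNet"
string_id = id(name)
name = name + " Rocks"        # reassignment, not an in-place change
print(id(name) == string_id)  # a brand-new string object

# Lists are mutable: the same object can be changed in place.
items = [1, 2, 3]
before = id(items)
items.append(4)               # same object, new contents
print(id(items) == before)
```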

Table 3-2 lists the most commonly used Python data
types. The rest of this section covers them in more detail.

Table 3-2 Python Data Types

Name        Type   Mutable  Description
Integer     int    No       Whole numbers, such as 6, 600, and 1589
Boolean     bool   No       Comparison value, either True or False
String      str    No       Sequence of characters delimited by quotes, such as "Cisco", 'Piper', and "2000"
List        list   Yes      Ordered sequence of objects, such as [10, "DNA", 19.8]
Tuple       tuple  No       Ordered sequence of immutable objects, such as (10, "DNA", 19.8)
Dictionary  dict   Yes      Unordered key:value pairs, such as {"key1":"value1", "name":"Pip"}
Set         set    Yes      Unordered collection of unique objects, such as {"a", "b"}

Integers, Floating Point, and Complex Numbers

The integers and floating point numbers are the simplest
of data types:

Integers: Whole numbers without decimal points

Floating point: Numbers with decimal points or exponents (such as
10e5, which indicates 10 to the fifth power)

Python can perform advanced calculations, so it is used
heavily in data science, and many features are built into
the language and can be easily added with modules. For
our purposes, though, the basic building blocks will
suffice. Python has a simple set of built-in operators for
working with numbers, much like the operators on a
regular calculator. Table 3-3 lists Python's numeric
operators.

Table 3-3 Python's Numeric Operators

Operator  Description                                            Example  Evaluates to
+         Adds two expressions together                          5 + 5    10
-         Subtracts one expression from another                  35 - 15  20
*         Multiplies two expressions                             10 * 10  100
/         Divides one expression by another                      20 / 5   4.0
//        Performs integer division (leaving off the remainder)  30 // 7  4
%         Performs modulus division (printing the remainder only)  30 % 7   2
**        Indicates an exponent                                  2 ** 8   256

When working with numbers in Python, a defined order
of precedence must be observed in calculations. Python
uses the following order (also known as PEMDAS):

1. Parentheses: Parentheses are always evaluated first.
2. Power: The exponent is evaluated.
3. Multiplication: Any multiplication is performed.
4. Division: Division is evaluated.
5. Addition: Addition is performed.
6. Subtraction: Subtraction is performed.
7. Left to right: After PEMDAS, anything else (such as sqrt() or
other math functions) is evaluated from left to right.

In most languages, the parentheses are preferred over
anything else. Most Python programmers use them
liberally in their math formulas to make them simpler to
construct without being so strict with the rules. Take the
following example:

>>> 5 * 6 - 1
29

Python evaluates the multiplication first and then
subtracts 1 from the result. If you wanted the subtraction
to happen first, you could simply add parentheses
around the parts you want evaluated:

>>> 5 * (6 - 1)
25
A floating point number is just a whole number with a
decimal point. When you divide in Python, you often get
a remainder that is displayed as a floating point number,
as in this example:

>>> 10 / 7
1.4285714285714286

If you just want to see whole numbers, you can use
integer division and lop off the remainder, as shown
here:

>>> 10 // 7
1

Likewise, if you are only interested in the remainder, you
can have modulus division show you what is left over:

>>> 10 % 7
3

You have the option to use other base systems instead of
just the default base 10. You have three choices in
addition to base 10: binary (base 2), octal (base 8), and
hex (base 16). You need to use prefixes before integers in
order for Python to understand that you are using a
different base:

0b or 0B for binary

0o or 0O for octal

0x or 0X for hex

From Python's perspective, these are still just integers,
and if you type any of them into the interpreter, it will
return the decimal value by default, as shown in this
example:
>>> 0xbadbeef
195935983

You can also convert back and forth by using the hex()
and bin() functions on the value you want to convert, as
in these examples:

>>> hex(195935983)
'0xbadbeef'

>>> bin(195935983)
'0b1011101011011011111011101111'
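To tie the prefixes and conversion functions together, the following sketch shows that all of these literals are the same integer underneath:

```python
# Binary, octal, and hex literals are all just integers to Python.
assert 0b1010 == 0o12 == 0xA == 10

# The conversion functions return prefixed strings.
print(hex(255))   # '0xff'
print(bin(255))   # '0b11111111'
print(oct(255))   # '0o377'
```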

Booleans
A Boolean has only two possible values, True and False.
You use comparison operators to evaluate between two
Boolean objects in Python. This data type is the
foundation for constructing conditional steps and
decisions within programs. Table 3-4 shows the various
Boolean comparison operators and some examples of
how to use them.

Table 3-4 Boolean Comparisons

Operator  What It Does              Example    Evaluates to
<         Less than                 5 < 10     True
>         Greater than              6.5 > 3.5  True
<=        Less than or equal to     0 <= -5    False
>=        Greater than or equal to  6 >= 6     True
==        Equal to                  5 == "5"   False
!=        Not equal to              5 != "5"   True
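You can verify the table's results directly; a comparison evaluates to a Boolean object that you can store in a variable and reuse. A quick sketch:

```python
# Comparison operators evaluate to the Boolean objects True or False.
passed = 6 >= 6
print(passed)          # True
print(5 == "5")        # False: an int is never equal to a str
print(5 != "5")        # True
print(type(passed))
```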

Strings
The string data type is a sequence of characters and uses
quotes to determine which characters are included. The
string 'Hello' is just a set of characters that Python
stores in order from left to right. Even if a string contains
a series of numbers, it can still be a string data type. If
you try to add a 1 to a string value, Python gives you an
error, as shown in this example:


>>> '10' + 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate str (not "int") to
str

This error tells you that you have to convert the string to
an integer or another data type to be able to use it as a
number (in a math formula, for example). The int()
function can convert a string value into an integer for
you, as shown in this example:

>>> int('10') + 1
11

A string is just a list of characters in a certain order that
Python keeps track of. In fact, this aspect of strings
makes them easy to manipulate. If you use the string
'DevNet', you can pull out any individual characters of
the string by knowing where it sits in the string index.
One thing to keep in mind is that indexes in Python
always start with 0, so if you want the very first character
in a string, your index value would be 0 and not 1. Figure
3-3 shows the string DevNet with its corresponding
index values.

Figure 3-3 DevNet String Index

If you assign the string DevNet to a variable, you can
separate and manipulate the component values of the
string by using the index value. You can use brackets to
specify the index number. The following example prints a
capital D from DevNet:

>>> a='DevNet'
>>> a[0]
'D'

You can also specify ranges to print. The colon operator
gives you control over whole sections of a string. The first
number is the beginning of the slice, and the second
number determines the end. The second number may be
confusing at first because it is intended to identify "up to
but not including" the last character. Consider this
example:

>>> a[0:3]
'Dev'

This example shows a 3 at the end of the slice. Because
the index starts at 0, index 3 is technically the fourth
character, but Python doesn't print the character at the
stop position and instead stops right before it. For new
Python programmers, this can be confusing, but
remember that Python is literal. If you think of an index
value as a box, in Figure 3-3, you want to stop at box 3
(but don't want to open the box). If you want to print the
whole string, just pick a number beyond the index value,
and Python will print everything, as in this example:

>>> a[0:6]
'DevNet'

If you omit a value for the first number, Python starts at
0, as in this example:

>>> a[:2]
'De'

If you omit the second value, Python prints to the end of
the string, as in this example:

>>> a[2:]
'vNet'

You can also reverse direction by using negative
numbers. If you put a negative first number, you start
from the end of the string, as in this example:

>>> a[-2:]
'et'

A negative value on the other side of the colon causes
Python to print using the end as a reference point, as in
this example:

>>> a[:-2]
'DevN'

You can perform math operations on strings as well. The
+ is used to add or concatenate two strings together, as
in this example:

>>> 'DevNet' + 'Rocks'
'DevNetRocks'

Multiplication works as well, as in this example:

>>> 'DevNet Rocks ' * 5
'DevNet Rocks DevNet Rocks DevNet Rocks DevNet Rocks DevNet Rocks '

There is a tremendous amount of string manipulation
you can do with Python, and there are a number of built-
in methods in the standard string library. These methods
are called with a dot after the variable name for a string.
Table 3-5 lists some commonly used string manipulation
methods, including the syntax used for each and what
each method does.

Table 3-5 String Methods

Method                                What It Does
str.capitalize()                      Capitalize the string
str.center(width[, fillchar])         Center justify the string
str.endswith(suffix[, start[, end]])  Return True if the string ends with the suffix
str.find(sub[, start[, end]])         Find the index position of the characters in a string
str.lstrip([chars])                   Remove whitespace characters from the beginning of the string
str.replace(old, new[, count])        Replace characters in the string
str.lower()                           Make the string all lowercase
str.rstrip([chars])                   Remove whitespace characters from the end of the string
str.strip([chars])                    Remove whitespace characters from the beginning and end of the string
str.upper()                           Make the string all uppercase
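A few of these methods in action; because strings are immutable, each method returns a new string, which also means the calls can be chained. A short sketch:

```python
# String methods return new strings, so calls can be chained.
raw = "  DevNet Rocks  "
print(raw.strip())                            # leading/trailing spaces removed
print(raw.strip().upper())
print(raw.strip().replace("Rocks", "Rules"))
print("DevNet".find("N"))                     # index of 'N'
print(raw)                                    # the original string is unchanged
```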

Lists
Python, unlike other programming languages, such as
C++ and Java, doesn’t have arrays. If you want to store a
bunch of values, you can use a list. You can use a variable
to store a collection of items in a list. To create a list, you
assign the contents of the list to a variable with the = and
[] and separate the items with commas, as in this
example:

>>> kids = ['Caleb', 'Sydney', 'Savannah']
>>> kids
['Caleb', 'Sydney', 'Savannah']

A list can contain any Python object, such as integers,
strings, and even other lists. A list can also be empty and
is often initialized in an empty state for programs that
pull data from other sources. To initialize a list in an
empty state, you just assign two brackets with nothing in
them, or you can use the built-in list() function:

emptylist = []
emptylist2 = list()
Lists are similar to strings in that each is a set of items
indexed by Python that you can interact with and slice
and dice. To pull out values, you just use the variable
name with brackets and the index number, which starts
at 0, as in this example:

>>> print(kids[1])
Sydney

Figure 3-4 shows a list from the perspective of the index.

Figure 3-4 List Index

Unlike strings, lists are mutable objects, which means
you can change parts of the list at will. With a string, you
can't change parts of the string without creating a new
string. This is not the case with lists, where you have a
number of ways to make changes. If you have a
misspelling, for example, you can change just one
element of the list, leaving the rest untouched, as in this
example:

>>> kids
['Caleb', 'Sidney', 'Savannah']
>>> kids[1]="Sydney"
>>> kids
['Caleb', 'Sydney', 'Savannah']
>>>

You can concatenate lists as well by using the + operator
to join two lists together. The list items do not need to be
unique. Python just connects the two together into a new
list, as shown in the following example:

>>> a = [1, 2, 4]
>>> b = [4, 5, 6]
>>> c = a + b
>>> print(c)
[1, 2, 4, 4, 5, 6]

Remember all of the slicing you saw with strings? The
same principles apply here, but instead of having a single
string with each letter being in a bucket, the elements in
the list are the items in the bucket. Don't forget the rule
about the second number after the colon, which means
"up to but not including." Here is an example:

>>> c = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> c[1:4]
[2, 3, 4]
>>> c[:-4]
[1, 2, 3, 4, 5, 6]
>>> c[:]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

Table 3-6 describes some of the most common list
methods.

Table 3-6 List Methods

Method                       What It Does
list.append(element)         Adds an element to the end of the list
list.clear()                 Removes everything from the list
list.copy()                  Returns a copy of the list
list.count(element)          Shows the number of elements with the specified value
list.extend(alist)           Adds the elements of a list to the end of the current list
list.index(element)          Returns the index number of the first element with a specified value
list.insert(index, element)  Adds an element at a specified index value
list.pop(index)              Removes an element at a specific index position, or if no index position is provided, removes the last item from the list
list.remove(element)         Removes a list item with a specified value
list.reverse()               Reverses the list order
list.sort()                  Sorts the list alphabetically and/or numerically
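Several of these methods working on one list; because lists are mutable, each call changes the same list object in place. A short sketch (the names are the ones used earlier):

```python
# List methods modify the list object in place.
kids = ["Caleb", "Sydney"]
kids.append("Savannah")    # add to the end
kids.insert(0, "Piper")    # add at index 0
kids.sort()                # alphabetical order
print(kids)
last = kids.pop()          # remove and return the last item
print(last)
print(kids.index("Piper"))
```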

Tuples
Tuples and lists are very similar. The biggest difference
between the two comes down to mutability. As discussed
earlier, Python data types are either mutable or
immutable. Lists are mutable, and tuples are immutable.
So why would you need these two types if they are so
similar? It all comes down to how Python accesses
objects and data in memory. When you have a lot of
changes occurring, a mutable data structure is preferred
because you don’t have to create a new object every time
you need to store different values. When you have a value
that is constant and referenced in multiple parts of a
program, an immutable data type (such as a tuple) is
more memory efficient and easier to debug. You don’t
want some other part of your program to make changes
to a crucial piece of data stored in a mutable data type.

To create a tuple, you use parentheses instead of
brackets. You can use the type() function to identify a
Python data type you have created. Here is an example:

>>> person = (2012, 'Mike', 'CCNA')
>>> person
(2012, 'Mike', 'CCNA')
>>> type(person)
<class 'tuple'>

You access data in a tuple the same way as in a list—by
using brackets and the index value of the item in the
tuple that you want to return:

>>> person[0]
2012

What you can't do with a tuple is make an assignment to
one of the values:

>>> person[0]=15
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item
assignment
Tuples can, however, be used to assign a set of variables
quickly:

>>> (a, b, c) = (12, 'Fred', 18)
>>> c
18

Dictionaries
A dictionary provides another way of creating a
collection of items. Why do you need another way of
storing data when you already have lists and tuples? A
list is an ordered set of items tracked by an index. What
if you need to access data that is tied to a certain value,
such as a person’s name? This capability is exactly why
you need dictionaries in Python. A dictionary saves a ton
of effort by giving you a built-in system for storing data
in a key:value pair. As when you use labels on files in a
filing cabinet, you can assign data to a key and retrieve it
by calling that key as if you are pulling a file from the
cabinet. Dictionaries don’t have any defined order; all
you need is the key—and not some index number—to get
access to your data. There are some rules regarding
dictionaries.

Keys: A dictionary's keys are limited to immutable values (int, float,
bool, str, tuple, and so on). You can use a tuple as a key because a tuple
is immutable, but you can't use a list as a key because a list is mutable.

Values: A value can be any Python object or any combination of
objects.

To create a dictionary, you use braces and your key and
value separated by a colon. You separate multiple items
with commas. Here's an example:

>>> cabinet = { "scores":(98,76,95),
"name":"Chris",
"company":"Cisco"}
>>> type(cabinet)
<class 'dict'>

Instead of using an index, you use the key, as shown in
this example:

>>> cabinet["scores"]
(98, 76, 95)
>>> cabinet["company"]
'Cisco'

To add more items to a dictionary, you can assign them
with a new key. You can even add another dictionary to
your existing dictionary, as shown in this example:

>>> cabinet["address"] = {"street":"123 Anywhere Dr",
"city":"Franklin", "state":"TN"}
>>> cabinet["address"]
{'street': '123 Anywhere Dr', 'city': 'Franklin', 'state': 'TN'}
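Dictionaries also make it easy to loop over every key:value pair by using the items() method; in this sketch the certs entry is an illustrative addition:

```python
# Looping over a dictionary with items() yields each key:value pair.
cabinet = {"name": "Chris", "company": "Cisco"}
cabinet["certs"] = ("CCNA", "CCIE")   # add a hypothetical key:value pair
pairs = []
for key, value in cabinet.items():
    pairs.append(f"{key} -> {value}")
print(pairs)
```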

Sets
A set in Python consists of an unordered grouping of
data and is defined by using the curly braces of a
dictionary, without the key:value pairs. Sets are mutable,
and you can add and remove items from the set. You can
create a special case of sets called a frozen set that makes
the set immutable. A frozen set is often used as the
source of keys in a dictionary (which have to be
immutable); it basically creates a template for the
dictionary structure. If you are familiar with how sets
work in mathematics, the various operations you can
perform on mutable sets in Python will make logical
sense. To define a set, do the following:

>>> numbs = {1, 2, 4, 5, 6, 8, 10}
>>> odds = {1, 3, 5, 7, 9}

To check that these are indeed sets, use the type()
function:

>>> type(odds)
<class 'set'>

To join two sets (just as in a mathematical join), you can
use the | operator to show a combined set with no
duplicates:

>>> numbs | odds
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

You can get an intersection of two sets and show what
numbers are in both by using the & operator:

>>> numbs & odds
{1, 5}
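Beyond union and intersection, the same two sets support difference and symmetric difference, and frozenset() produces the immutable variation mentioned earlier. A sketch reusing numbs and odds:

```python
# More set operations on the numbs and odds sets from the text.
numbs = {1, 2, 4, 5, 6, 8, 10}
odds = {1, 3, 5, 7, 9}
print(numbs - odds)             # difference: in numbs but not in odds
print(numbs ^ odds)             # symmetric difference: in one set, not both
frozen = frozenset(odds)        # immutable set, safe to use as a dict key
print({frozen: "odd numbers"})
```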

There are many ways to evaluate sets, and the Python
documentation can be used to explore them all. In
addition, Python has library collections that give you
even more options for ways to store data. Search the
Python documentation for “collections” and take a look
at ordered dictionaries, named tuples, and other
variations that may suit your data collecting needs. For
the purposes of the DEVASC exam, though, you do not
need to know data structures beyond the basic types
discussed here.

INPUT AND OUTPUT

Input and output pretty much define what a computer
does for you. In Python, the input() function and the
print() function are two essential components that
allow you to create interactive applications. In this
section you will learn how to leverage these powerful
functions in your applications.

Getting Input from the User

Python has the input() function to get information from
a user running your Python code. The user is asked a
question, and the program waits until the user types a
response. It really is as simple as that. The input()
function takes the characters that are entered and
automatically stores them as a string data type,
regardless of what the user enters. Here is an example:

>>> inpt = input('Type your name: ')
Type your name: Chris Jackson
>>> inpt
'Chris Jackson'

You assign a variable (in this case, inpt) to the input()
function with a text prompt so that the user knows what
is expected. That variable now holds the string the user
typed. What if you need to get an integer or a floating
point number? Since Python stores every input as a
string, you need to do a conversion on the data supplied.
Here is an example:

>>> inpt = float(input('What is the Temperature in F: '))
What is the Temperature in F: 83.5
>>> inpt
83.5
Here you asked the user for the temperature in
Fahrenheit, which can be expressed as a floating point
number. To make sure the variable holds the correct
type, you used the input() function inside the float()
function to convert the string into a floating point
number.
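If the user types something that float() cannot convert, Python raises a ValueError; a defensive variation wraps the conversion in try/except. The helper name here is hypothetical:

```python
# Hypothetical helper: convert input text to a float, or None if invalid.
def to_float(text):
    try:
        return float(text)
    except ValueError:
        return None

print(to_float("83.5"))   # a valid number converts cleanly
print(to_float("warm"))   # invalid text returns None instead of crashing
```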

The Mighty print() Function

The print() function provides output that can be
displayed in the user's terminal. Just like every other
function in Python, it is called with parentheses. This
function is usually the gateway to writing your first code
in Python, as in this example:

>>> print('Hello World')
Hello World

Every line printed with the print() function includes a
newline character (\n) at the end, which is a special
character that tells the terminal to advance one line. If
you use the \n sequence within the print() string, it is
interpreted as a new line, as shown in this example:

>>> print('Hello\nWorld')
Hello
World

There are numerous codes like this that can control how
text is displayed in a string and how Python interprets
the output. Without them, you would not be able to use,
for example, a backslash in your text output. Here are a
few of the most common ones:

\\: Backslash

\b: Backspace

\' : Single quote

\": Double quote

\t: Tab
\r: Carriage return

You can add multiple arguments to the print() function
by using commas between elements. This is very useful
in creating meaningful text, and the print() function
also handles concatenation of the different data types
automatically. Consider this example:

>>> print('Numbers in set', 1, ':', numbs)
Numbers in set 1 : {1, 2, 4, 5, 6, 8, 10}

By default, the print() function uses a separator


between elements. This is normally not an issue if you
want spaces to appear between words or elements. In the
previous example, you can see a space between the 1 and
the : that just doesn’t look good. You can fix it by
changing the separator that the print() function uses
with the sep='' argument (two single quotation marks
with nothing in between). Since you will be removing all
automatic spacing, you have to compensate by adding
spaces in your actual text where you need them.
Remember that separators come between elements and
don’t add anything to the start or end of the print()
function. Consider this example:

>>> print('Numbers in set ', 1, ': ', numbs, sep='' )
Numbers in set 1: {1, 2, 4, 5, 6, 8, 10}

One capability added in Python 3.6 and up is
f-string formatting. Not only are these strings
easier to read and less prone to syntax errors but they
allow you to write formatting code a lot faster. To create
an f-string, you put an f at the beginning of a string,
within the print() function, to let Python know what you
are doing, and then you can use {} within your string to
insert values or other functions. Here is an example:

>>> name = 'Piper'
>>> name2 = 'Chris'
>>> print(f'{name2} says Hi to {name}!')
Chris says Hi to Piper!

For more on formatting strings and beautifying your
output, see the Python documentation.
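f-strings also accept a format specifier after a colon inside the braces, which is handy for rounding numbers or padding fields. A small sketch (the values here are made up):

```python
pi = 3.14159
port = 48
# :.2f rounds to two decimal places; :>5 right-aligns in a 5-character field
print(f'Pi is roughly {pi:.2f}')
print(f'Ports: {port:>5}')
```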

FLOW CONTROL WITH CONDITIONALS AND LOOPS
So far you have been exposed to many of the building
blocks of the Python language. The real power of a
programming language is in the mechanisms you can use
to embed logic and respond to different conditions by
changing the flow of operation. Python has three primary
control statements:

if: An if statement is a conditional statement that can compare values
and make branching decisions.

for: A for loop is a counting loop that can iterate through data a
specific number of times.

while: The while loop can iterate forever when certain conditions are
met.

You can use these three statements in various
combinations to create very sophisticated programs. In
this section you will see how each of these statements
works.

If Statements
An if statement starts with the keyword if, sets up a
comparison to determine whether the statement it is
evaluating is true, and ends with a : to tell Python to
expect the clause (the action if the condition is true)
block of code next. As mentioned earlier in this chapter,
whitespace indenting matters very much in Python. The
clause of an if statement must be indented (four spaces
is the standard) from the beginning of the if statement.
The following example looks for a condition where the
variable n is equal to 20 and prints a message to the
console indicating that the number is indeed 20:

>>> n = 20
>>> if n == 20:
...     print('The number is 20')
...
The number is 20

The Python interpreter uses three dots to let you
continue the clause for the if statement. Notice that there
are four spaces between the dots and the print()
statement. Without these four spaces, Python would spit
back a syntax error like this:


>>> if n == 20:
... print('oops')
File "<stdin>", line 2
print('oops')
^
IndentationError: expected an indented block

The goal of an if statement is to determine the “truth” of
the elements under evaluation. This is Boolean logic,
meaning that the operators evaluate True or False (refer
to Table 3-4). The previous example is determining
whether a variable is equal to a specific integer. What if
the number is different? You might want to apply other
logic by asking more questions. This is where else if (elif
in Python) comes into play.
An if statement can have as many elif conditions as you
want to add to the conditional check. Good coding
practices recommend simplification, but there is no real
limit to how many you add. Here is an example that uses
two elif conditionals:

>>> n = 3
>>> if n == 17:
...     print('Number is 17')
... elif n < 10:
...     print('Number is less than 10')
... elif n > 10:
...     print('Number is greater than 10')
...
Number is less than 10

Since each if and elif statement does something only if
the condition identified is true, it may be helpful to have
a default condition that handles situations where none of
the if or elif statements are true. For this purpose, you
can assign a single else statement at the end, as shown
in Example 3-2.

Example 3-2 Adding a Final else Statement

score = int(input('What was your test score?:'))

if score >= 90:
    print('Grade is A')
elif score >= 80:
    print('Grade is B')
elif score >= 70:
    print('Grade is C')
elif score >= 60:
    print('Grade is D')
else:
    print('Grade is F')

What was your test score?:53
Grade is F
>>>

For Loops
The for statement allows you to create a loop that
continues to iterate through the code a specific number
of times. It is also referred to as a counting loop and can
work through a sequence of items, such as a list or other
data objects. The for loop is heavily used to parse
through data and is likely to be your go-to tool for
working with data sets. A for loop starts with the for
statement, followed by a variable name (which is a
placeholder used to hold each sequence of data), the in
keyword, some data set to iterate through, and then
finally a closing colon, as shown in this example:

>>> dataset=(1,2,3,4,5)
>>> for variable in dataset:
...     print(variable)
...
1
2
3
4
5

The for loop continues through each item in the data set,
and in this example, it prints each item. You can also use
the range() function to iterate a specific number of
times. The range() function can take arguments that let
you choose what number it starts with or stops on and
how it steps through each one. Here is an example:

>>> for x in range(3):
...     print(x)
...
0
1
2

By default, if you just give range() a number, it starts at
0 and goes by 1s until it reaches the number you
provided. Zero is a valid iteration, but if you don’t want
it, you can start at 1. Consider this example:

>>> for x in range(1,3):
...     print(x)
...
1
2

To change the increment from the default of 1, you can
add a third attribute to the range() statement. In the
following example, you start at 1 and increment by 3
until you reach 10:

>>> for x in range(1,11,3):
...     print(x)
...
1
4
7
10

Remember that these ranges are up to and not including
the final number you specify. If you want to go all the
way to 10 in this example, you need to set your range to
11.
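Because for loops are so often used to parse data sets, it may help to see one iterating over a list of dictionaries; the device records below are made up for illustration:

```python
# A hypothetical data set, similar in shape to what an API might return
devices = [
    {'name': 'rtr1', 'ip': '10.10.10.1'},
    {'name': 'sw1', 'ip': '10.10.10.8'},
]

for device in devices:
    # On each pass through the loop, device holds one dictionary from the list
    print(device['name'], '->', device['ip'])
```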

While Loops
Whereas the for loop counts through data, the while
loop is a conditional loop, and the evaluation of the
condition (as in if statements) being true is what
determines how many times the loop executes. This
difference is huge in that it means you can specify a loop
that could conceivably go on forever, as long as the loop
condition is still true. You can use else with a while
loop. An else statement after a while loop executes
when the condition for the while loop to continue is no
longer met. Example 3-3 shows a count and an else
statement.

Example 3-3 else Statement with a while Loop


>>> count = 1
>>> while (count < 5):
...     print("Loop count is:", count)
...     count = count + 1
... else:
...     print("loop is finished")
...
Loop count is: 1
Loop count is: 2
Loop count is: 3
Loop count is: 4
loop is finished

You can probably see similarities between the loop in
Example 3-3 and a for loop. The difference is that the
for loop was built to count, whereas this example uses
an external variable to determine how many times the
loop iterates. In order to build some logic into the while
loop, you can use the break statement to exit the loop.
Example 3-4 shows a break with if statements in an
infinite while loop.

Example 3-4 Using the break Statement to Exit a Loop

while True:
    string = input('Enter some text to print. \nType "done" to quit> ')
    if string == 'done':
        break
    print(string)
print('Done!')

Enter some text to print.
Type "done" to quit> Good luck on the test!
Good luck on the test!
Enter some text to print.
Type "done" to quit> done
Done!

Notice the condition this example is checking with the
while statement. Because True will always be True
from an evaluation perspective, the while condition is
automatically met. Without that if statement looking for
the string 'done', your loop would keep asking for input
and printing what you typed forever.

This chapter provides an overview of some of the key
concepts and capabilities in Python. The goal is to
prepare you to be able to read and understand code
snippets that you might see on the DEVASC exam. The
next two chapters dive into other aspects of working with
Python and Cisco APIs that are considered essential
skills. Make sure you have followed along with the code
examples here and that you are familiar with how to
construct these basic examples. The following chapters
build on these skills.

EXAM PREPARATION TASKS


As mentioned in the section “How to Use This Book” in
the Introduction, you have a couple of choices for exam
preparation: the exercises here, Chapter 19, “Final
Preparation,” and the exam simulation questions on the
companion website.

REVIEW ALL KEY TOPICS


Review the most important topics in this chapter, noted
with the Key Topic icon in the outer margin of the page.
Table 3-7 lists these key topics and the page number on
which each is found.

Table 3-7 Key Topics for Chapter 3

Key Topic Element | Description | Page Number

Paragraph | Whitespace in Python code blocks | 64

Table 3-2 | Python Data Types | 67

Paragraph | Strings | 70

DEFINE KEY TERMS


There are no key terms for this chapter.

ADDITIONAL RESOURCES
Python Syntax:
https://www.w3schools.com/python/python_syntax.asp

A Quick Tour of Python Language Syntax:
https://jakevdp.github.io/WhirlwindTourOfPython/02-basic-python-syntax.html

PEP 8—Style Guide for Python Code:
https://www.python.org/dev/peps/pep-0008/

Coding & APIs:
https://developer.cisco.com/startnow/#coding-apis-v0

Mutable vs Immutable Objects in Python:
https://medium.com/@meghamohan/mutable-and-immutable-side-of-python-c2145cf72747

Your Guide to the Python print() Function:
https://realpython.com/python-print/
Chapter 4

Python Functions, Classes, and Modules
This chapter covers the following topics:

Python Functions: This section provides an overview of working with and building Python functions.

Object-Oriented Programming and Python: This section describes key aspects of using object-oriented programming techniques.

Python Classes: This section provides an overview of creating and using Python classes.

Working with Python Modules: This section provides an overview of creating and using Python modules.

This chapter moves away from the basics introduced in
Chapter 3, “Introduction to Python,” and introduces
Python functions, classes, and modules. Building Python
functions allows for the creation of reusable code and is
the first step toward writing object-oriented code.
Classes are the Python tools used to construct Python
objects and make it easier to produce scalable
applications that are easy to maintain and readable.
Finally, this chapter introduces the wide world of Python
modules and how they can extend the capabilities of
Python and make your job of coding much easier.

“DO I KNOW THIS ALREADY?” QUIZ


The “Do I Know This Already?” quiz allows you to assess
whether you should read this entire chapter thoroughly
or jump to the “Exam Preparation Tasks” section. If you
are in doubt about your answers to these questions or
your own assessment of your knowledge of the topics,
read the entire chapter. Table 4-1 lists the major
headings in this chapter and their corresponding “Do I
Know This Already?” quiz questions. You can find the
answers in Appendix A, “Answers to the ‘Do I Know This
Already?’ Quiz Questions.”

Table 4-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section | Questions

Python Functions | 1–3

Object-Oriented Programming and Python | 4–5

Python Classes | 6–8

Working with Python Modules | 9–10

Caution
The goal of self-assessment is to gauge your mastery of
the topics in this chapter. If you do not know the
answer to a question or are only partially sure of the
answer, you should mark that question as wrong for
purposes of self-assessment. Giving yourself credit for
an answer that you correctly guess skews your self-
assessment results and might provide you with a false
sense of security.

1. Which of the following is the correct syntax for a Python function?
a. define function (arg):
b. function function(arg);
c. def function(arg):
d. func function(arg):

2. Which of the following is a valid Python function name?
a. 1function
b. __init__
c. True
d. Funct1on

3. When three single quotation marks are used on the next line directly after defining a function, what does this indicate?
a. Multi-line text
b. A docstring
c. A string value including double or single quotation marks
d. None of the above

4. What are key components of object-oriented programming in Python? (Choose two.)
a. Functions that can be performed on a data structure
b. Attributes that are stored in an object
c. Configuration templates
d. YAML files

5. Which of the following are benefits of OOP? (Choose all that apply.)
a. Reusable code
b. Easy to follow
c. Low coupling/high cohesion
d. Complex integration

6. Which of the following are used to define a class in Python? (Choose two.)
a. class classname(parent):
b. class classname:
c. def class classname(arg):
d. None of the above

7. What is a method?
a. A variable applied to a class
b. Syntax notation
c. A function within a class or an object
d. Something that is not used in a class

8. Which of the following describes inheritance?
a. A hierarchy for functions in Python
b. Class attributes and methods used as the starting point for another class
c. A function only applied to methods being used in another class
d. None of the above

9. Which module provides access to the file system and directory structure?
a. filesystem
b. open
c. system
d. os

10. Which module is a testing framework for Cisco infrastructure?
a. pyATS
b. pyang
c. devnetats
d. ncclient

FOUNDATION TOPICS
PYTHON FUNCTIONS
In Python, a function is a named block of code that can
take a wide variety of input parameters (or none at all)
and return some form of output back to the code that
called the function to begin with. It represents a key
concept in programming sometimes referred to as DRY,
which stands for Don’t Repeat Yourself. The idea behind
DRY is that if you perform some particular operations in
your code multiple times, you can simply create a
function to reuse that block of code anywhere you need it
instead of duplicating effort by typing it each time.

Python offers two types of functions: built-in functions
that are part of the standard library and functions you
create yourself. The standard library includes a huge
number of functions you can use in your program, like
print(), many of which you have already been
introduced to in Chapter 3. Building your own functions
is how you construct capabilities that are not already
present within the Python language.
To define a function in Python, you use the keyword def,
a name for the function, a set of parentheses enclosing
any arguments you want to pass to the function, and a
colon at the end. The name of a function must follow
these rules:

Must not start with a number

Must not be a reserved Python word, a built-in function (for example,
print(), input(), type()), or a name that has already been used as a
function or variable

Can be any combination of A–Z, a–z, 0–9, and the underscore (_);
note that a dash (-) is not valid in a Python name

The following is an example of an incredibly simple
function that could be entered into the interactive
Python interpreter:

Python 3.8.1 (v3.8.1:1b293b6006, Dec 18 2019, 14:08:53)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> def devnet():
        '''prints simple function'''
        print('Simple function')

>>> devnet()
Simple function

This function prints out the string “Simple function” any
time you call it with devnet(). Notice the indented
portion that begins on the next line after the colon.
Python expects this indented portion to contain all the
code that makes up the function. Keep in mind that
whitespace matters in Python. The three single quotation
marks that appear on the first line of the indented text of
the function are called a docstring and can be used to
describe what the function does.
As shown in the following example, you can use the built-in
Python function help() to learn what a function does
and any methods that can be used:


>>> help(devnet)
Help on function devnet in module __main__:

devnet()
prints simple function

USING ARGUMENTS AND PARAMETERS
An argument is some value (or multiple values) that you
pass to a function when you call the function within code.
Arguments allow a function to produce different results
and make code reuse possible. You simply place
arguments within the parentheses after a function name.
For example, this example shows how you can pass
multiple numeric arguments to the max() function to
have it return the largest number in the list:

>>> max(50, 5, 8, 10, 1)
50

Each function must define how it will use arguments,
using parameters to identify what gets passed in and how
it gets used. A parameter is simply a variable that is used
in a function definition that allows code within the
function to use the data passed to it. To get results back
from the function, you use the keyword return and the
object you want to pass back. The following example
shows how to create a function that subtracts two
numbers and stores the result in a local variable called
result that gets returned when the function is called:
>>> def sub(arg1, arg2):
        result = arg1 - arg2
        return result

>>> sub(10, 15)
-5

The variable result is local, meaning that it is not
accessible to the main Python script, and it is used only
within the function itself. If you tried to call result
directly, Python would produce an error saying that
result is not defined. You can, however, access global
variables from within the function; you might do this, for
example, to set certain constants or key variables that
any function can use (for example, IP addresses). The
difference in accessibility between a local variable and
global variable is important, because they allow your
code to maintain separation and can keep your functions
self-contained.
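A short sketch of this difference (the variable names here are made up): a global defined at the top level is readable inside a function, while a variable created inside the function is not visible outside it:

```python
MGMT_IP = '10.10.10.1'  # global variable, visible to any function

def build_url():
    # The function can read the global MGMT_IP without it being passed in
    result = 'https://' + MGMT_IP + '/api'  # result is local to the function
    return result

print(build_url())
# Calling print(result) out here would raise a NameError, since result is local
```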

The previous example uses positional arguments, which
must be passed to a function in a particular order.
Positional arguments work with a simple set of
consistently applied arguments, but if a function needs
more flexible alignment to parameters within a function,
you can use keyword arguments instead. A keyword
argument is a name/value pair that you pass to a
function. Instead of using position, you specify the
argument the function uses. It is a lot like assigning a
variable to an argument. In the previous example, arg1 is
subtracted from arg2, and if the positions of these
arguments were switched, you would get a different
result when subtracting the values. With keyword
arguments, it doesn’t matter in what order they are
passed to the function. Here is an example:

>>> sub(arg2=15, arg1=10)
-5
What happens if you don’t know the total number of
arguments that are being passed to a function? When
you read in data, you might not know how many
arguments to expect. Python allows you to use * and **
(often referred to as *args and **kwargs) to define any
number of arguments or keyword arguments. * and **
allow you to iterate through a list or other collection of
data, as shown in this example:

>>> def hello(*args):
        for arg in args:
            print("Hello", arg, "!")

>>> hello('Caleb', 'Sydney', 'Savannah')
Hello Caleb !
Hello Sydney !
Hello Savannah !

By using keyword arguments, you can send a list of
key/value pairs to a function, as in the following
example:

>>> def hello(**kwargs):
        for key, value in kwargs.items():
            print("Hello", value, "!")

>>> hello(kwarg1='Caleb', kwarg2='Sydney', kwarg3='Savannah')
Hello Caleb !
Hello Sydney !
Hello Savannah !

Note the use of the items() function in the for
statement to unpack and iterate through the values.

You can also supply a default value argument in case you
have an empty value to send to a function. By defining a
function with an assigned key value, you can prevent an
error. If the value in the function definition is not
supplied, Python uses the default, and if it is supplied,
Python uses what is supplied when the function is called
and then ignores the default value. Consider this
example:

>>> def greeting(name, message="Good morning!"):
        print("Hello", name + ', ' + message)

>>> greeting('Caleb')
Hello Caleb, Good morning!
>>> greeting('Sydney', "How are you?")
Hello Sydney, How are you?

OBJECT-ORIENTED PROGRAMMING
AND PYTHON
Python was developed as a modern object-oriented
programming (OOP) language. Object-oriented
programming is a computer programming paradigm that
makes it possible to describe real-world things and their
relationships to each other. If you wanted to describe a
router in the physical world, for example, you would list
all its properties, such as ports, software versions,
names, and IP addresses. In addition, you might list
different capabilities or functions of the router that you
would want to interact with. OOP was intended to model
these types of relationships programmatically, allowing
you to create an object that you can use anywhere in your
code by just assigning it to a variable in order to
instantiate it.
Objects are central to Python; in fact, Python really is
just a collection of objects interacting with each other. An
object is self-contained code or data, and the idea of OOP
is to break up a program into smaller, easier-to-
understand components. Up until now, you have mainly
seen procedural programming techniques, which take a
top-down approach and follow predefined sets of
instructions. While this approach works well for simple
programs, to write more sophisticated applications with
better scalability, OOP is often the preferred method
used by professional programmers. However, Python is
very flexible in that you can mix and match these two
approaches as you build applications.

Functions are an important part of the OOP principles of
reusability and object-oriented structure. For the
200-901 DevNet Associate DEVASC exam, you need to be
able to describe the benefits and techniques used in
Python to build modular programs. Therefore, you need
to know how to use Python classes and methods, which
are covered next.

PYTHON CLASSES
In Python, you use classes to describe objects. Think of a
class as a tool you use to create your own data structures
that contain information about something; you can then
use functions (methods) to perform operations on the
data you describe. A class models how something should
be defined and represents an idea or a blueprint for
creating objects in Python.

Creating a Class

Say that you want to create a class to describe a router.
The first thing you have to do is define it. In Python, you
define a class by using the class keyword, giving the
class a name, and then closing with a colon. PEP 8
(introduced in Chapter 3) recommends capitalizing a
class name to differentiate it from a variable. Here is a
simple example of creating a class in Python:

>>> class Router:
        pass

This example uses pass as a sort of placeholder that
allows the class to be defined and set up to be used as an
object. To make the class more useful, you can add some
attributes to it. In the case of a router, you typically have
some values that you want to have when you instantiate
the class. Every router has a model name, a software
version, and an IP address for management. You also
need to pass some values to get started. The first value is
always self. The reason for this will become obvious
when you instantiate the class: The self value passes the
object name that you select to instantiate the class. In the
following example, the object you will create is rtr1:

class Router:
    '''Router Class'''
    def __init__(self, model, swversion, ip_add):
        '''initialize values'''
        self.model = model
        self.swversion = swversion
        self.ip_add = ip_add

rtr1 = Router('iosV', '15.6.7', '10.10.10.1')

After defining the class, you add a docstring to document
what the class is for, and then you create a function
called __init__(), a special method used for the setup of
the class. (Methods whose names begin and end with
double underscores, such as __init__, are known as
dunder or magic methods.) Functions that are within the
class are called methods and become actions that you
can perform on the object you are creating. To store
attributes, you assign the values you pass in to variables
on self, and the object then stores those values as
attributes.
The last bit of code instantiates the object itself. Up until
now, you have been creating a template, and by assigning
data to the variables within the class, you have been
telling Python to build the object. Now you can access
any of the stored attributes of the class by using dot
notation, as shown here:

>>> rtr1.model
'iosV'

When you call rtr1.model, the interpreter displays the
value assigned to the variable model within the object.
You can also create more attributes that aren’t defined
during initialization, as shown in this example:

>>> rtr1.desc = 'virtual router'
>>> rtr1.desc
'virtual router'

This example shows how flexible objects are, but you
typically want to define any attributes as part of a class to
automate object creation instead of manually assigning
values. When building a class, you can instantiate as
many objects as you want by just providing a new
variable and passing over some data. Here is another
example of creating a second router object rtr2:

>>> rtr2 = Router('isr4221', '16.9.5', '10.10.10.5')
>>> rtr2.model
'isr4221'
Methods
Attributes describe an object, and methods allow you to
interact with an object. Methods are functions you define
as part of a class. In the previous section, you created an
object and applied some attributes to it. Example 4-1
shows how you can work with an object by using
methods. A method that allows you to see the details
hidden within an object without typing a bunch of
commands over and over would be a useful method to
add to a class. Building on the previous example,
Example 4-1 adds a new function called getdesc() to
format and print the key attributes of your router. Notice
that you pass self to this function only, as self can
access the attributes applied during initialization.

Example 4-1 Router Class Example



class Router:
    '''Router Class'''
    def __init__(self, model, swversion, ip_add):
        '''initialize values'''
        self.model = model
        self.swversion = swversion
        self.ip_add = ip_add

    def getdesc(self):
        '''return a formatted description of the router'''
        desc = (f'Router Model : {self.model}\n'
                f'Software Version : {self.swversion}\n'
                f'Router Management Address: {self.ip_add}')
        return desc

rtr1 = Router('iosV', '15.6.7', '10.10.10.1')
rtr2 = Router('isr4221', '16.9.5', '10.10.10.5')

print('Rtr1\n', rtr1.getdesc(), '\n', sep='')
print('Rtr2\n', rtr2.getdesc(), sep='')
There are two routers instantiated in this example: rtr1
and rtr2. Using the print function, you can call the
getdesc() method to return formatted text about the
object’s attributes. The following output would be
displayed:

Click here to view code image

Rtr1
Router Model :iosV
Software Version :15.6.7
Router Management Address:10.10.10.1

Rtr2
Router Model :isr4221
Software Version :16.9.5
Router Management Address:10.10.10.5

Inheritance

Inheritance in Python classes allows a child class to take
on attributes and methods of another class. In the
previous section, Example 4-1 creates a class for routers,
but what about switches? If you look at the Router
class, you see that all of the attributes apply to a switch
as well, so why not reuse the code already written for a
new Switch class? The only part of Example 4-1 that
wouldn’t work for a switch is the getdesc() method,
which prints information about a router. When you use
inheritance, you can replace methods and attributes that
need to be different. To inherit in a class, you create the
class as shown earlier in this chapter, but before the
colon, you add parentheses that include the class from
which you want to pull attributes and methods. It is
important to note that the parent class must come before
the child class in the Python code. Example 4-2 shows
how this works, creating a second class named Switch,
using the Router class as parent. In addition, it creates a
different getdesc() method that prints text about a
switch rather than about a router.

Example 4-2 Router Class and Switch Class with Inheritance

class Router:
    '''Router Class'''
    def __init__(self, model, swversion, ip_add):
        '''initialize values'''
        self.model = model
        self.swversion = swversion
        self.ip_add = ip_add

    def getdesc(self):
        '''return a formatted description of the router'''
        desc = (f'Router Model : {self.model}\n'
                f'Software Version : {self.swversion}\n'
                f'Router Management Address: {self.ip_add}')
        return desc

class Switch(Router):
    def getdesc(self):
        '''return a formatted description of the switch'''
        desc = (f'Switch Model : {self.model}\n'
                f'Software Version : {self.swversion}\n'
                f'Switch Management Address: {self.ip_add}')
        return desc

rtr1 = Router('iosV', '15.6.7', '10.10.10.1')
rtr2 = Router('isr4221', '16.9.5', '10.10.10.5')
sw1 = Switch('Cat9300', '16.9.5', '10.10.10.8')

print('Rtr1\n', rtr1.getdesc(), '\n', sep='')
print('Rtr2\n', rtr2.getdesc(), '\n', sep='')
print('Sw1\n', sw1.getdesc(), '\n', sep='')
You can add another variable named sw1 and instantiate
the Switch class just as you did the Router class, by
passing in attributes. If you create another print
statement using the newly created sw1 object, you see
the output shown in Example 4-3.

Example 4-3 Code Results of Using Class Inheritance

Rtr1
Router Model :iosV
Software Version :15.6.7
Router Management Address:10.10.10.1

Rtr2
Router Model :isr4221
Software Version :16.9.5
Router Management Address:10.10.10.5

Sw1
Switch Model :Cat9300
Software Version :16.9.5
Switch Management Address:10.10.10.8

To learn more about classes, methods, and inheritance,
you can refer to the Python documentation:
https://docs.python.org/3/tutorial/classes.html
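If a child class needs extra attributes rather than just a different method, it can reuse the parent's setup and then extend it by calling super().__init__(). This sketch is not from the book's examples; the vlan_count attribute is made up for illustration:

```python
class Router:
    def __init__(self, model, ip_add):
        self.model = model
        self.ip_add = ip_add

class Switch(Router):
    def __init__(self, model, ip_add, vlan_count):
        super().__init__(model, ip_add)  # run the parent's __init__ first
        self.vlan_count = vlan_count     # then add a switch-only attribute

sw1 = Switch('Cat9300', '10.10.10.8', 48)
print(sw1.model, sw1.vlan_count)
```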

WORKING WITH PYTHON MODULES

A central goal of OOP is to allow you to build modular
software that breaks code up into smaller,
easier-to-understand pieces. One big file with thousands of lines of
code would be extremely difficult to maintain and work
with. If you are going to break up your code into
functions and classes, you can also separate that code
into smaller chunks that hold key structures and classes
and allow them to be physically moved into other files,
called modules, that can be included in your main
Python code with the import statement. Creating
modular code provides the following benefits:

Easier readability/maintainability: Code written in a modular
fashion is inherently easier to read and follow. It’s like chapters in a
book providing groupings of similar concepts and topics. Even the best
programmers struggle to understand line after line of code, and
modularity makes maintaining and modifying code much easier.

Low coupling/high cohesion: Modular code should be written in
such a way that modules do not have interdependencies. Each module
should be self-contained so that changes to one module do not affect
other modules or code. In addition, a module should only include
functions and capabilities related to what the module is supposed to do.
When you spread your code around multiple modules, bouncing back
and forth, it is really difficult to follow. This paradigm is called low
coupling/high cohesion modular design.

Code reusability: Modules allow for easy reusability of your code,
which saves you time and makes it possible to share useful code.

Collaboration: You often need to work with others as you build
functional code for an organization. Being able to split up the work and
have different people work on different modules speeds up the code-
production process.

There are a few different ways you can use modules in Python. The first and easiest way is to use one of the many modules that are included in the Python standard library or install one of thousands of third-party modules by using pip. Much of the functionality you might need or think of has probably already been written, and using modules that are already available can save you a lot of time. Another way to use modules is to build them in the Python language by simply writing some code in your editor, giving the file a name, and appending a .py extension. Using your own custom modules does add a bit of processing overhead to your application, as Python is an interpreted language and has to convert your text into machine-readable instructions on the fly. Finally, you can program a module in the C language, compile it, and then add its capabilities to your Python program. Compared to writing your own modules in Python, this method results in faster runtime for your code, but it is a lot more work. Many of the third-party modules and those included as part of the standard library in Python are built this way.

Importing a Module
All modules are accessed the same way in Python: by
using the import command. Within a program—by
convention at the very beginning of the code—you type
import followed by the module name you want to use.
The following example uses the math module from the
standard library:


>>> import math

>>> dir(math)

['__doc__', '__file__', '__loader__', '__name__', '__package__',
'__spec__', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2',
'atanh', 'ceil', 'comb', 'copysign', 'cos', 'cosh', 'degrees',
'dist', 'e', 'erf', 'erfc', 'exp', 'expm1', 'fabs', 'factorial',
'floor', 'fmod', 'frexp', 'fsum', 'gamma', 'gcd', 'hypot', 'inf',
'isclose', 'isfinite', 'isinf', 'isnan', 'isqrt', 'ldexp', 'lgamma',
'log', 'log10', 'log1p', 'log2', 'modf', 'nan', 'perm', 'pi',
'pow', 'prod', 'radians', 'remainder', 'sin', 'sinh', 'sqrt',
'tan', 'tanh', 'tau', 'trunc']

After you import a module, you can use the dir() function to get a list of all the methods available as part of the module. The ones in the beginning with the __ are internal to Python and are not generally useful in your programs. All the others, however, are functions that are now available for your program to access. As shown in Example 4-4, you can use the help() function to get more details and read the documentation on the math module.

Example 4-4 math Module Help
>>> help(math)
Help on module math:

NAME
    math

MODULE REFERENCE
    https://docs.python.org/3.8/library/math

    The following documentation is automatically generated from the Python
    source files. It may be incomplete, incorrect or include features that
    are considered implementation detail and may vary between Python
    implementations. When in doubt, consult the module reference at the
    location listed above.

DESCRIPTION
    This module provides access to the mathematical functions
    defined by the C standard.

FUNCTIONS
    acos(x, /)
        Return the arc cosine (measured in radians) of x.

    acosh(x, /)
        Return the inverse hyperbolic cosine of x.

    asin(x, /)
        Return the arc sine (measured in radians) of x.
-Snip for brevity-

You can also use help() to look at the documentation on a specific function, as in this example:


>>> help(math.sqrt)
Help on built-in function sqrt in module math:

sqrt(x, /)
    Return the square root of x.

If you want to get the square root of a number, you can use the sqrt() function by calling math.sqrt and passing a value to it, as shown here:

>>> math.sqrt(15)
3.872983346207417

You have to type a module’s name each time you want to use one of its capabilities. This isn’t too painful if you’re using a module with a short name, such as math, but if you use a module with a longer name, such as the calendar module, you might wish you could shorten the module name. Python lets you do this by adding as and a short version of the module name to the end of the import command. For example, you can use this command to shorten the name of the calendar module to cal:

>>> import calendar as cal

Now you can use cal as an alias for calendar in your code, as shown in this example:


>>> print(cal.month(2020, 2, 2, 1))

February 2020
Mo Tu We Th Fr Sa Su
1 2
3 4 5 6 7 8 9
10 11 12 13 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29

Importing a whole module when you need only a specific method or function adds unneeded overhead. To help with this, Python allows you to import specific methods by using the from syntax. Here is an example of importing the sqrt() and tan() methods:


>>> from math import sqrt,tan


>>> sqrt(15)
3.872983346207417

As you can see here, you can import more than one
method by separating the methods you want with
commas.

Notice that you no longer have to use math.sqrt and can just call sqrt() as a function, since you imported only the module functions you needed. Less typing is always a nice side benefit.

The Python Standard Library


The Python standard library, which is automatically installed when you load Python, has an extensive range of prebuilt modules you can use in your applications. Many are built in C and can make life easier for programmers looking to solve common problems quickly. Throughout this book, you will see many of these modules used to interact programmatically with Cisco infrastructure. To get a complete list of the modules in the standard library, go to https://docs.python.org/3/library/. This documentation lists the modules you can use and also describes how to use them.

Importing Your Own Modules


As discussed in this chapter, modules are Python files
that save you time and make your code readable. To save
the class example from earlier in this chapter as a
module, you just need to save all of the code for defining
the class and the attributes and functions as a separate
file with the .py extension. You can import your own
modules by using the same methods shown previously
with standard library modules. By default, Python looks
for a module in the same directory as the Python
program you are importing into. If it doesn’t find the file
there, it looks through your operating system’s path
statements. To print out the paths your OS will search
through, consider this example of importing the sys
module and using the sys.path attribute:


>>> import sys

>>> sys.path

['',
 '/Users/chrijack/Documents',
 '/Library/Frameworks/Python.framework/Versions/3.8/lib/python38.zip',
 '/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8',
 '/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/lib-dynload',
 '/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages']

Depending on your OS (this output is from a Mac), the previous code might look different from what you see here, but it should still show you what Python sees, so it is useful if you are having trouble importing a module.

If you remove the class from the code shown in Example 4-2 and store it in a separate file named device.py, you can import the classes from your new module and end up with the following program, which is a lot more readable while still operating exactly the same:


from device import Router, Switch

rtr1 = Router('iosV', '15.6.7', '10.10.10.1')
rtr2 = Router('isr4221', '16.9.5', '10.10.10.5')
sw1 = Switch('Cat9300', '16.9.5', '10.10.10.8')

print('Rtr1\n', rtr1.getdesc(), '\n', sep='')
print('Rtr2\n', rtr2.getdesc(), '\n', sep='')
print('Sw1\n', sw1.getdesc(), '\n', sep='')

When you execute this program, you get the output shown in Example 4-5. If you compare these results with the results shown in Example 4-3, you see that they are exactly the same. Therefore, the device module is just Python code that is stored in another file but used in your program.

Example 4-5 Code Results of device.py Import as a Module

Rtr1
Router Model :iosV
Software Version :15.6.7
Router Management Address:10.10.10.1

Rtr2
Router Model :isr4221
Software Version :16.9.5
Router Management Address:10.10.10.5

Sw1
Switch Model :Cat9300
Software Version :16.9.5
Switch Management Address:10.10.10.8

Useful Python Modules for Cisco Infrastructure


This chapter cannot cover every single module that you
might find valuable when writing Python code to interact
with Cisco infrastructure. As you become more familiar
with Python, you will come to love and trust a wide range
of standard library and third-party modules. The
following list includes many that are widely used to
automate network infrastructure. Many of these modules
are used throughout this book, so you will be able to see
them in action. The following list provides a description
of each one, how to install it (if it is not part of the
standard library), and the syntax to use in your Python
import statement:

General-purpose standard library modules:

pprint: The pretty print module is a more intelligent print function that makes it much easier to display text and data by, for example, aligning data for better readability. Use the following command to import this module:

from pprint import pprint

sys: This module allows you to interact with the Python interpreter and manipulate and view values. Use the following command to import this module:

import sys

os: This module gives you access to the underlying operating system environment and file system. It allows you to open files and interact with OS variables. Use the following command to import this module:

import os

datetime: This module allows you to create, format, and work with calendar dates and times. It also enables timestamps and other useful additions to logging and data. Use the following command to import this module:

import datetime

time: This module allows you to add time-based delays and clock capabilities to your Python apps. Use the following command to import this module:

import time
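To see a few of these general-purpose modules working together, here is a short sketch (the exact values printed will vary by system):

```python
import datetime
import os
import time
from pprint import pprint

# Timestamp the start of a task, as you might when logging automation runs
start = datetime.datetime.now()

# Pause briefly, as you might between device polls
time.sleep(0.1)

# Collect some environment details and the elapsed time
info = {
    'cwd': os.getcwd(),
    'started': start.isoformat(),
    'elapsed_seconds': (datetime.datetime.now() - start).total_seconds(),
}

# pprint aligns nested data for easier reading than plain print()
pprint(info)
```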

Modules for working with data:

xmltodict: This module translates XML-formatted files into native Python dictionaries (key/value pairs) and back to XML, if needed. Use the following command to install this module:

pip install xmltodict

Use the following command to import this module:

import xmltodict

csv: This is a standard library module for understanding CSV files. It is useful for exporting Excel spreadsheets into a format that you can then import into Python as a data source. It can, for example, read in a CSV file and use it as a Python list data type. Use the following command to import this module:

import csv

json: This is a standard library module for reading JSON-formatted data sources and easily converting them to dictionaries. Use the following command to import this module:

import json

PyYAML: This module converts YAML files to Python objects that can be converted to Python dictionaries or lists. Use the following command to install this module:

pip install PyYAML

Use the following command to import this module:

import yaml

pyang: This isn’t a typical module you import into a Python program. It’s a utility written in Python that you can use to verify your YANG models, create YANG code, and transform YANG models into other data structures, such as XSD (XML Schema Definition). Use the following command to install this module:

pip install pyang

Tools for API interaction:

requests: This is a full library for interacting with HTTP services, and it is used extensively to interact with REST APIs. Use the following command to install this module:

pip install requests

Use the following command to import this module:

import requests

ncclient: This Python library helps with client-side scripting and application integration for the NETCONF protocol. Use the following command to install this module:

pip install ncclient

Use the following command to import this module:

from ncclient import manager

netmiko: This connection-handling library makes it easier to initiate SSH connections to network devices. This module is intended to help bridge the programmability gap between devices with APIs and those without APIs that still rely on command-line interfaces and commands. It relies on the paramiko module and works with multiple vendor platforms. Use the following command to install this module:

pip install netmiko

Use the following command to import this module:

from netmiko import ConnectHandler

pysnmp: This is a Python implementation of an SNMP engine for network management. It allows you to interact with older infrastructure components that lack APIs but do support SNMP for management. Use the following command to install this module:

pip install pysnmp

Use the following command to import this module:

import pysnmp

Automation tools:

napalm: napalm (Network Automation and Programmability Abstraction Layer with Multivendor Support) is a Python module that provides functionality that works in a multivendor fashion. Use the following command to install this module:

pip install napalm

Use the following command to import this module:

import napalm

nornir: This is an extendable, multithreaded framework with inventory management to work with large numbers of network devices. Use the following command to install this module:

pip install nornir

Use the following command to import this module:

from nornir.core import InitNornir

Testing tools:

unittest: This standard library testing module is used to test the functionality of Python code. It is often used for automated code testing and as part of test-driven development methodologies. Use the following command to import this module:

import unittest

pyats: This module was a gift from Cisco to the development community. Originally named Genie, it was an internal testing framework used by Cisco developers to validate their code for Cisco products. pyats is an incredible framework for constructing automated testing for infrastructure as code. Use the following command to install this module:

pip install pyats

(This installs just the core framework; see the pyats documentation for more options.)

Many parts of the pyats framework can be imported. Check the documentation on how to use it.
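As a taste of what Chapter 5 covers in depth, the following sketch shows unittest exercising a small helper function; the function name and behavior are invented for illustration:

```python
import unittest

def make_hostname(site, role, number):
    """Build a device hostname such as 'nyc-rtr-01' (hypothetical helper)."""
    return f'{site}-{role}-{number:02d}'

class TestMakeHostname(unittest.TestCase):
    def test_basic_hostname(self):
        self.assertEqual(make_hostname('nyc', 'rtr', 1), 'nyc-rtr-01')

    def test_number_is_zero_padded(self):
        self.assertTrue(make_hostname('sjc', 'sw', 9).endswith('-09'))

# Run the tests programmatically so the result is available to the script;
# in a standalone test file you would call unittest.main() instead
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMakeHostname)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```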

Chapter 5, “Working with Data in Python,” places more focus on techniques and tools used to interact with data in Python. This will round out the key Python knowledge needed to follow along with the examples in the rest of the book.

EXAM PREPARATION TASKS


As mentioned in the section “How to Use This Book” in
the Introduction, you have a couple of choices for exam
preparation: the exercises here, Chapter 19, “Final
Preparation,” and the exam simulation questions on the
companion website.

REVIEW ALL KEY TOPICS


Review the most important topics in this chapter, noted
with the Key Topic icon in the outer margin of the page.
Table 4-2 lists these key topics and the page number on
which each is found.
Table 4-2 Key Topics

Key Topic Element   Description                                Page Number
Paragraph           Defining functions                         88
Paragraph           The value of object-oriented programming   92
Paragraph           Defining classes                           92
Paragraph           Inheritance                                94
Paragraph           Python modules                             96
Bulleted list       Common Python modules                      101

DEFINE KEY TERMS


There are no key terms for this chapter.
Chapter 5

Working with Data in Python


This chapter covers the following topics:
File Input and Output: This section shows how to work with text files in Python.

Parsing Data: This section discusses how to parse data into native
Python objects.

Error Handling in Python: This section discusses how to use try-except-else-finally to work through errors in Python input.

Test-Driven Development: This section discusses using software testing to validate functionality.

Unit Testing: This section discusses how to use the internal Python
module unittest to automate Python code testing.

There are numerous ways to ingest data into a Python program. You can get input from the user, pull data from a website or an API, or read data from a file. The trick is being able to convert data from a data source into native Python structures and objects so that you can use it to automate your infrastructure. This chapter discusses a number of ways to use built-in and third-party modules to transform different types of data into Python dictionaries, lists, and other data collection objects. The rest of the book provides more detail on how to use these techniques to interact with Cisco infrastructure; this chapter provides a foundation for understanding how these data formats differ and how best to interact with them.

“DO I KNOW THIS ALREADY?” QUIZ


The “Do I Know This Already?” quiz allows you to assess
whether you should read this entire chapter thoroughly
or jump to the “Exam Preparation Tasks” section. If you
are in doubt about your answers to these questions or
your own assessment of your knowledge of the topics,
read the entire chapter. Table 5-1 lists the major
headings in this chapter and their corresponding “Do I
Know This Already?” quiz questions. You can find the
answers in Appendix A, “Answers to the ‘Do I Know This
Already?’ Quiz Questions.”

Table 5-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics SectionQuestions

File Input and Output 1, 2

Parsing Data 3–6

Error Handling in Python 7, 8

Test-Driven Development 9

Unit Testing 10, 11

Caution
The goal of self-assessment is to gauge your mastery of
the topics in this chapter. If you do not know the
answer to a question or are only partially sure of the
answer, you should mark that question as wrong for
purposes of self-assessment. Giving yourself credit for
an answer that you correctly guess skews your self-
assessment results and might provide you with a false
sense of security.

1. When parsing a text file, what determines the end of a line?
1. Return code
2. Nothing; Python sees it as one big string
3. \n or EoF
4. All of the above

2. What syntax would you use to open a text file to be written to?
1. data = open("text.txt", "w")
2. data = load("text.txt", "w")
3. load("text.txt", "w")
4. open("text.txt", "w")

3. Which of the following do you use to write to a CSV file in Python?
1. with open("text.csv", "a") as filehandle:
csv_writer = csv.write(filehandle)
csv_writer.writerow(data)
2. with open("text.csv", "a") as filehandle:
csv_writer.writerow(data)
3. with open("text.csv", "a") as filehandle:
csv_writer = csv.writer(filehandle)
csv_writer.writerow(data)
4. with open("text.csv", "a") as filehandle:
csv_writer = csv.writer(f)
csv_writer.writerow(data)

4. Which module is imported to read XML data?


1. xmlm
2. xmltodict
3. XMLParse
4. None of the above

5. Which methods are used for converting a native JSON file to Python and then back to JSON? (Choose two.)
1. load() and dump()
2. loads() and dump()
3. loads() and dumps()
4. load() and dumps()

6. What does YAML stand for?


1. Yet Another Markup Language
2. YAML Ain’t Markup Language
3. The name of its creator
4. None of the above

7. What is the syntax for error handling in Python?


1. try-except-else-finally
2. raise ErrorMessage
3. assertErrorValue
4. All of the above

8. When does the finally block execute?


1. After the try block is successful
2. After the except block
3. At the end of every try block
4. When an error code stops the else block

9. Test-driven development requires that developers:


1. Create a unit test for every bit of code they write
2. Know how to use DevOps tools for automated testing
3. Create a simple test that fails and then write code that allows the
test to succeed
4. Completely unnecessary in an Agile development shop

10. What is the difference between a unit test and an integration test? (Choose two.)
1. An integration test is for validation of how different parts of the
application work together.
2. An integration test verifies that the application operates as
expected.
3. A unit test verifies API functionality.
4. A unit test is most specific in scope and tests small bits of code.

11. Which class is inherited as part of a unit test?


1. unittest.testcase
2. unittest.TestCase
3. unittest
4. TestCase

FOUNDATION TOPICS
FILE INPUT AND OUTPUT
Pulling data from a file in Python is very straightforward.
To extract data from a text file, you can use native
Python capabilities. Binary files, on the other hand, need
to be processed by some module or another external
program before it is possible to extract the data from
them. The vast majority of your needs will be addressed
through text files, so that is what this section focuses on.

From Python’s perspective, a text file can be thought of as a sequence of lines, each ending with a newline character (\n in Python). (The 79-character line length you may see referenced is a PEP 8 convention for Python source code, not a property of text files.) There are just two functions that you need to know when working with a text file: open() and close().

To open a file and read in its contents, you have to first tell Python the name of the file you want to work with. You do this by using the open() function and assigning the output to a Python object (the variable readdata in this case). The function returns a file handle, which Python uses to perform various operations on the file. The code looks as follows:


readdata = open("textfile.txt", "r")

The open() function requires two arguments: the name of the file as a string and the mode in which you want to open the file. In the preceding example, it opens the file in read mode. There are numerous options you can use when you set the mode, and you can combine them in some cases to fine-tune how you want Python to handle the file. The following are some of the options:

r: Open for reading (default)

w: Open for writing, truncating the file first

x: Open for exclusive creation, failing if the file already exists

a: Open for writing, appending to the end of the file if it exists

b: Open in binary mode

t: Open in text mode (default)

+: Open for updating (reading and writing)

With the previous code, you now have a file handle object named readdata, and you can use its methods to interact with the file. To print the contents of the file, you can use the following:



print(readdata.read())

Line one of a text file
Line two of a text file, just like line one, but the second one.
Third line of a text file.

When using the open() function, you have to remember to close the file when you are finished reading from it. If you don’t, the file will stay open, and you might run into file lock issues with the operating system while the Python app is running. To close the file, you simply use the close() method on the readdata object:

readdata.close()

Keeping track of the state of the file lock and whether you opened and closed it can be a bit of a chore. Python provides another way to more easily work with files as well as other Python objects: the with statement (also called a context manager in Python). It uses the open() function but doesn’t require direct assignment to a variable. It also has better exception handling and automatically closes the file for you when you have finished reading in or writing to the file. Here’s an example:


with open("textfile.txt", "r") as data:
    print(data.read())

This is much simpler code, and you can use all of the
same methods to interact with the files as before. To
write to a file, you can use the same structure, but in this
case, because you want to append some data to the file,
you need to change how you open the file to allow for
writing. In this example, you can use "a+" to allow
reading and appending to the end of the file. Here is
what the code would look like:


with open("textfile.txt", "a+") as data:
    data.write('\nFourth line added by Python')

Notice the newline in front of the text you are appending to the file. It appears here so that the new text isn’t just tacked on at the very end of the existing text. Now you can read the file and see what you added:


with open("textfile.txt", "r") as data:
    print(data.read())

Line one of a text file
Line two of a text file, just like line one, but the second one.
Third line of a text file.
Fourth line added by Python

PARSING DATA
Imagine a world where all the data is in nice, neatly
formatted cells, completely structured, and always
consistent. Unfortunately, data is not so easily accessible,
as there are a multitude of types, structures, and formats.
This is why it is essential that you learn how to parse
data in some of the more common forms within your
Python programs.

Comma-Separated Values (CSV)


A CSV file is just a plaintext spreadsheet or database file.
All of those spreadsheets or databases that you have with
infrastructure information can be easily exported as CSV
files so that you can use them as source data in Python.
Each line in a CSV file represents a row, and commas are
used to separate the individual data fields to make it
easier to parse the data. Python has a built-in CSV
module that you can import that understands the CSV
format and simplifies your code. The following is an
example of a typical CSV file (in this case named
routerlist.csv):


"router1","192.168.10.1","Nashville"
"router2","192.168.20.1","Tampa"
"router3","192.168.30.1","San Jose"

This example shows a common asset list or device inventory, such as one that you might pull from a network management system or simply keep track of locally. To start working with this data, you have to import the csv module, and then you need to create a reader object to read your CSV file into. You first have to read the file into a file handle, and then you run the CSV read function on it and pass the results to a reader object. From there, you can begin to use the CSV data as you wish. You can create another variable and pass the reader object variable to the built-in list() function. Here is what the code would look like:


>>> import csv

>>> samplefile = open('routerlist.csv')

>>> samplereader = csv.reader(samplefile)

>>> sampledata = list(samplereader)

>>> sampledata

[['router1', '192.168.10.1', 'Nashville'], ['router2', '192.168.20.1', 'Tampa'], ['router3', '192.168.30.1', 'San Jose ']]

In this example, you now have a list of lists that includes
each row of data. If you wanted to manipulate this data,
you could because it’s now in a native format for Python.
Using list notation, you can extract individual pieces of
information:


>>> sampledata[0]
['router1', '192.168.10.1', 'Nashville']
>>> sampledata[0][1]
'192.168.10.1'

Using with, you can iterate through the CSV data and
display information in an easier way:


import csv

with open("routerlist.csv") as data:
    csv_list = csv.reader(data)
    for row in csv_list:
        device = row[0]
        location = row[2]
        ip = row[1]
        print(f"{device} is in {location.rstrip()} and has IP {ip}.")

Notice the rstrip() function used to format the location variable? You use it because the last entry in your CSV file will have a whitespace character at the very end when it is read into Python, because it is at the very end of the file. If you don’t get rid of it (by using rstrip()), your formatting will be off.

The output of this code is as follows:



router1 is in Nashville and has IP 192.168.10.1.
router2 is in Tampa and has IP 192.168.20.1.
router3 is in San Jose and has IP 192.168.30.1.

If you want to add a fourth device to the CSV file, you can follow a process very similar to what you did with text files. Example 5-1 shows how to add a little interaction from the command line to fill in the fields and create a Python list with details on the new router. Instead of using a reader object, this example uses a writer object to store the formatted CSV data and then write it to the file.

Example 5-1 Code and Input for a CSV File


import csv

print("Please add a new router to the list")
hostname = input("What is the hostname? ")
ip = input("What is the ip address? ")
location = input("What is the location? ")

router = [hostname, ip, location]

with open("routerlist.csv", "a") as data:
    csv_writer = csv.writer(data)
    csv_writer.writerow(router)

<Below is interactive from the terminal after running the above code>
Please add a new router to the list
What is the hostname? router4
What is the ip address? 192.168.40.1
What is the location? London

If you run the code shown in Example 5-1 and input details for router4, now when you display the router list, you have the new router included as well:


router1 is in Nashville and has IP 192.168.10.1.
router2 is in Tampa and has IP 192.168.20.1.
router3 is in San Jose and has IP 192.168.30.1.
router4 is in London and has IP 192.168.40.1.
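The csv module also offers csv.DictReader, which maps each row to a dictionary keyed by column name. Because routerlist.csv has no header row, you supply the field names yourself; the sketch below feeds the same rows in from a string so it is self-contained:

```python
import csv
import io

# Same rows as routerlist.csv, supplied as a string for a self-contained demo
csv_text = '"router1","192.168.10.1","Nashville"\n' \
           '"router2","192.168.20.1","Tampa"\n'

reader = csv.DictReader(io.StringIO(csv_text),
                        fieldnames=['device', 'ip', 'location'])
routers = list(reader)

for row in routers:
    print(f"{row['device']} is in {row['location']} and has IP {row['ip']}.")
```

Accessing fields by name instead of by index (row['ip'] rather than row[1]) makes the code easier to read and less fragile if the column order changes.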

JavaScript Object Notation (JSON)

JavaScript Object Notation (JSON) is a data structure derived from the JavaScript programming language, but it can be used as a portable data structure for any programming language. It was built to be an easily readable and standard way of transporting data back and forth between applications. JSON is heavily used in web services and is one of the core data formats you need to know how to use in order to interact with Cisco infrastructure. The data structure is built around key/value pairs that simplify the mapping of data and its retrieval. Example 5-2 shows an example of JSON.

Example 5-2 JSON



{
    "interface": {
        "name": "GigabitEthernet1",
        "description": "Router Uplink",
        "enabled": true,
        "ipv4": {
            "address": [
                {
                    "ip": "192.168.0.2",
                    "netmask": "255.255.255.0"
                }
            ]
        }
    }
}

In Example 5-2, you can see the structure that JSON provides. interface is the main data object, and you can see that its value is multiple key/value pairs. This nesting capability allows you to structure very sophisticated data models. Notice how similar to a Python dictionary the data looks. You can easily convert JSON to lists (for a JSON array) and dictionaries (for a JSON object) with the built-in json module. There are four functions that you work with to perform the conversion of JSON data into Python objects and back:

load(): This allows you to import native JSON from a file and convert it to a Python dictionary.

loads(): This will import JSON data from a string for parsing and manipulating within your program.

dump(): This is used to write JSON data from Python objects to a file.

dumps(): This allows you to take JSON dictionary data and convert it into a serialized string for parsing and manipulating within Python.

The s at the end of dump and load refers to a string, as in “dump string.” To see this in action, you load the JSON file and map the file handle to a Python object (data) like so:

import json

with open("json_sample.json") as data:
    json_data = data.read()

json_dict = json.loads(json_data)

The object json_dict has taken the output of json.loads(json_data) and now holds the JSON object as a Python dictionary:

>>> type(json_dict)
<class 'dict'>

>>> print(json_dict)
{'interface': {'name': 'GigabitEthernet1', 'description': 'Router Uplink', 'enabled': True, 'ipv4': {'address': [{'ip': '192.168.0.2', 'netmask': '255.255.255.0'}]}}}

You can now modify any of the key/value pairs, as in this example, where the description is changed:

>>> json_dict["interface"]["description"] = "Backup Link"

>>> print(json_dict)
{'interface': {'name': 'GigabitEthernet1', 'description': 'Backup Link', 'enabled': True, 'ipv4': {'address': [{'ip': '192.168.0.2', 'netmask': '255.255.255.0'}]}}}

In order to save the new JSON object back to a file, you have to use the dump() function (without the s) to write the Python dictionary back out as JSON. To make the file easier to read, you can use the indent keyword:

with open("json_sample.json", "w") as fh:
    json.dump(json_dict, fh, indent=4)

Now if you load the file again and print, you can see the
stored changes, as shown in Example 5-3.

Example 5-3 Loading the JSON File and Printing the Output to the Screen

>>> with open("json_sample.json") as data:
...     json_data = data.read()
...     print(json_data)
...
{
    "interface": {
        "name": "GigabitEthernet1",
        "description": "Backup Link",
        "enabled": true,
        "ipv4": {
            "address": [
                {
                    "ip": "192.168.0.2",
                    "netmask": "255.255.255.0"
                }
            ]
        }
    }
}
>>>
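The string-based pair works the same way without the file step. The following minimal sketch (the configuration dictionary here is invented for illustration) round-trips a Python object through dumps() and loads():

```python
import json

# A small invented configuration dictionary for illustration.
config = {"interface": {"name": "GigabitEthernet1", "enabled": True}}

as_text = json.dumps(config, indent=4)   # Python object -> JSON string
round_trip = json.loads(as_text)         # JSON string -> Python object

print(as_text)
print(round_trip == config)  # the round trip preserves the data
```

Note that JSON's lowercase true becomes Python's True on the way in, and back again on the way out.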

Extensible Markup Language (XML)

Extensible Markup Language (XML) is a very common data format that is used heavily in configuration automation. Parsing XML is similar to using other data formats, in that Python natively understands and can support XML encoding and decoding. The following is a very simple example of what XML structure looks like:

<device>
    <Hostname>Rtr01</Hostname>
    <IPv4>192.168.1.5</IPv4>
    <IPv6> </IPv6>
</device>

It should come as no surprise that XML looks a bit like HTML syntax; it was designed to work hand-in-hand with HTML for data transport and storage between web services and APIs. XML has a tree structure, with the root element at the very top. There is a parent/child relationship between elements. In the preceding example, device is the root element, and Hostname, IPv4, and IPv6 are its child elements. Just as with HTML, a tag has meaning and is used to enclose the relationships of the elements with a start tag (<>) and a closing tag (</>). It's not all that different from JSON in that a tag acts as a key with a value. You can also assign attributes to a tag by using the following syntax:

attribute name="some value"

This works the same as an element in that it can provide a way to represent data. Example 5-4 shows an example of an IETF interface YANG model in XML.

Example 5-4 YANG Model Represented in XML

<?xml version="1.0" encoding="UTF-8" ?>
<interface xmlns="ietf-interfaces">
    <name>GigabitEthernet2</name>
    <description>Wide Area Network</description>
    <enabled>true</enabled>
    <ipv4>
        <address>
            <ip>192.168.1.5</ip>
            <netmask>255.255.255.0</netmask>
        </address>
    </ipv4>
</interface>

To work with this, you can use the native XML library, but it has a bit of a learning curve and can be a little hard to use if you just want to convert XML into something you can work with in Python. To make it easier, you can use a module called xmltodict to convert XML into an ordered dictionary in Python. This is a special class of dictionary that does not allow elements to change order. Since dictionaries store key/value pairs, the order in which those pairs are stored is normally not a problem, but in the case of XML, order matters. Example 5-5 reads in the XML from Example 5-4 and converts it to an ordered dictionary.
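To see why the native library is wordier, here is a minimal sketch using the standard xml.etree.ElementTree module against a trimmed-down version of the Example 5-4 document. Because the document declares a default namespace, every tag lookup has to carry a namespace map:

```python
import xml.etree.ElementTree as ET

xml_text = """<?xml version="1.0" encoding="UTF-8" ?>
<interface xmlns="ietf-interfaces">
    <name>GigabitEthernet2</name>
    <enabled>true</enabled>
</interface>"""

root = ET.fromstring(xml_text)
# ElementTree folds the default namespace into every tag name...
print(root.tag)  # {ietf-interfaces}interface
# ...so element lookups must spell the namespace out each time.
ns = {"if": "ietf-interfaces"}
print(root.find("if:name", ns).text)  # GigabitEthernet2
```

The xmltodict approach shown in Example 5-5 hides this namespace bookkeeping behind ordinary dictionary keys, which is why it is often the more convenient choice.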

Example 5-5 Reading in XML and Printing the Imported Dictionary to the Command Line

import xmltodict

with open("xml_sample.xml") as data:
    xml_example = data.read()

xml_dict = xmltodict.parse(xml_example)

>>> print(xml_dict)
OrderedDict([('interface', OrderedDict([('@xmlns', 'ietf-interfaces'), ('name', 'GigabitEthernet2'), ('description', 'Wide Area Network'), ('enabled', 'true'), ('ipv4', OrderedDict([('address', OrderedDict([('ip', '192.168.0.2'), ('netmask', '255.255.255.0')]))]))]))])

Now that you have the XML in a Python dictionary, you can modify an element, as shown here:

xml_dict["interface"]["ipv4"]["address"]["ip"] = "192.168.55.3"

You can see your changes in XML format by using the unparse function (see Example 5-6). You can use the pretty=True argument to format the XML to make it a bit easier to read. XML doesn't care about whitespace, but humans do for readability.

Example 5-6 Printing from the Command Line with the unparse Function

>>> print(xmltodict.unparse(xml_dict, pretty=True))
<?xml version="1.0" encoding="utf-8"?>
<interface xmlns="ietf-interfaces">
    <name>GigabitEthernet2</name>
    <description>Wide Area Network</description>
    <enabled>true</enabled>
    <ipv4>
        <address>
            <ip>192.168.55.3</ip>
            <netmask>255.255.255.0</netmask>
        </address>
    </ipv4>
</interface>

To write these changes back to your original file, you can use the following code:

with open("xml_sample.xml", "w") as data:
    data.write(xmltodict.unparse(xml_dict, pretty=True))

YAML Ain’t Markup Language (YAML)

YAML is an extremely popular human-readable format for constructing configuration files and storing data. It was built for the same use cases as XML but has a much simpler syntax and structure. It uses Python-like indentation to differentiate blocks of information and was actually built based on JSON syntax but with a whole host of features that are unavailable in JSON (such as comments). If you have ever used Docker or Kubernetes, you have undoubtedly run into YAML files. The following is an example of YAML:

---
interface:
  name: GigabitEthernet2
  description: Wide Area Network
  enabled: true
  ipv4:
    address:
    - ip: 172.16.0.2
      netmask: 255.255.255.0

Notice that a YAML object has minimal syntax, all data that is related has the same indentation level, and key/value pairs are used to store data. YAML can also represent a list by using the - character to identify elements, as in the following example of IP addresses:

---
addresses:
- ip: 172.16.0.2
  netmask: 255.255.255.0
- ip: 172.16.0.3
  netmask: 255.255.255.0
- ip: 172.16.0.4
  netmask: 255.255.255.0

To work with YAML in Python, you need to install and import the PyYAML module. Once you import it into your code, you can convert YAML to Python objects and back again. YAML objects are converted to dictionaries, and YAML lists automatically become Python lists. The two functions that perform this magic are yaml.load, to convert from YAML objects into Python, and yaml.dump, to convert Python objects back to YAML. Just as in the other data examples, you can load a YAML file and then pass it to yaml.load to work its magic. The latest PyYAML module requires that you add an argument to tell it which loader you want to use. This is a security precaution so that your code will not be vulnerable to arbitrary code execution from a bad YAML file. Here is what the code looks like:

import yaml

with open("yaml_sample.yaml") as data:
    yaml_sample = data.read()

yaml_dict = yaml.load(yaml_sample, Loader=yaml.FullLoader)

The variable yaml_dict is now a dictionary object containing your YAML file; had the top-level structure been a YAML list, it would have created a Python list instead:

>>> type(yaml_dict)
<class 'dict'>

>>> yaml_dict
{'interface': {'name': 'GigabitEthernet2', 'description': 'Wide Area Network', 'enabled': True, 'ipv4': {'address': [{'ip': '192.168.0.2', 'netmask': '255.255.255.0'}]}}}

As before, you can modify this object to your liking. For example, you can change the interface name to GigabitEthernet1, as shown in Example 5-7.

Example 5-7 Changing an Interface Name Within the Python Dictionary and Printing the Results

>>> yaml_dict["interface"]["name"] = "GigabitEthernet1"

>>> print(yaml.dump(yaml_dict, default_flow_style=False))
interface:
  description: Wide Area Network
  enabled: true
  ipv4:
    address:
    - ip: 192.168.0.2
      netmask: 255.255.255.0
  name: GigabitEthernet1

To write these changes back to your file, use the following code:

with open("yaml_sample.yaml", "w") as data:
    data.write(yaml.dump(yaml_dict, default_flow_style=False))

ERROR HANDLING IN PYTHON

Whenever you are working with code, errors are bound to happen. In Python, errors often halt the execution of code, and the interpreter spits out some type of cryptic message. What if you wanted Python to tell the users what they did wrong and let them try again or perform some other task to recover from the error? That’s where the try-except-else-finally code blocks come into play.

You have seen quite a bit of file access in this chapter. What happens if you ask the user for the filename instead of hard-coding it? If you did this, you would run the risk of a typo halting your program. In order to add some error handling to your code, you can use the try statement. Example 5-8 shows an example of how this works.

Example 5-8 try-except-else-finally Code Example

x = 0
while True:
    try:
        filename = input("Which file would you like to open? :")
        with open(filename, "r") as fh:
            file_data = fh.read()
    except FileNotFoundError:
        print(f"Sorry, {filename} doesn't exist! Please try again.")
    else:
        print(file_data)
        x = 0
        break
    finally:
        x += 1
        if x == 3:
            print('Wrong filename 3 times.\nCheck name and Rerun.')
            break

In this example, a variable keeps track of the number of times the while loop has run. This is useful for building in some logic to make sure the program doesn’t drive the users crazy by constantly asking them to enter a filename. Next is an infinite while loop that uses the fact that the Boolean True will always result in continuing to loop through the code. Next is the try statement, which contains the block of code you want to subject to error handling. You ask the user to enter a filename to open, and it is stored in the filename variable. This variable is used with open() to open a read-only text file and use the file handle object fh. The file handle object uses read() to store the text file in the file_data variable. If Python can’t find the file specified, the except FileNotFoundError block of code is executed, printing an error message with the file’s name and informing the user to try again. The else block runs only if an exception does not occur and the filename can be found. The file_data is printed, x is set to 0 (to empty the counter), the loop is stopped, and the finally block is run. The finally block runs each time through the loop, regardless of whether an exception occurs. The x variable is incremented each time through the loop, and if the user gets the wrong filename three times, a message is printed, saying the user tried three times and to check the file. At this point, the loop is broken, and the script is halted.
Here is what the program output would look like with a valid test.txt file in the script directory:

Which file would you like to open? :test
Sorry, test doesn't exist! Please try again.
Which file would you like to open? :test.txt
Test file with some text.
Two lines long.

Here is what the output would look like with three wrong choices:

Which file would you like to open? :test
Sorry, test doesn't exist! Please try again.
Which file would you like to open? :test2
Sorry, test2 doesn't exist! Please try again.
Which file would you like to open? :test3
Sorry, test3 doesn't exist! Please try again.
Wrong filename 3 times.
Check name and Rerun.

There are quite a few other error-handling capabilities available in Python, and if you want to make your applications more user friendly, it would be worth your time to explore them. The latest documentation can be found at https://docs.python.org/3/tutorial/errors.html. This documentation discusses custom errors and provides more examples of types of errors you can use with the previous sample code.
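As a taste of what that documentation covers, here is a minimal sketch of defining and catching a custom exception; the exception class and checker function names are invented for illustration:

```python
# Hypothetical exception type: subclassing Exception is all it takes.
class InvalidInterfaceError(Exception):
    """Raised when an interface name fails validation."""

def check_interface(name):
    # Raise the custom error for anything that isn't a GigabitEthernet port.
    if not name.startswith("GigabitEthernet"):
        raise InvalidInterfaceError(f"Unknown interface: {name}")
    return name

try:
    check_interface("FastEthernet0")
except InvalidInterfaceError as err:
    print(f"Caught: {err}")
```

Custom exceptions like this slot directly into the try-except-else-finally pattern shown in Example 5-8.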

TEST-DRIVEN DEVELOPMENT
Test-driven development (TDD) is an interesting concept
that at first glance may seem completely backward. The
idea is that you build a test case first, before any software
has been created or modified. The goal is to streamline
the development process by focusing on only making
changes or adding code that satisfies the goal of the test.
In normal testing, you test after the software is written,
which means you spend your time chasing errors and
bugs more than writing code. By writing the test first,
you spend your time focused on writing only what is
needed and making your code simple, easier to
understand, and hopefully bug free. Figure 5-1 shows the
TDD process in action.

Figure 5-1 Test-Driven Development in Action

The following are the five steps of TDD:

Step 1. Write a test: Write a test that tests for the new class or function that you want to add to your code. Think about the class name and structure you will need in order to call the new capability that doesn’t exist yet—and nothing more.

Step 2. Test fails: Of course, the test fails because you haven’t written the part that works yet. The idea here is to think about the class or function you want and test for its intended output. This initial test failure shows you exactly where you should focus your code writing to get it to pass. This is like starting with your end state in mind, which is the most effective way to accomplish a goal.

Step 3. Write some code: Write only the code needed to make the new function or class successfully pass. This is about efficiency and focus.

Step 4. Test passes: The test now passes, and the code works.

Step 5. Refactor: Clean up the code as necessary, removing any test stubs or hard-coded variables used in testing. Refine the code, if needed, for speed.
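The five steps above can be sketched in a single file; the to_uppercase() helper and its test are invented for illustration, and the comments mark where each TDD phase falls:

```python
import unittest

# Step 1: write the test first, against a function that does not exist yet.
class TestToUppercase(unittest.TestCase):
    def test_returns_uppercase(self):
        self.assertEqual(to_uppercase("network"), "NETWORK")

# Step 2: running the suite at this point fails with a NameError
# (the "red" phase), because to_uppercase is not defined.

# Step 3: write only the code needed to make the test pass.
def to_uppercase(text):
    return text.upper()

# Step 4: run the suite again and watch it pass; step 5 would be refactoring.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestToUppercase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("test passed:", result.wasSuccessful())
```

The "Unit Testing" section that follows covers the unittest machinery used here in much more depth.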

TDD may seem like a waste of time initially. Why write tests for stuff you know isn’t going to pass? Isn’t all of this testing just wasted effort? The benefit of this style of development is that it starts with the end goal in mind, by defining success right away. The test you create is laser focused on the application’s purpose and a clear outcome. Many programmers add too much to their code by trying to anticipate future needs or building in too much complexity for an otherwise simple problem. TDD works extremely well with the iterative nature of Agile development, with the side benefit of having plenty of test cases to show that the software works.

UNIT TESTING
Testing your software is not optional. Every script and application that you create has to go through testing of some sort. Maybe it’s just testing your syntax in the interactive interpreter or using an IDE and trying your code as you write it. While this is software testing, it’s not structured and often is not repeatable. Did you test all options? Did you validate your expectations? What happens if you send unexpected input to a function? These are some of the reasons using a structured and automated testing methodology is crucial to creating resilient software.

A unit test is a type of test that is conducted on small, functional aspects of code. It’s the lowest level of software testing and is interested in the logic and operation of only a single function in your code. That’s not to say that you can’t perform multiple tests at the same time. Computers are great at performing repetitive tasks, but the goal is for each test to be on one function at a time so that the testing is specific and consistent. Remember that a unit is the smallest testable part of your software.

There are other types of testing that you may hear about,
such as integration testing and functional testing. The
differences between these types of testing and unit
testing come down to the scope of the test. As
mentioned, a unit test is testing a small piece of code,
such as a method or function. An integration test, on the
other hand, tests how one software component works
with the rest of the application. It is often used when
modules of an application are developed by separate
teams or when a distributed application has multiple
components working together. A functional test (also
called an end-to-end test) is the broadest in scope from a
testing perspective. This is where the entire system is
tested against the functional specifications and
requirements of the software application. Figure 5-2
shows a testing pyramid to put it in perspective.

Figure 5-2 Testing Pyramid

Python has a built-in unit test module, named unittest. This module is quite full featured and can support a tremendous number of test cases. There are other testing modules that you can use, such as Nose and PyTest (who comes up with these names?), but for the purposes of the 200-901 DevNet Associate DEVASC exam, you need to know how unittest works. In order to use unittest, you need a bit of code to test. Here is a simple function that computes the area of a circle:

from math import pi

def area_of_circle(r):
    return pi*(r**2)

You import the pi constant from the math module to make it a little easier. A function is defined with the name area_of_circle that takes the argument r. The function computes the area of a circle and returns the value. This is very simple, but what happens if the function is called and odd values are passed to it? You guessed it: lots of errors. So in order to test this function, you can create a unit test.
Certain conventions must be followed for a unit test.
While you can create a unit test that has the code that
you want to test all in the same file, it’s a better idea to
use object-oriented principles when building tests. In
this case, the function you want to test is saved as
areacircle.py, so following good practices you should
name your unit test file test_areacircle.py. This makes it
easy to differentiate the two. You should also import the
unittest module, and from areacircle you can import the
area_of_circle function. Import the pi method from
math so that you can test your results. The import
statements would look as follows:

Click here to view code image

import unittest
from areacircle import area_of_circle
from math import pi

Next, you need to create a class for your test. You can
name it whatever you want, but you need to inherit
unittest.TestCase from the unittest module. This is
what enables the test function methods to be assigned to
your test class. Next, you can define your first test
function. In this case, you can test various inputs to
validate that the math in your function under test is
working as it should. You will notice a new method called
assertAlmostEqual(), which takes the function you
are testing, passes a value to it, and checks the returned
value against an expected value. You can add a number
of tests to this function. This is what the test now looks
like with the additional code:

class Test_Area_of_Circle_input(unittest.TestCase):
    def test_area(self):
        # Test radius >= 0
        self.assertAlmostEqual(area_of_circle(1), pi)
        self.assertAlmostEqual(area_of_circle(0), 0)
        self.assertAlmostEqual(area_of_circle(3.5), pi * 3.5**2)

You can go to the directory where these two scripts reside and enter python -m unittest test_areacircle.py to run the test. If you don’t want to type all that, you can add the following to the bottom of the test_areacircle.py script to allow the unittest module to be launched when you run the test script:

if __name__ == '__main__':
    unittest.main()

All this does is check to see if the script is being run directly (because the __main__ special case is an attribute for all Python scripts run from the command line) and call the unittest.main() function. After executing the script, you should see the following results:

.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

The dot at the top shows that 1 test ran (even though you
had multiple checks in the same function) to determine
whether the values submitted produced an error. Since
all are valid for the function, the unit test came back
successfully.
Now you can check to see if a negative number causes a
problem. Create a new function under your previous
test_area function. Name this function test_values.
(The test at the beginning is required, or unittest will
ignore the function and not check it.) You can use the
assertRaises check, which will be looking for a
ValueError exception for the function
area_of_circle, and pass it a value of -1. The following
function can be added to your code:

def test_values(self):
    # Test that bad values are caught
    self.assertRaises(ValueError, area_of_circle, -1)

Example 5-9 shows the output of the test with this additional check.

Example 5-9 Output from Adding a New Test That Fails

.F
======================================================================
FAIL: test_values (__main__.Test_Area_of_Circle_input)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/chrijack/Documents/ccnadevnet/test_areacircle.py", line 14, in test_values
    self.assertRaises(ValueError, area_of_circle, -1)
AssertionError: ValueError not raised by area_of_circle

----------------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (failures=1)

The first check is still good, so you see one dot at the top, but next to it is a big F for fail. You get a message saying that the test_values function is where it failed, and you see that your original function did not catch this error. This means that the code is giving bad results. A radius of -1 is not possible, but the function gives you the following output:

>>> area_of_circle(-1)
3.141592653589793

To fix this, you go back to your original function and add some error-checking code. You use a simple if statement to check for a negative number, and you raise a ValueError with a message to the user about the invalid input:

from math import pi

def area_of_circle(r):
    if r < 0:
        raise ValueError('Negative radius value error')
    return pi*(r**2)

Now when you try the function from the interpreter, you see an error raised:

>>> area_of_circle(-1)
Traceback (most recent call last):
  File "<pyshell>", line 1, in <module>
  File "/Users/chrijack/Documents/ccnadevnet/areacircle.py", line 5, in area_of_circle
    raise ValueError('Negative radius value error')
ValueError: Negative radius value error

If you rerun the unit test, you see that it now passes the new check because an error is raised:

..
----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK

This simple example barely scratches the surface of how you can use unit testing to check your software, but it does show you how a unit test is constructed and, more importantly, what it does to help you construct resilient code. Many more tests can be conducted; see the documentation at https://docs.python.org/3/library/unittest.html.

EXAM PREPARATION TASKS

As mentioned in the section “How to Use This Book” in the Introduction, you have a couple of choices for exam preparation: the exercises here, Chapter 19, "Final Preparation," and the exam simulation questions on the companion website.
REVIEW ALL KEY TOPICS
Review the most important topics in this chapter, noted with the Key Topic icon in the outer margin of the page. Table 5-2 lists these key topics and the page number on which each is found.

Table 5-2 Key Topics

Key Topic Element   Description                                 Page Number
Section             JavaScript Object Notation (JSON)           113
Section             Extensible Markup Language (XML)            115
Section             Error Handling in Python                    119
Steps               Test-driven development                     121
Paragraph           Unit test in Python                         123
Example 5-9         Output from Adding a New Test That Fails    125
Paragraph           Fixing a test that fails                    125

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

test-driven development (TDD)
unit test
integration test
functional test

ADDITIONAL RESOURCES
Reading Data from a File in Python:
https://developer.cisco.com/learning/lab/coding-204-reading-a-file/step/1

Useful Python Libraries for Network Engineers:
https://www.youtube.com/watch?v=Y4vfA11fPo0

Python unittest Module—How to Test Your Python Code?
https://saralgyaan.com/posts/python-unittest-module-how-to-test-your-python-code/
Chapter 6

Application Programming Interfaces (APIs)

This chapter covers the following topics:

Application Programming Interfaces (APIs): This section describes what APIs are and what they are used for.

Representational State Transfer (REST) APIs: This section provides a high-level overview of RESTful APIs, how they function, and the benefits of using them.

RESTful API Authentication: This section covers various aspects of API authentication methods and the importance of API security.

Simple Object Access Protocol (SOAP): This section examines SOAP and common examples of when and where this protocol is used.

Remote-Procedure Calls (RPCs): This section provides a high-level overview of RPCs, why they are used, and the components involved.

Software developers use application programming interfaces (APIs) to communicate with and configure networks. APIs are used to communicate with applications and other software. They are also used to communicate with various components of a network through software. You can use APIs to configure or monitor specific components of a network, and there are multiple different types of APIs. This chapter focuses on two of the most common APIs: northbound and southbound APIs. This chapter explains the differences between these types of APIs through the lens of network automation.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 6-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quiz Questions.”

Table 6-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section                      Questions
Application Programming Interfaces (APIs)      1, 2
Representational State Transfer (REST) APIs    3
RESTful API Authentication                     4
Simple Object Access Protocol (SOAP)           5, 6
Remote-Procedure Calls (RPCs)                  7

Caution
The goal of self-assessment is to gauge your mastery of
the topics in this chapter. If you do not know the
answer to a question or are only partially sure of the
answer, you should mark that question as wrong for
purposes of self-assessment. Giving yourself credit for
an answer that you correctly guess skews your self-
assessment results and might provide you with a false
sense of security.

1. Which of the following is a sample use case of a southbound API?
1. Pushing network configuration changes down to devices
2. Increasing security
3. Streaming telemetry
4. Sending information to the cloud

2. What are some benefits of using asynchronous APIs? (Choose two.)
1. Not having to wait on a response to process data
2. Reduced processing time
3. Increased processing time
4. Data function reuse

3. What are the HTTP functions used for API communication? (Choose three.)
1. GET
2. SOURCE
3. PURGE
4. PATCH
5. PUT

4. True or false: RESTful API authentication can use API keys or custom tokens.
1. True
2. False

5. What does SOAP stand for?
1. Software Operations and Procedures
2. Software Operations Authentication Protocol
3. Simple Object Access Protocol
4. Simple Operations Automation Platform
5. Support Object Abstract Protocol

6. What are the main components of SOAP messages? (Choose all that apply.)
1. Envelope
2. Header
3. Destination
4. Body
5. Fault
6. Authentication
7. Source

7. Remote-procedure calls (RPCs) behave similarly to which of the following?
1. Synchronous API
2. Asynchronous API
FOUNDATION TOPICS
APPLICATION PROGRAMMING
INTERFACES (APIS)
For communicating with and configuring networks,
software developers commonly use application
programming interfaces (APIs). APIs are mechanisms
used to communicate with applications and other
software. They are also used to communicate with
various components of a network through software. A
developer can use APIs to configure or monitor specific
components of a network. Although there are multiple
different types of APIs, this chapter focuses on two of the
most common APIs: northbound and southbound APIs.
The following sections explain the differences between
these two API types through the lens of network
automation, and Figure 6-1 illustrates the typical basic
operations of northbound and southbound APIs.

Figure 6-1 Basic API Operations


Northbound APIs
Northbound APIs are often used for communication
from a network controller to its management software.
For example, Cisco DNA Center has a software graphical
user interface (GUI) that is used to manage its own
network controller. Typically, when a network operator
logs into a controller to manage the network, the
information that is passed to the management software
leverages a northbound REST-based API. Best practices
suggest that the traffic should be encrypted using TLS
between the software and the controller. Most types of
APIs have the ability to use encryption to secure the data
in flight.

Note
RESTful APIs are covered in an upcoming section of
this chapter and in depth in Chapter 7, “RESTful API
Requests and Responses.”

Southbound APIs
If a network operator makes a change to a switch’s
configuration in the management software of the
controller, those changes will then be pushed down to
the individual devices using a southbound API. These
devices can be routers, switches, or even wireless access
points. APIs interact with the components of a network
through the use of a programmatic interface.
Southbound APIs can modify more than just the data
plane on a device.

Synchronous Versus Asynchronous APIs

APIs can handle transactions either in a synchronous manner or an asynchronous manner. A synchronous API causes an application to wait for a response from the API in order to continue processing data or function normally. This can lead to interruptions in application processing, as delayed or failed responses could cause the application to hang or stop performing the way it was intended to work. This might occur, for example, if an application relies on some piece of information to be retrieved from another API before it can continue functioning. For example, uploading videos to YouTube was originally a synchronous use case. While the videos were uploading, users couldn’t use the rest of the GUI or change the names of the videos or make other changes. Users had to wait until the process completed prior to doing any other work within the YouTube application. Figure 6-2 provides an example of a synchronous process.

Figure 6-2 Synchronous API Call Example

Asynchronous APIs do exactly the opposite of synchronous APIs in that they do not wait until a
response is received prior to continuing to function and
process other aspects of data. Asynchronous APIs
provide a callback function so that the API response can
be sent back at another time, without the application
having to wait for the entire transaction to complete. As
an example of an asynchronous API, today you can
upload a video to YouTube, and while it’s uploading,
users can change the title, add hashtags, and even
retrieve the URL to which the video will be posted once it
is finished being uploaded.

In summary, the main difference between synchronous and asynchronous APIs is that a synchronous API waits
for other aspects of the code to complete prior to moving
on and processing additional code. An asynchronous
API, on the other hand, provides the ability to continue
processing code and provides a callback so that an
application doesn’t have to wait for the API call to
complete before it can continue processing other API
calls. An asynchronous API provides a better user
experience as the users do not have to wait on certain
aspects of information to be received prior to being able
to use the application for other things. Figure 6-3
illustrates an asynchronous API call and how the
application can continue processing while waiting for the
response from the initial API call.

Figure 6-3 Asynchronous API Call Example
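The difference can be sketched in a few lines of Python. Here, slow_api() is a stand-in for any long-running API call (such as the video upload described above); the names are illustrative and not from any real library.

```python
# Minimal sketch contrasting synchronous and asynchronous API styles.
# slow_api() simulates any API that takes time to respond.
import threading
import time

def slow_api(data):
    time.sleep(0.1)                     # simulate network latency
    return f"processed {data}"

def call_sync():
    # Synchronous: the caller blocks here until the response arrives.
    return slow_api("video.mp4")

def call_async(callback):
    # Asynchronous: register a callback and return immediately so the
    # caller is free to keep working.
    def worker():
        callback(slow_api("video.mp4"))
    t = threading.Thread(target=worker)
    t.start()
    return t

results = []
t = call_async(results.append)
# ... the application can keep processing here (update titles, etc.) ...
t.join()                                # wait only when the result is needed
```

The callback pattern is the key point: the asynchronous caller hands over a function to be invoked later, rather than blocking on the response.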

Representational State Transfer (REST) APIs


An API that uses REST is often referred to as a RESTful
API. RESTful APIs use HTTP methods to gather and
manipulate data. Because there is a defined structure for
how HTTP works, HTTP offers a consistent way to
interact with APIs from multiple vendors. REST uses
different HTTP functions to interact with data. Table 6-2
lists some of the most common HTTP functions and their
associated use cases.

HTTP functions are very similar to the functions that most applications and databases use to store or alter
data, whether it is stored in a database or within an
application. These functions are called CRUD functions;
CRUD is an acronym that stands for CREATE, READ,
UPDATE, and DELETE. For example, in an SQL
database, the CRUD functions are used to interact with
or manipulate the data stored in the database. Table 6-3
lists the CRUD functions and their associated actions
and use cases.

Table 6-2 HTTP Functions and Sample Use Cases

HTTP Function   Action                                     Use Case

GET             Requests data from a destination           Viewing a website

POST            Submits data to a specific destination     Submitting login credentials

PUT             Replaces data at a specific destination    Updating an NTP server

PATCH           Appends data to a specific destination     Adding an NTP server

DELETE          Removes data from a specific destination   Removing an NTP server
Table 6-3 CRUD Functions and Sample Use Cases

CRUD Function   Action                                    Use Case

CREATE          Inserts data inside a database or an      Creating a customer’s home
                application                               address in a database

READ            Retrieves data from a database or an      Pulling up a customer’s home
                application                               address from a database

UPDATE          Modifies or replaces data in a            Changing a street address
                database or an application                stored in a database

DELETE          Removes data from a database or an        Removing a customer from a
                application                               database
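As a minimal illustration of the CRUD operations in Table 6-3, the sketch below implements them against a Python dictionary standing in for a database. The function names and data are illustrative only; a real application would issue SQL statements or REST calls instead.

```python
# Toy in-memory "database" demonstrating the four CRUD operations,
# annotated with the HTTP methods they commonly map onto.
customers = {}

def create(customer_id, address):       # CREATE (HTTP POST)
    customers[customer_id] = address

def read(customer_id):                  # READ (HTTP GET)
    return customers.get(customer_id)

def update(customer_id, address):       # UPDATE (HTTP PUT/PATCH)
    customers[customer_id] = address

def delete(customer_id):                # DELETE (HTTP DELETE)
    customers.pop(customer_id, None)

create("cust1", "123 Main Street")      # create a customer's home address
update("cust1", "456 Oak Avenue")       # change the street address
```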

Whether you are trying to learn how APIs interact with applications or controllers, test code and outcomes, or
become a full-time developer, one of the most important
pieces of interacting with any software via APIs is
testing. Testing code helps ensure that developers are
accomplishing the desired outcome. This chapter covers
some tools and resources that make it possible to
practice using APIs and REST functions and hone your
development skills in order to become a more efficient
network engineer with coding skills.

Note
Chapter 7 provides more detail on HTTP and CRUD
functions as well as response codes.

RESTful API Authentication


As mentioned earlier in this chapter, it is important to be
able to interact with a software controller using RESTful
APIs and to be able to test code to see if the desired
outcomes are accomplished when executing the code.
Keep in mind that APIs are software interfaces into an
application or a controller. Many APIs require
authentication; such APIs are just like devices in that the
user needs to authenticate to gain access to utilize the
APIs. Once a user has authenticated, any changes that a
developer has access to make via the API are then able to
impact the application. This means if a RESTful API call
is used to delete data, that data will be removed from the
application or controller just as if a user were logged into
the device via the CLI and deleted it. It is best practice to
use a test lab or a Cisco DevNet sandbox while learning
or practicing API concepts to prevent accidental impacts
in a production or lab environment.

Note
Cisco DevNet is covered in Chapter 1, “Introduction to
Cisco DevNet Associate Certification.”

Basic Authentication
Basic authentication, illustrated in Figure 6-4, is one of
the simplest and most common authentication methods
used in APIs. The downfall of basic authentication is that
the credentials are passed unencrypted. This means that
if the transport is simple HTTP, it is possible to sniff the
traffic and capture the username and password with little
to no effort. The lack of encryption means that the
credentials are sent in plaintext Base64 encoding in
the HTTP header. However, basic authentication is more
commonly used with SSL or TLS to prevent such attacks.

Figure 6-4 Basic Authentication Example

Another big issue with basic authentication is that the password is sent back and forth with each request, which
increases the opportunity for an attacker to capture the
traffic containing the password. This is yet another
reason to use encryption on this type of transaction.
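A short sketch shows how the Basic authentication header is constructed and why unencrypted transport is dangerous: Base64 is a reversible encoding, not encryption. The credentials below are placeholder values.

```python
# Building the HTTP Basic Authorization header by hand.
# Always send this over TLS; Base64 is trivially reversible.
import base64

username, password = "admin", "example-password"        # placeholders
encoded = base64.b64encode(f"{username}:{password}".encode()).decode()
auth_header = {"Authorization": f"Basic {encoded}"}

# Anyone sniffing unencrypted traffic can recover the credentials:
recovered = base64.b64decode(encoded).decode()
```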

API Keys
Some APIs use API keys for authentication. An API key
is a predetermined string that is passed from the client to
the server. It is intended to be a pre-shared secret and
should not be well known or easy to guess because it
functions just like a password. Anyone with this key can
access the API in question and can potentially cause a
major outage and gain access to critical or sensitive data.
An API key can be passed to the server in three different
ways:

String

Request header

Cookie

Example 6-1 provides an example of a string-based API key. This type of API key is sent with every API call and is
often used as a one-off method of authentication. When
you’re looking to do multiple API calls, it isn’t convenient
to manually enter the API key string every time. This is
where the request header or cookie options come into
play.

Example 6-1 String-Based API Key Example



GET /something?api_key=abcdef12345

Request headers are frequently used when a user is making multiple API calls and doesn’t want to keep
having to put the API key into each API individually. This
approach is typically seen in Postman and Python
scripts. The header must include the string or token in
the header of each API call. Example 6-2 shows the
request header option for API key authentication.

Example 6-2 Request Header API Key Example

GET /something HTTP/1.1
X-API-Key: abcdef12345

Finally, one of the most common methods for recurring API calls is to use cookies. A cookie stores the API key
string and can be reused and stored over and over. This
is synonymous with a header. Example 6-3 shows an API
key cookie that uses the same key as the previous
examples.

Example 6-3 Cookie API Key Example



GET /something HTTP/1.1
Cookie: X-API-KEY=abcdef12345
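The three placements can be sketched with Python's standard library. The host below is a placeholder, the key is the sample value from the examples, and no request is actually sent.

```python
# Building requests that carry an API key in each of the three places
# shown in Examples 6-1 through 6-3 (query string, header, cookie).
from urllib.request import Request

api_key = "abcdef12345"

# 1. Query-string placement (Example 6-1)
string_req = Request(f"https://api.example.com/something?api_key={api_key}")

# 2. Request-header placement (Example 6-2)
header_req = Request("https://api.example.com/something",
                     headers={"X-API-Key": api_key})

# 3. Cookie placement (Example 6-3)
cookie_req = Request("https://api.example.com/something",
                     headers={"Cookie": f"X-API-KEY={api_key}"})
```

Note that urllib normalizes header names internally (for example, "X-API-Key" is stored as "X-api-key"); the header still reaches the server with the key intact.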

Note
Later chapters provide detailed examples of the
authentication methods introduced in this chapter.

Custom Tokens
A custom token allows a user to enter his or her
username and password once and receive a unique auto-
generated and encrypted token. The user can then use
this token to access protected pages or resources instead
of having to continuously enter the login credentials.
Tokens can be time bound and set to expire after a
specific amount of time has passed, thus forcing users to
reauthenticate by reentering their credentials. A token is
designed to show proof that a user has previously
authenticated. It simplifies the login process and reduces
the number of times a user has to provide login
credentials. A token is stored in the user’s browser and
gets checked each time the user tries to access
information requiring authentication. Once the user logs
out of the web browser or website, the token is destroyed
so it cannot be compromised. Figure 6-5 provides an
overview of token-based authentication between a client
and a server.
Figure 6-5 Token-Based Authentication Example
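The flow in Figure 6-5 can be sketched as a toy client/server exchange. Everything here is illustrative: the credential check is omitted and the functions are stand-ins for a real controller, but the sketch shows the one-time login, the auto-generated token, and the expiry check.

```python
# Toy token-based authentication flow: log in once, then reuse the token.
import secrets
import time

TOKENS = {}                     # server-side token store
TOKEN_LIFETIME = 3600           # tokens expire after one hour

def login(username, password):
    # A real server would first verify the credentials against a user DB.
    token = secrets.token_hex(16)       # unique auto-generated token
    TOKENS[token] = {"user": username,
                     "expires": time.time() + TOKEN_LIFETIME}
    return token

def request_protected_page(token):
    entry = TOKENS.get(token)
    if entry is None or time.time() > entry["expires"]:
        return "401 Unauthorized"       # forces reauthentication
    return f"200 OK: welcome back, {entry['user']}"

tok = login("devnet", "student")        # credentials sent only once
page = request_protected_page(tok)      # token reused thereafter
```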

Simple Object Access Protocol (SOAP)

Simple Object Access Protocol (SOAP) is used to access web services. Although HTTP is the most commonly
deployed transport for SOAP, SOAP can use either
Simple Mail Transfer Protocol (SMTP) or HTTP. SOAP is
used to exchange data between applications that were
built on different programming languages, such as Java,
.NET, and PHP. SOAP greatly simplifies the life of a
developer, eliminating the need to know how to develop
in each of these specific programming languages. It
makes it possible to exchange data between these
applications in a more simplified manner, without
requiring a developer to be expert in all the different
languages. SOAP is based on XML. Because most
programming languages today have libraries for working
with XML, SOAP can act as an intermediary specification
between the different applications.

SOAP uses XML to communicate between web services and clients. Because SOAP is platform and operating
system independent, it can work with both Windows and
Linux platforms. SOAP messages, which typically consist
of the following four main components, are sent between
the web applications and the clients (see Figure 6-6):

Envelope

Header

Body

Fault (optional)

Figure 6-6 SOAP Message Format

The SOAP envelope encloses the XML data and identifies it as a SOAP message. The envelope indicates the
beginning and the end of the SOAP message. The next
portion of a SOAP message is the SOAP header, and it
can contain multiple header blocks. Header blocks are
targeted to specific SOAP receiver nodes. If a SOAP
message contains a header, it must come before the body
element. The SOAP body contains the actual message
that is designated for the SOAP receiver. Every SOAP
envelope must contain at least one body element.
Typically, SOAP messages are automatically generated
by the web service when it’s called by the client. Figure 6-7 illustrates the high-level communication that occurs
between a client and a server or web service.

Figure 6-7 High-Level SOAP Communication

Another potentially beneficial aspect of SOAP is that because it primarily uses HTTP, it is efficient in passing
through firewalls without requiring that any additional
ports be allowed or open for the web service traffic to be
permitted. This can save time and reduce some
operational overhead. To reiterate, the benefit of SOAP is
its capability to work between different languages while
using a simple common HTTP and XML structure.
Example 6-4 shows a sample SOAP message that is being
used to leverage an HTTP GET to retrieve the price for
Cisco’s stock, using the ticker symbol CSCO.

Example 6-4 Sample SOAP Message



POST /InStock HTTP/1.1
Host: www.example.org
Content-Type: application/soap+xml; charset=utf-8
Content-Length: 299
SOAPAction: "http://www.w3.org/2003/05/soap-envelope"

<?xml version="1.0"?>
<soap:Envelope
xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
xmlns:m="http://www.example.org">
<soap:Header>
</soap:Header>
<soap:Body>
<m:GetStockPrice>
<m:StockName>CSCO</m:StockName>
</m:GetStockPrice>
</soap:Body>
</soap:Envelope>
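Because SOAP is plain XML, the envelope in Example 6-4 can be built and parsed with Python's standard library. This is only a sketch: the namespace URIs and stock symbol come from the example above, and no request is actually sent.

```python
# Building (and parsing back) the GetStockPrice SOAP envelope with the
# standard library. The bytes produced would go in the POST body.
import xml.etree.ElementTree as ET

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"
M_NS = "http://www.example.org"

ET.register_namespace("soap", SOAP_NS)
ET.register_namespace("m", M_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
call = ET.SubElement(body, f"{{{M_NS}}}GetStockPrice")
ET.SubElement(call, f"{{{M_NS}}}StockName").text = "CSCO"

xml_bytes = ET.tostring(envelope)

# A receiver would parse the symbol back out the same way:
parsed = ET.fromstring(xml_bytes)
symbol = parsed.find(f".//{{{M_NS}}}StockName").text
```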

The World Wide Web Consortium (W3C) recommends using the current SOAP specification, version 1.2. The
previous version of SOAP is 1.1. Although it is possible
for a SOAP node to support both versions of the protocol,
a protocol binding must be used for the version of SOAP
that the client intends to use.

As mentioned briefly at the beginning of this section, SOAP messages can include an optional fault element. Table 6-4 lists the elements of a SOAP fault message and indicates which of them are optional.

Table 6-4 SOAP Fault Options

Fault Element   Description                            Optional

faultCode       Specifies the fault code of an error   No

faultString     Describes an error                     No

faultActor      Specifies who caused a fault           Yes

detail          Applies specific error messages        Yes

Table 6-4 shows the options available in SOAP version 1.2. faultString provides a description of an error
message that is generated. This is not an optional field;
rather, it is mandatory in the communications between
the client and the web service. faultActor specifies
which node caused a fault. Although this field would
provide some additional information related to who
caused the fault at hand, this field is optional. The detail
element provides application-specific error messages;
that is, this element is populated with information
provided by the application. SOAP fault messages can
contain a variety of faultCode options, which indicate
what errors were generated as well as potentially who
caused each error (the sender or receiver). Table 6-5 lists
the available SOAP fault codes and their associated use
cases.

Table 6-5 SOAP Fault Codes

SOAP Fault Code       Description

VersionMismatch       The faulting node found an invalid element information
                      item instead of the expected envelope element
                      information item. The namespace, local name, or both
                      did not match the envelope element information item
                      required by this recommendation.

MustUnderstand        This is a child element of the SOAP header. If this
                      attribute is set, any information that was not
                      understood triggers this fault code.

DataEncodingUnknown   A SOAP header block or SOAP body child element
                      information item targeted at the faulting SOAP node is
                      scoped.

Sender                The message was incorrectly formed or did not contain
                      the information needed to succeed. For example, the
                      message might have lacked the proper authentication or
                      payment information. This code generally indicates
                      that the message is not to be resent without change.

Receiver              The message could not be processed for reasons
                      attributable to the processing of the message rather
                      than to the contents of the message itself. For
                      example, processing could include communicating with
                      an upstream SOAP node, which did not respond. The
                      message could succeed, however, if resent at a later
                      point in time.

The fault message shown in Example 6-5 was generated
because the Detail value wasn’t interpreted correctly due
to the typo in the XML <m:MaxTime>P5M</m:MaxTime>.
The value P5M caused the issue in this case because the
code was expecting it to be 5PM. The XML code and value
should be <m:MaxTime>5PM</m:MaxTime> in this case.

Example 6-5 Sample SOAP Fault



<env:Envelope
xmlns:env="http://www.w3.org/2003/05/soap-envelope"
xmlns:m="http://www.example.org/timeouts"
xmlns:xml="http://www.w3.org/XML/1998/namespace">

<env:Body>
<env:Fault>
<env:Code>
<env:Value>env:Sender</env:Value>
<env:Subcode>
<env:Value>m:MessageTimeout</env:Value>
</env:Subcode>
</env:Code>
<env:Reason>
<env:Text xml:lang="en">Sender Timeout</env:Text>
</env:Reason>
<env:Detail>
<m:MaxTime>P5M</m:MaxTime>
</env:Detail>
</env:Fault>
</env:Body>
</env:Envelope>

Note
The examples used in this chapter are all based on
SOAP version 1.2.

Remote-Procedure Calls (RPCs)


Remote-procedure calls (RPCs) make it possible to
execute code or a program on a remote node in a
network. RPCs behave as if the code were executed
locally on the same local node, even though the code is
executed on a remote address space, such as another
system on the network. Remote and local calls are very similar in nature; the main difference is where the called code actually executes. RPCs
are sometimes also known as function or subroutine
calls. Using an RPC is a very common way of executing
specific commands, such as executing GET or POST
operations to a set API or URL.

When a client sends a request message, the RPC translates it and then sends it to the server. A request
may be a procedure or a function call destined to a
remote server. When a server receives the request, it
sends back a response to the client. While this
communication is happening, the client is blocked,
allowing the server time to process the call. Once the call
is processed and a response has been sent back to the
client, the communication between the client and server
is unblocked so the client can resume executing the
procedure call. This can be considered a security
mechanism to prevent the flooding of RPCs to brute-
force the server and cause denial-of-service (DoS) attacks
or exhaustion of resources. Figure 6-8 showcases the
high-level RPC communications between a client and a
server.

Figure 6-8 High-Level RPC Communications


As mentioned earlier in this section, an RPC call is
blocked during the waiting periods. Once a procedure is
executed and the response is sent from the server and
received on the client, the execution of the procedure
continues. (This means that RPC calls are typically
synchronous. There are also asynchronous RPC calls, but
the focus of this section is on synchronous RPC calls.)

Now that the high-level communications of RPC have been covered, let’s look at an example of an RPC request
message. There are different versions of RPC messages.
However, the most common is XML-RPC; XML-RPC was
also the most common version prior to SOAP becoming
available. Example 6-6 shows a simple RPC call with
XML-RPC that uses a GET to retrieve the name of the
21st state added to the United States.

Example 6-6 Sample XML-RPC Request Message


<?xml version="1.0"?>
<methodCall>
<methodName>examples.getStateName</methodName>
<params>
<param>
<value><i4>21</i4></value>
</param>
</params>
</methodCall>

You can see in Example 6-6 that the format of XML is very similar to that of SOAP, making these messages
simple for humans to read and digest and also to build.
Example 6-7 shows an example of an XML-RPC reply or
response message, in which the response to the GET
message from Example 6-6 is Illinois.

Example 6-7 Sample XML-RPC Reply Message


<?xml version="1.0"?>
<methodResponse>
<params>
<param>
<value><string>Illinois</string>
</value>
</param>
</params>
</methodResponse>
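Python's standard xmlrpc.client module can generate and parse exactly these messages without touching the network, as the sketch below shows. (Note that dumps() marshals the integer with the <int> tag rather than the older <i4> alias shown in Example 6-6; the two are equivalent in XML-RPC.)

```python
# Generating an XML-RPC request like Example 6-6 and parsing a reply
# like Example 6-7, entirely offline.
import xmlrpc.client

# Build a request equivalent to Example 6-6.
request_xml = xmlrpc.client.dumps((21,),
                                  methodname="examples.getStateName")

# Parse a response equivalent to Example 6-7.
response_xml = """<?xml version="1.0"?>
<methodResponse>
  <params>
    <param><value><string>Illinois</string></value></param>
  </params>
</methodResponse>"""
params, method = xmlrpc.client.loads(response_xml)
state = params[0]
```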

EXAM PREPARATION TASKS


As mentioned in the section “How to Use This Book” in
the Introduction, you have a couple of choices for exam
preparation: the exercises here, Chapter 19, “Final
Preparation,” and the exam simulation questions on the
companion website.

REVIEW ALL KEY TOPICS


Review the most important topics in this chapter, noted
with the Key Topic icon in the outer margin of the page.
Table 6-6 lists these key topics and the page number on
which each is found.

Table 6-6 Key Topics for Chapter 6

Key Topic Element   Description                            Page Number

Section             Synchronous Versus Asynchronous APIs   131

Table 6-2           HTTP Functions and Sample Use Cases    133

Table 6-3           CRUD Functions and Sample Use Cases    133

Section             RESTful API Authentication             133

Paragraph           SOAP structure and components          136

DEFINE KEY TERMS


Define the following key terms from this chapter and
check your answers in the glossary:

Representational State Transfer (REST) APIs
synchronous API
asynchronous API
CRUD functions
API key
API token
Simple Object Access Protocol (SOAP)
remote-procedure call (RPC)
Chapter 7

RESTful API Requests and Responses
This chapter covers the following topics:
RESTful API Fundamentals: This section covers the basics of
RESTful APIs and details operations such as GET, POST, PUT, and
DELETE. Other topics include REST headers and data formats such as
XML, JSON, and YAML.

REST Constraints: This section covers the six architectural constraints of REST in detail.

REST Tools: This section covers sequence diagrams and tools such as
Postman, curl, HTTPie, and the Python Requests library that are used
to make basic REST calls.

Application programming interfaces (APIs) are the foundation of the new generation of software,
including networking, cloud, mobile, and Internet of
Things (IoT) software. APIs connect pieces of software
together, “gluing” together any required information
components around a system and enabling two pieces
of software to communicate with each other.

REST, which stands for Representational State Transfer, refers to a particular style of API building.
Most modern services and networking products today
rely on REST for their APIs simply because REST is
based on HTTP (which happens to be the protocol that
powers nearly all Internet connections). REST is
lightweight, flexible, and scalable, and its popularity
has been growing.

“DO I KNOW THIS ALREADY?” QUIZ


The “Do I Know This Already?” quiz allows you to assess
whether you should read this entire chapter thoroughly
or jump to the “Exam Preparation Tasks” section. If you
are in doubt about your answers to these questions or
your own assessment of your knowledge of the topics,
read the entire chapter. Table 7-1 lists the major
headings in this chapter and their corresponding “Do I
Know This Already?” quiz questions. You can find the
answers in Appendix A, “Answers to the ‘Do I Know This
Already?’ Quiz Questions.”

Table 7-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics SectionQuestions

RESTful API Fundamentals 1–5

REST Constraints 6, 7

REST Tools 8

Caution
The goal of self-assessment is to gauge your mastery of
the topics in this chapter. If you do not know the
answer to a question or are only partially sure of the
answer, you should mark that question as wrong for
purposes of self-assessment. Giving yourself credit for
an answer that you correctly guess skews your self-
assessment results and might provide you with a false
sense of security.

1. In HTTP, in order to make a successful GET request to the server, the client needs to include at least which of the following? (Choose two.)
1. URL
2. Method
3. Headers
4. Body

2. Which of the following is not an HTTP method?


1. GET
2. HEAD
3. TRIGGER
4. PATCH

3. Webhooks are like which of the following? (Choose two.)
1. Remote procedure calls
2. Callback functions
3. State-altering functions
4. Event-triggered notifications

4. Which response code indicates that a resource has moved?
1. 201
2. 301
3. 401
4. 501

5. Which of the following model the interactions between various objects in a single use case?
1. REST APIs
2. Sequence diagrams
3. Excel sheets
4. Venn diagrams

6. Which REST API architectural constraint allows you to download code and execute it?
1. Client/server
2. Statelessness
3. Code on demand
4. Layered systems

7. Rate limiting is an essential REST API design method for developers. Rate-limiting techniques are used to ______.
1. increase security
2. have business impact
3. enhance efficiency end to end
4. do all the above

8. To add HTTP headers to a Python request, you can simply pass them in as which of the following?
1. list
2. dict
3. tuple
4. set

FOUNDATION TOPICS
RESTFUL API FUNDAMENTALS
An application programming interface (API) is a
set of functions and procedures intended to be used as an
interface for software components to communicate with
each other. An API may be for a web app, an operating
system, a database system, computer hardware, or any
software library. A common example of an API is the
Google Maps API, which lets you interface with Google
Maps so that you can display maps in your application,
query locations, and so on. Figure 7-1 shows a simple
way to visualize an API.

Figure 7-1 APIs are a contract between two communicating applications

The following sections look at the different API types.

API Types
APIs can be broadly classified into three categories,
based on the type of work that each one provides:

Service API: In a service API, an application can call on another application to solve a particular problem (see Figure 7-2). Usually these
systems can exist independently. For example, in a payment system, an
application can call the API to accept payments via credit cards. As
another example, with a user-management system, an application can
call an API to validate and authenticate users.

Figure 7-2 Service API Providing a Complete Service to the Calling Application

Information API: An information API allows one application to ask another application for information. Information in this context can
refer to data gathered over time, telemetry data, or a list of devices that
are currently connected. Figure 7-3 provides a visual representation of
an information API.

Figure 7-3 Information API Providing Information or Analysis of Data That Has Been Collected

Hardware API: Application developers use hardware APIs to gain access to the features of hardware devices. Usually these APIs
encompass some kind of hardware or sensors, and an application can
call this kind of API to get the GPS location or real-time sensor data
such as temperature or humidity. Figure 7-4 provides a visual
representation of what a hardware API does.
Figure 7-4 Hardware API Providing Access to
Hardware in Order to Get or Set Data

API Access Types


There are typically three ways APIs can be accessed:

Private: A private API is for internal use only. This access type gives a
company the most control over its API.

Partner: A partner API is shared with specific business partners. This can provide additional revenue streams without compromising quality.

Public: A public API is available to everyone. This allows third parties to develop applications that interact with an API and can be a source
for innovation.

Regardless of how they are accessed, APIs are designed to interact through a network. Because the most widely
used communications network is the Internet, most APIs
are designed based on web standards. Not all remote
APIs are web APIs, but it’s fair to assume that web APIs
are remote.

Thanks to the ubiquity of HTTP on the web, most developers have adopted it as the protocol underlying
their APIs. The greatest benefit of using HTTP is that it
reduces the learning curve for developers, which
encourages use of an API. HTTP has several features that
are useful in building a good API, which will be apparent
as we start exploring the basics in the next section.

HTTP Basics
A web browser is a classic example of an HTTP client.
Communication in HTTP centers around a concept
called the request/response cycle, in which the client
sends the server a request to do something. The server,
in turn, sends the client a response saying whether or not
the server can do what the client asked. Figure 7-5
provides a very simple illustration of how a client
requests data from a server and how the server responds
to the client.

Figure 7-5 Simple HTTP Request/Response Cycle

Now let’s look at a request from the HTTP point of view, where the client (web browser) makes a request (GET /index.html) to the server (developer.cisco.com). The
server eventually responds to the client with the actual
HTML page, which then gets rendered by the browser.
Figure 7-6 provides a very simple representation of how
a client sends a GET request requesting the page from
the server and how the server responds with the HTML
page to the client.

Figure 7-6 Simple HTTP GET Request with 200 OK Response

In HTTP, in order to make a successful request to the
server, the client needs to include four items:

URL (uniform resource locator)

Method

List of headers

Body

The following sections look at each of these items in detail.

Uniform Resource Locator (URL)


A URL is similar to a house address in that it defines
the location where a service resides on the Internet. A
URL typically has four components, as shown in Figure
7-7:

Protocol

Server/host address

Resource

Parameters

Figure 7-7 Anatomy of an HTTP URL

As you can see in Figure 7-7, the server or host address is the unique
server name, /api/rooms/livingroom defines a resource
to access, and lights?state=ON is the parameter to send
in order to take some action.
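A quick way to see these four components is to dissect a URL of this shape with Python's standard library. The host name below is a placeholder; the resource and parameter match the example in the text.

```python
# Splitting a URL into the four components from Figure 7-7:
# protocol, server/host address, resource, and parameters.
from urllib.parse import urlparse, parse_qs

url = "http://myserver.com/api/rooms/livingroom/lights?state=ON"
parts = urlparse(url)

protocol = parts.scheme                  # "http"
host = parts.netloc                      # "myserver.com"
resource = parts.path                    # "/api/rooms/livingroom/lights"
params = parse_qs(parts.query)           # {"state": ["ON"]}
```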
Method
HTTP defines a set of request methods, outlined in Table
7-2. A client can use one of these request methods to
send a request message to an HTTP server.

Table 7-2 Request Methods

Method    Explanation

GET       A client can use a GET request to get a web resource from the
          server.

HEAD      A client can use a HEAD request to get the header that a GET
          request would have obtained. Because the header contains the
          last-modified date of the data, it can be used to check against
          the local cache copy.

POST      A client can use a POST request to post data or add new data to
          the server.

PUT       A client can use a PUT request to ask a server to store or
          update data.

PATCH     A client can use a PATCH request to ask a server to partially
          store or update data.

DELETE    A client can use a DELETE request to ask a server to delete
          data.

TRACE     A client can use a TRACE request to ask a server to return a
          diagnostic trace of the actions it takes.

OPTIONS   A client can use an OPTIONS request to ask a server to return a
          list of the request methods it supports.

CONNECT   A client can use a CONNECT request to tell a proxy to make a
          connection to another host and simply reply with the content,
          without attempting to parse or cache it. This request is often
          used to make an SSL connection through the proxy.

REST Methods and CRUD


As you have seen, REST is an architectural paradigm that
allows developers to build RESTful services. These
RESTful applications make use of HTTP requests for
handling all four CRUD operations: CREATE, READ,
UPDATE, and DELETE. These four operations are the
operations most commonly used in manipulating data.
The HTTP methods map in a one-to-one way to the
CRUD operations, as shown in Table 7-3.

Table 7-3 Mapping HTTP Methods to CRUD


Operations

HTTP Method   Operation   Explanation

POST          CREATE      Used to create a new object or resource.
                          Example: Add a new room to a house

GET           READ        Used to retrieve resource details from the
                          system.
                          Example: Get a list of all the rooms or all the
                          details of one room

PUT           UPDATE      Typically used to replace or update a resource.
                          Can be used to modify or create a resource.
                          Example: Update details of a room

PATCH         UPDATE      Used to modify some details about a resource.
                          Example: Change the dimensions of a room

DELETE        DELETE      Used to remove a resource from the system.
                          Example: Delete a room from a house

Deep Dive into GET and POST


GET is the most common HTTP request method. A client
can use the GET request method to request (or “get”) a
resource from an HTTP server. GET requests have a
special quality: they fetch information, and that’s it;
they have no side effects, make no modifications to the
system, create nothing, and destroy nothing. GET
requests should, in other words, be safe and idempotent.
(Idempotent means that no matter how many times you
perform an action, the state of the system you’re dealing
with remains the same.)

A GET request message has the following components, as shown in Figure 7-8:
Figure 7-8 Syntax of a GET Request

GET: The keyword GET must be all uppercase.

Request URI: Specifies the path of the resource requested, which must begin from the root / of the document base directory.

HTTP version: Either HTTP/1.0 or HTTP/1.1. The client negotiates the protocol to be used for the current session. For example, the client
may request to use HTTP/1.1. If the server does not support HTTP/1.1,
it may inform the client in the response to use HTTP/1.0.

Request headers (optional): The client can use optional request
headers (such as Accept and Accept-Language) to negotiate with the
server and ask the server to deliver the preferred content (such as in
the language the client prefers).

Request body (optional): A GET request message has an optional


request body, which contains the query string (explained later in this
chapter).

The POST request method is used to post additional data


to the server (for example, submitting HTML form data
or uploading a file). Issuing an HTTP URL from the
browser always triggers a GET request. To trigger a
POST request, you can use an HTML form with attribute
method=“post” or write your own code. For submitting
HTML form data, the POST request is the same as the
GET request except that the URL-encoded query string is
sent in the request body rather than appended behind
the request URI.

The POST request has the following components, as


shown in Figure 7-9:
Figure 7-9 Syntax of a POST Request

POST: The keyword POST must be all uppercase.

Request URI: Specifies the path of the resource requested, which


must begin from the root / of the document base directory.

HTTP version: Either HTTP/1.0 or HTTP/1.1. The client negotiates
the protocol version to be used for the current session. For example, the
client may request to use HTTP/1.1. If the server does not support
HTTP/1.1, it may inform the client in the response to use HTTP/1.0.

Request headers (optional): The client can use optional request
headers, such as Content-Type and Content-Length, to inform the server
of the media type and the length of the request body, respectively.

Request body (optional): A POST request message has an optional


request body, which contains the query string (explained later in this
chapter).

HTTP Headers
The HTTP headers and parameters provide a lot of
information that can help you trace issues when you
encounter them. HTTP headers are an essential part of
an API request and response as they represent the
metadata associated with the API request and response.
Headers carry information for the following:

Request and response body

Request authorization

Response caching

Response cookies
In addition, HTTP headers have information about
HTTP connection types, proxies, and so on. Most of
these headers are for managing connections between a
client, a server, and proxies.

Headers are classified as request headers and response
headers. You set the request headers when sending an API
request, and you check (or assert against) the response
headers to ensure that the correct headers are returned.

Request Headers
The request headers appear as name:value pairs.
Multiple values, separated by commas, can be specified
as follows:


request-header-name: request-header-value1,
request-header-value2, ...

The following are some examples of request headers:


Host: myhouse.cisco.com
Connection: Keep-Alive
Accept: image/gif, image/jpeg, */*
Accept-Language: us-en, fr, cn

Response Headers
The response headers also appear as name:value pairs.
As with request headers, multiple values can be specified
as follows:


response-header-name: response-header-value1,
response-header-value2, ...

The following are some examples of response headers:



Content-Type: text/html
Content-Length: 35
Connection: Keep-Alive
Keep-Alive: timeout=15, max=100
The response message body contains the resource
data requested.

The following are some examples of request and


response headers:

Authorization: Carries credentials containing the authentication


information of the client for the resource being requested.

WWW-Authenticate: This is sent by the server if it needs a form of


authentication before it can respond with the actual resource being
requested. It is often sent along with response code 401, which means
“unauthorized.”

Accept-Charset: This request header tells the server which character


sets are acceptable by the client.

Content-Type: This header indicates the media type (such as
text/html or application/json) of the request sent to the server by the
client, which helps the server process the request body correctly.

Cache-Control: This header is the cache policy defined by the server.


For this response, a cached response can be stored by the client and
reused until the time defined in the Cache-Control header.

Response Codes
The first line of a response message (that is, the status
line) contains the response status code, which the server
generates to indicate the outcome of the request. Each
status code is a three-digit number:

1xx (informational): The request was successfully received; the


server is continuing the process.

2xx (success): The request was successfully received, understood,


accepted, and serviced.

3xx (redirection): Further action must be taken to complete the


request.

4xx (client error): The request cannot be understood, is
unauthorized, or references a resource that could not be found.

5xx (server error): The server failed to fulfill a request.


Table 7-4 describes some commonly encountered status
codes.

Table 7-4 HTTP Status Codes

Status Code   Meaning                  Explanation

100           Continue                 The server received the request and is in
                                       the process of giving the response.

200           OK                       The request is fulfilled.

301           Moved permanently        The resource requested has been permanently
                                       moved to a new location. The URL of the new
                                       location is given in the Location response
                                       header. The client should issue a new
                                       request to the new location, and the
                                       application should update all references to
                                       this new location.

302           Found and redirect       This is the same as code 301, but the new
              (or move temporarily)    location is temporary in nature. The client
                                       should issue a new request, but applications
                                       need not update the references.

304           Not modified             In response to the If-Modified-Since
                                       conditional GET request, the server notifies
                                       that the resource requested has not been
                                       modified.

400           Bad request              The server could not interpret or understand
                                       the request; there is probably a syntax
                                       error in the request message.

401           Authentication required  The requested resource is protected and
                                       requires the client's credentials (username
                                       and password). The client should resubmit
                                       the request with the appropriate
                                       credentials.

403           Forbidden                The server refuses to supply the resource,
                                       regardless of the identity of the client.

404           Not found                The requested resource cannot be found on
                                       the server.

405           Method not allowed       The request method used (for example, POST,
                                       PUT, DELETE) is a valid method. However, the
                                       server does not allow that method for the
                                       resource requested.

408           Request timeout          The request sent to the server took longer
                                       than the website's server was prepared to
                                       wait.

414           Request URI too large    The URI requested by the client is longer
                                       than the server is willing to interpret.

500           Internal server error    The server is confused; this may be caused
                                       by an error in the server-side program
                                       responding to the request.

501           Method not implemented   The request method used is invalid; this
                                       could be caused by a typing error, such as
                                       Get in place of GET.

502           Bad gateway              The proxy or gateway indicates that it
                                       received a bad response from the upstream
                                       server.

503           Service unavailable      The server cannot respond due to overloading
                                       or maintenance. The client can try again
                                       later.

504           Gateway timeout          The proxy or gateway indicates that it
                                       received a timeout from an upstream server.
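Because the first digit alone identifies the response class, client code often branches on it rather than on every individual code. A minimal sketch:

```python
def classify_status(code: int) -> str:
    """Map an HTTP status code to its response class (1xx-5xx)."""
    classes = {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    return classes.get(code // 100, "unknown")

for code in (100, 200, 301, 404, 503):
    print(code, classify_status(code))
```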

Now that we have looked at HTTP methods and return
codes, let's consider the data that is sent or received
during a GET or POST and see how the same information
can be represented in the various data formats. For this
example, refer to Figure 7-7, in which you make a GET
request to the server at myhouse.com to change the state
of the lights in the living room to ON.

The data sent and received in a RESTful connection


requires structured data formatting. For the house
example, you now see a response from the server that
includes information about the house. Standard data
formats include XML, JSON, and YAML, which are
described in the following sections.

XML
Extensible Markup Language (XML) is a markup
language that encodes information between descriptive
tags. XML is related to Hypertext Markup Language
(HTML), which was originally designed to describe the
formatting of web pages served by servers through
HTTP; unlike HTML, however, XML uses user-defined
tags. The encoded information is defined within user-
defined schemas that enable any data to be transmitted
between systems. An entire XML document is stored as
text, and it is both machine readable and human
readable.

Example 7-1 shows a sample XML response document.


As you can see, with XML, you can assign some meaning
to the tags in the document. You can extract the various
attributes from the response by simply locating the
content surrounded by <study_room> and
</study_room>; this content is technically known as the
<study_room> element.

Example 7-1 XML Data Format


<?xml version="1.0" encoding="UTF-8" ?>


<root>
<home>this is my house</home>
<home>located in San Jose, CA</home>
<rooms>
<living_room>true</living_room>
<kitchen>false</kitchen>
<study_room>
<size>20x30</size>
</study_room>
<study_room>
<desk>true</desk>
</study_room>
<study_room>
<lights>On</lights>
</study_room>
</rooms>
</root>
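Because XML is machine readable, the document in Example 7-1 can be parsed with Python's built-in xml.etree.ElementTree module. This sketch extracts the lights status from the <study_room> elements:

```python
import xml.etree.ElementTree as ET

# The XML document from Example 7-1, embedded as a string.
xml_doc = """<?xml version="1.0" encoding="UTF-8" ?>
<root>
  <home>this is my house</home>
  <home>located in San Jose, CA</home>
  <rooms>
    <living_room>true</living_room>
    <kitchen>false</kitchen>
    <study_room><size>20x30</size></study_room>
    <study_room><desk>true</desk></study_room>
    <study_room><lights>On</lights></study_room>
  </rooms>
</root>"""

root = ET.fromstring(xml_doc)

# find() walks the tree by path and returns the first element
# matching the whole path, searching across all <study_room> siblings.
lights = root.find("rooms/study_room/lights")
print(lights.text)  # -> On
```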

JSON
JSON, short for JavaScript Object Notation, is
pronounced like the name "Jason." The JSON format is
derived from JavaScript object syntax, but it is entirely
text based. It is a key/value data format that is typically
rendered with curly braces {} and square brackets []. JSON
is readable and lightweight, and it is easy for humans to
understand.

A key/value pair has a colon (:) that separates the key


from the value, and each such pair is separated by a
comma in the document or the response.

JSON keys are valid strings. The value of a key is one of


the following data types:

String

Number

Object

Array

Boolean (true or false)

Null

Example 7-2 shows a sample JSON response document,


and you can see the full response. If you are interested in
seeing the status of the lights in the study_room, then
you look at all the values that are present and follow the
various key/value pairs (such as “lights”: “On”) and
extract the various values from the response by locating
the correct keys and corresponding values.

Example 7-2 JSON Data Format

{
"home": [
"this is my house",
"located in San Jose, CA"
],
"rooms": {
"living_room": "true",
"kitchen": "false",
"study_room": [
{
"size": "20x30"
},
{
"desk": true
},
{
"lights": "On"
}
]
}}
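The same key tracing can be done in code. This sketch loads the document from Example 7-2 with Python's built-in json module and follows the keys down to the lights value:

```python
import json

# The JSON document from Example 7-2, embedded as a string.
json_doc = """
{
  "home": ["this is my house", "located in San Jose, CA"],
  "rooms": {
    "living_room": "true",
    "kitchen": "false",
    "study_room": [
      {"size": "20x30"},
      {"desk": true},
      {"lights": "On"}
    ]
  }
}
"""

house = json.loads(json_doc)  # parses text into dicts, lists, strings, ...

# study_room is a list of single-key objects; find the one holding "lights".
lights = next(item["lights"] for item in house["rooms"]["study_room"]
              if "lights" in item)
print(lights)  # -> On
```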

YAML
YAML is an acronym that stands for “YAML Ain’t
Markup Language.” According to the official YAML site
(https://yaml.org), “YAML is a human-friendly data
serialization standard for all programming languages.”

YAML is a data serialization language designed for


human interaction. It’s a strict superset of JSON, another
data serialization language. But because it’s a strict
superset, it can do everything that JSON can do and
more. One significant difference is that newlines and
indentation mean something in YAML, whereas JSON
uses brackets and braces to convey similar ideas. YAML
uses three main data formats:

Scalars: The simplest of the three formats; a simple key/value view.

Lists/sequences: Data can be ordered by indexes.

Dictionary mappings: These are similar to scalars but can contain


nested data, including other data types.

Example 7-3 shows a sample YAML response document.


As you can see, the response is very straightforward and
human readable. If you are interested in seeing the status
of the lights in study_room, you find the study_room
section and then look for the value of lights.

Example 7-3 YAML Data Format

---
home:
- this is my house
- located in San Jose, CA
rooms:
living_room: 'true'
kitchen: 'false'
study_room:
- size: 20x30
- desk: true
- lights: 'On'
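Assuming the third-party PyYAML package is installed (it is not part of the standard library), the document in Example 7-3 loads into the same Python structures as the JSON version. Note that quoting 'On' keeps YAML from reading it as a Boolean:

```python
import yaml  # third-party PyYAML package, assumed installed

# The YAML document from Example 7-3, embedded as a string.
yaml_doc = """
home:
- this is my house
- located in San Jose, CA
rooms:
  living_room: 'true'
  kitchen: 'false'
  study_room:
  - size: 20x30
  - desk: true
  - lights: 'On'
"""

house = yaml.safe_load(yaml_doc)  # safe_load avoids executing arbitrary tags

# study_room is a list of one-key mappings, just as in the JSON example.
lights = next(item["lights"] for item in house["rooms"]["study_room"]
              if "lights" in item)
print(lights)  # -> On
```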

Webhooks
Webhooks are user-defined HTTP callbacks. A webhook
is triggered by an event, such as pushing code to a
repository or typing a keyword in a chat window. An
application implementing webhooks sends a POST
message to a URL when a specific event happens.
Webhooks are also referred to as reverse APIs, but
perhaps more accurately, a webhook lets you skip the
request step in the request/response cycle. No request is
required for a webhook, and a webhook sends data when
triggered.

For security reasons, the REST service may perform


some validation to determine whether the receiver is
valid. A simple validation handshake is one common way
of doing this.

The validation token is a unique token specified by the


server. Validation tokens can be generated or revoked on
the server side through the configuration UI. When the
server sends data to a webhook URL, it includes the
validation token in the request HTTP header. The
webhook endpoint should return the same validation
token value in the HTTP response header. In this way,
the server knows that it is sending to a validated
endpoint and not a rogue endpoint. Figure 7-10
illustrates the flow of webhooks.
Figure 7-10 Webhook Validation and Event Flow
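The receiving side of this handshake reduces to reading the token from the request headers and echoing it back. The sketch below captures just that logic; the X-Validation-Token header name and the handle_webhook function are illustrative, not taken from any particular webhook provider.

```python
def handle_webhook(request_headers: dict, expected_token: str):
    """Validate an incoming webhook call and build the response headers.

    Returns (status_code, response_headers). The header name is a made-up
    example; check your provider's documentation for the real one.
    """
    token = request_headers.get("X-Validation-Token")
    if token != expected_token:
        return 401, {}  # reject rogue or misconfigured senders
    # Echo the token back so the server knows this endpoint is valid.
    return 200, {"X-Validation-Token": token}

status, headers = handle_webhook(
    {"X-Validation-Token": "abc123", "Content-Type": "application/json"},
    expected_token="abc123",
)
print(status, headers)
```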

Tools Used When Developing with Webhooks


You will face a particular difficulty when developing an
application that consumes webhooks. When using a
public service that provides webhooks, you need a
publicly accessible URL to configure the webhook
service. Typically, you develop on localhost, and the rest
of the world has no access to your application, so how do
you test your webhooks? ngrok (http://ngrok.com) is a
free tool that allows you to tunnel from a public URL to
your application running locally.

Sequence Diagrams
Now that you understand the fundamentals of REST APIs
(requests, responses, and webhooks), authentication, data
exchange, and the constraints that go with REST APIs,
it's time to introduce sequence diagrams. A sequence
diagram models the interactions between various objects
in a single use case. It illustrates how the different parts
of a system interact with each other to carry out a
function and the order in which the interactions occur
when a particular use case is executed. In simpler terms,
a sequence diagram shows how different parts of a
system work in a sequence to get something done.
Figure 7-11 is a sequence diagram for the example we've
been looking at, where a user wants to get a list of all the
rooms in the house. For this example, assume that there
is a web application with a user interface that renders the
list of all the rooms and the various attributes of the
rooms.

Figure 7-11 Sequence Diagram Showing End-to-End


Flow

The sequence of events that occur is as follows:

1. The client browser points to http://myhouse.cisco.com/ (the


HTTP GET request sent), which is the web application.
2. The server sends a REST API request (/API/getallrooms) to the
back-end service to get all the details of the house.
3. The back-end API service returns data in JSON format.
4. The web application processes the JSON and renders the data in
the user interface.
5. The client sees the data.

REST CONSTRAINTS
REST defines six architectural constraints that make any
web service a truly RESTful API. These constraints are
also known as Fielding's constraints (see
https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm).
They generalize the web's architectural principles and
represent them as a framework of constraints, or an
architectural style. These are the REST
constraints:

Client/server

Stateless

Cache

Uniform interface

Layered system

Code on demand

The following sections discuss these constraints in some


detail.

Client/Server
The client and server exist independently. They must
have no dependency of any sort on each other. The only
information needed is for the client to know the resource
URIs on the server. The interaction between them is only
in the form of requests initiated by the client and
responses that the server sends to the client in response
to requests. The client/server constraint encourages
separation of concerns between the client and the server
and allows them to evolve independently.

Stateless
REST services have to be stateless. Each individual
request contains all the information the server needs to
perform the request and return a response, regardless of
other requests made by the same API user. The server
should not need any additional information from
previous requests to fulfill the current request. The URI
identifies the resource, and the body contains the state of
the resource. A stateless service is easy to scale
horizontally, allowing additional servers to be added or
removed as necessary without worry about routing
subsequent requests to the same server. The servers can
be further load balanced as necessary.

Cache
With REST services, response data must be implicitly or
explicitly labeled as cacheable or non-cacheable. The
service indicates the duration for which the response is
valid. Caching helps improve performance on the client
side and scalability on the server side. If the client has
access to a valid cached response for a given request, it
avoids repeating the same request. Instead, it uses its
cached copy. This helps alleviate some of the server’s
work and thus contributes to scalability and
performance.

Note
GET requests should be cacheable by default. Usually
browsers treat all GET requests as cacheable.
POST requests are not cacheable by default but can be
made cacheable by adding either an Expires header or
a Cache-Control header to the response.
PUT and DELETE are not cacheable at all.

Uniform Interface
The uniform interface is a contract for communication
between a client and a server. It is achieved through four
subconstraints:

Identification of resources: As we saw earlier in the chapter,


resources are uniquely identified by URIs. These identifiers are stable
and do not change across interactions, even when the resource state
changes.

Manipulation of resources through representations: A client


manipulates resources by sending new representations of the resource
to the service. The server controls the resource representation and can
accept or reject the new resource representation sent by the client.

Self-descriptive messages: REST request and response messages


contain all information needed for the service and the client to interpret
the message and handle it appropriately. The messages are quite
verbose and include the method, the protocol used, and the content
type. This enables each message to be independent.

Hypermedia as the Engine of Application State (HATEOAS):


Hypermedia connects resources to each other and describes their
capabilities in machine-readable ways. Hypermedia refers to the
hyperlinks, or simply links, that the server can include in the response.
Hypermedia is a way for a server to tell a client what HTTP requests the
client might want to make in the future.

Layered System
A layered system further builds on the concept of
client/server architecture. A layered system indicates
that there can be more components than just the client
and the server, and each system can have additional
layers in it. These layers should be easy to add, remove,
or change. Proxies, load balancers, and so on are
examples of additional layers.

Code on Demand
Code on demand is an optional constraint that gives the
client flexibility by allowing it to download code. The
client can request code from the server, and then the
response from the server will contain some code, usually
in the form of a script, when the response is in HTML
format. The client can then execute that code.

REST API Versioning


Versioning is a crucial part of API design. It gives
developers the ability to improve an API without
breaking the client’s applications when new updates are
rolled out. Four strategies are commonly employed with
API versioning:

URI path versioning: In this strategy, the version number of the API
is included in the URL path.

Query parameter versioning: In this strategy, the version number


is sent as a query parameter in the URL.

Custom headers: REST APIs are versioned by providing custom


headers with the version number included as an attribute. The main
difference between this approach and the two previous ones is that it
doesn’t clutter the URI with versioning information.
Content negotiation: This strategy allows you to version a single
resource representation instead of versioning an entire API, which
means it gives you more granular control over versioning. Another
advantage of this approach is that it doesn’t require you to implement
URI routing rules, which are introduced by versioning through the URI
path.

Pagination

When a request is made to get a list, it is almost never a


good idea to return all resources at once. This is where a
pagination mechanism comes into play. There are two
popular approaches to pagination:

Offset-based pagination

Keyset-based pagination, also known as continuation token or cursor


pagination (recommended)

A really simple approach to offset-based pagination is to


use the parameters offset and limit, which are well
known from databases.

Example 7-4 shows how query parameters are passed in


the URI in order to get data based on offset and to limit
the number of results returned.

Example 7-4 Pagination: Offset and Limit



GET /devices?offset=100&limit=15
# returns the devices between 100-115

Usually, if the parameters are not specified, default
values are used. Never return all resources. One rule of
thumb is to choose the default limit based on the
retrieval performance of your backing data store.

Example 7-5 shows a URI where no parameters are


passed, which results in the default number of results.
Example 7-5 Pagination: No Parameters Yields the
Default

GET /devices
# returns the devices 0 to 200

Note that the data returned by the service usually has


links to the next and the previous pages, as shown in
Example 7-6.

Example 7-6 Pagination Response Containing Links


GET /devices?offset=100&limit=10
{
  "pagination": {
    "offset": 100,
    "limit": 10,
    "total": 220
  },
  "device": [
    //...
  ],
  "links": {
    "next": "http://myhouse.cisco.com/devices?offset=110&limit=10",
    "prev": "http://myhouse.cisco.com/devices?offset=90&limit=10"
  }
}
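A client typically walks such a collection by following the next link until the server stops providing one. The sketch below simulates that loop against an in-memory stand-in for the /devices service; fetch_page and fetch_all are illustrative helpers, and no network is involved.

```python
def fetch_page(devices, offset, limit):
    """Stand-in for GET /devices?offset=...&limit=... returning one page."""
    page = {
        "pagination": {"offset": offset, "limit": limit, "total": len(devices)},
        "devices": devices[offset:offset + limit],
        "links": {},
    }
    # Only advertise a "next" link while more items remain.
    if offset + limit < len(devices):
        page["links"]["next"] = {"offset": offset + limit, "limit": limit}
    return page

def fetch_all(devices, limit=10):
    """Follow 'next' links until the server stops providing one."""
    collected, link = [], {"offset": 0, "limit": limit}
    while link is not None:
        page = fetch_page(devices, **link)
        collected.extend(page["devices"])
        link = page["links"].get("next")
    return collected

inventory = [f"device-{n}" for n in range(25)]
print(len(fetch_all(inventory, limit=10)))  # three pages of 10, 10, and 5
```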

Rate Limiting and Monetization


Rate limiting is an essential REST API design method for
developers. Rate-limiting techniques are used to improve
security, business impact, and efficiency across the
board. Let's look at how rate limiting helps with
each of these:

Security: Allowing unlimited access to your API is essentially like


handing over the master key to a house and all the rooms therein.
While it’s great when people want to use your API and find it useful,
open access can decrease value and limit business success. Rate
limiting is a critical component of an API’s scalability. Processing limits
are typically measured in transactions per second (TPS). If a user sends
too many requests, API rate limiting can throttle client connections
instead of disconnecting them immediately. Throttling enables clients
to keep using your services while still protecting your API. Finally, keep
in mind that there is always a risk of API requests timing out, and the
open connections also increase the risk of DDoS attacks. (DDoS stands
for distributed denial of service. A DDoS attack consists of a website
being flooded by requests during a short period of time, with the aim of
overwhelming the site and causing it to crash.)

Business impact: One approach to API rate limiting is to offer a free


tier and a premium tier, with different limits for each tier. Limits could
be in terms of sessions or in terms of number of APIs per day or per
month. There are many factors to consider when deciding what to
charge for premium API access. API providers need to consider the
following when setting up API rate limits:

Are requests throttled when they exceed the limit?

Do new calls and requests incur additional fees?

Do new calls and requests receive a particular error code and, if so,
which one?

Efficiency: Unregulated API requests usually lead, eventually, to
slow page load times for websites. Not only does this leave customers
with an unfavorable opinion, but it can also lower your service rankings.

Rate Limiting on the Client Side


As discussed in the previous section, various rate-
limiting factors can be deployed on the server side. As a
good programming practice, if you are writing client-side
code, you should consider the following:

Avoid constant polling by using webhooks to trigger updates.

Cache your own data when you need to store specialized values or
rapidly review very large data sets.

Query with special filters to avoid re-polling unmodified data.

Download data during off-peak hours.
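On either side of the API, throttling is commonly implemented with a token bucket: each request spends a token, and tokens refill at the permitted rate, so short bursts are allowed but the sustained rate is capped. A minimal sketch with arbitrary example rates:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling `rate` tokens/second."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill the tokens earned since the last call, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should throttle, e.g., return HTTP 429

bucket = TokenBucket(rate=5, capacity=2)   # 5 TPS, burst of 2
print([bucket.allow() for _ in range(3)])  # third call exceeds the burst
```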

REST TOOLS
Understanding and testing REST API architecture when
engaging in software development is crucial for any
development process. The following sections explore a
few of the most commonly used tools in REST API
testing and how to use some of their most important
features. Based on this information, you will get a better
idea of how to determine which one suits a particular
development process the best.

Postman
One of the most intuitive and popular HTTP clients is a
tool called Postman
(https://www.getpostman.com/downloads/). It has a
very simple user interface and is very easy to use, even if
you’re just starting out with RESTful APIs. It can handle
the following:

Sending simple GETs and POSTs

Creating and executing collections (to group together requests and run
those requests in a predetermined sequence)

Writing tests (scripting requests with the use of dynamic variables,


passing data between requests, and so on)

Chaining, which allows you to use the output of response as an input to


another request

Generating simple code samples in multiple programming languages

Importing and executing collections created by the community

Now, let’s take a look at several Postman examples.


Figure 7-12 shows the user interface of Postman calling
an HTTP GET to the Postman Echo server, and Figure 7-
13 shows how easy it is to send a POST request using the
same interface.
Figure 7-12 Postman: HTTP GET from the Postman
Echo Server

Figure 7-13 Postman: HTTP POST to the Postman


Echo Server

Figure 7-14 illustrates collections. A collection lets you


group individual requests together. You can then
organize these requests into folders. Figure 7-14 shows
the user interface of Postman, with a default collection
that interacts with the Postman Echo Server. Using this
interface is a very good way to learn about various
options for sending or getting REST-based information.
Figure 7-14 Postman Collection

It is possible to generate code for any REST API call that


you try in Postman. After a GET or POST call is made,
you can use the Generate Code option and choose the
language you prefer. Figure 7-15 shows an example of
generating Python code for a simple GET request.
Figure 7-15 Postman Automatic Code Generation

Postman also has other helpful features, such as


environments. An environment is a set of key/value
pairs, where each key is the name of a variable. Variables
let you customize requests, so you can easily switch
between different setups without changing
your requests.

Finally, Postman stores a history of past calls so you can


quickly reissue a call. Postman even includes some nice
touches such as autocompletion for standard HTTP
headers and support for rendering a variety of payloads,
including JSON, HTML, and even multipart payloads.

You can find Postman examples at


https://learning.postman.com/docs/postman/launching
-postman/introduction/.

curl
curl is an extensive command-line tool that can be
downloaded from https://curl.haxx.se. curl can be used
on just about any platform on any hardware that exists
today. Regardless of what you are running and where,
the most basic curl commands just work.

With curl, you commonly use a couple of different


command-line options:

-d: This option allows you to pass data to the remote server. You can
either embed the data in the command or pass the data using a file.

-H: This option allows you to add an HTTP header to the request.

--insecure (or -k): This option tells curl to skip HTTPS certificate
validation.

-c: This option stores data received by the server. You can reuse this
data in subsequent commands with the -b option.

-b: This option allows you to pass cookie data.

-X: This option allows you to specify the HTTP method, which
normally defaults to GET.

Now let’s take a look at some examples of how to use


curl. Example 7-7 shows how to use curl to call a simple
GET request.

Example 7-7 Sample HTTP GET Using curl


$ curl -sD - https://postman-echo.com/get?


test=123
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Tue, 27 Aug 2019 04:59:34 GMT
ETag: W/"ca-42Kz98xXW2nwFREN74xZNS6JeJk"
Server: nginx
set-cookie:
sails.sid=s%3AxZUPHE3Ojk1yts3qrUFqTj_MzBQZZR5n.NrjPkNm0WplJ7%2F%2BX9O7VU

TFpKHpJySLzBytRbnlzYCw; Path=/; HttpOnly


Vary: Accept-Encoding
Content-Length: 202
Connection: keep-alive
{"args":{"test":"123"},"headers":{"x-forwarded-
proto":"https","host":"postman-
echo.com","accept":"*/*","user-
agent":"curl/7.54.0","x-forward-
ed-port":"443"},"url":"https://postman-
echo.com/get?test=123"}

Example 7-8 shows how to use curl to call a simple POST


request.

Example 7-8 Sample HTTP POST Using curl


$ curl -sD - -X POST https://postman-


echo.com/post -H 'cache-control: no-cache'
-H 'content-type: text/plain' -d 'hello
DevNet'
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Tue, 27 Aug 2019 05:16:58 GMT
ETag: W/"13a-0lMLfkxl7vDVWfb06pyVxdZlaug"
Server: nginx
set-cookie:
sails.sid=s%3AwiFXmSNJpzY0ONduxUCAE8IodwNg9Z2Y.j%2BJ5%2BOmch8XEq8jO1vzH8

kjNBi8ecJij1rGT8D1nBhE; Path=/; HttpOnly


Vary: Accept-Encoding
Content-Length: 314
Connection: keep-alive

{"args":{},"data":"hello DevNet","files":
{},"form":{},"headers":{"x-forwarded-
proto":"https","host":"postman-
echo.com","content-
length":"12","accept":"*/*","ca
che-control":"no-cache","content-
type":"text/plain","user-
agent":"curl/7.54.0","x-
forwarded-
port":"443"},"json":null,"url":"https://postman-
echo.com/post"}

Example 7-9 shows how to use curl to call a simple GET


request with Basic Auth sent via the header.
Example 7-9 Basic Auth Using curl

$ curl -sD - -X GET https://postman-


echo.com/basic-auth -H 'authorization: Basic
cG9zdG1hbjpwYXNzd29yZA==' -H 'cache-control:
no-cache'
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Tue, 27 Aug 2019 05:21:00 GMT
ETag: W/"16-sJz8uwjdDv0wvm7//BYdNw8vMbU"
Server: nginx
set-cookie: sails.sid=s%3A4i3UW5-
DQCMpey8Z1Ayrqq0izt4KZR5-.Bl8QDnt44B690E8J06qyC-

s8oyCLpUfEsFxLEFTSWSC4; Path=/; HttpOnly


Vary: Accept-Encoding
Content-Length: 22
Connection: keep-alive
{"authenticated":true}
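The Base64 string in the authorization header of Example 7-9 is an encoding, not encryption, of username:password. You can reproduce it with Python's standard library (postman/password are the published test credentials of the Postman Echo service):

```python
import base64

# Basic Auth: base64-encode "username:password" and prefix with "Basic ".
username, password = "postman", "password"
credentials = f"{username}:{password}".encode("utf-8")
token = base64.b64encode(credentials).decode("ascii")

auth_header = {"Authorization": f"Basic {token}"}
print(auth_header)  # the same value sent with curl -H in Example 7-9
```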

HTTPie
HTTPie is a modern, user-friendly, and cross-platform
command-line HTTP client written in Python. It is
designed to make CLI interaction with web services easy
and user friendly. Its simple HTTP commands enable
users to send HTTP requests using intuitive syntax.
HTTPie is used primarily for testing, trouble-free
debugging, and interacting with HTTP servers, web
services, and RESTful APIs. For documentation, download,
and installation details, see https://httpie.org/doc.
HTTPie offers the following features:

HTTPie comes with an intuitive UI and supports JSON.

It uses expressive and intuitive command syntax.

It allows for syntax highlighting, formatting, and colorized terminal output.

It allows you to use HTTPS, proxies, and authentication.

It provides support for forms and file uploads.

It provides support for arbitrary request data and headers.

It enables Wget-like downloads and extensions.

Now let’s take a look at some examples of using HTTPie.
Example 7-10 shows how to use HTTPie to make a simple
GET request.

Example 7-10 Sample HTTP GET Using HTTPie



$ http https://postman-echo.com/get?test=123
HTTP/1.1 200 OK
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 179
Content-Type: application/json; charset=utf-8
Date: Tue, 27 Aug 2019 05:27:17 GMT
ETag: W/"ed-mB0Pm0M3ExozL3fgwq7UlH9aozQ"
Server: nginx
Vary: Accept-Encoding
set-cookie: sails.sid=s%3AYCeNAWJG7Kap5wvKPg8HYlZI5SHZoqEf.r7Gi96fe5g7%2FSp0jaJk%2FaVRpHZp3Oj5tDxiM8TPZ%2Bpc; Path=/; HttpOnly

{
    "args": {
        "test": "123"
    },
    "headers": {
        "accept": "*/*",
        "accept-encoding": "gzip, deflate",
        "host": "postman-echo.com",
        "user-agent": "HTTPie/1.0.2",
        "x-forwarded-port": "443",
        "x-forwarded-proto": "https"
    },
    "url": "https://postman-echo.com/get?test=123"
}

Python Requests
Requests is a Python module that you can use to send
HTTP requests. It is an easy-to-use library with a lot of
possibilities ranging from passing parameters in URLs to
sending custom headers and SSL verification. The
Requests library is a very handy tool you can use
whenever you programmatically start using any APIs.
Here you will see how to use this library to send simple
HTTP requests in Python as a way to illustrate its ease of
use.

You can use Requests with Python versions 2.7 and 3.x.
Requests is an external module, so it needs to be
installed before you can use it. Example 7-11 shows the
command you use to install the Requests package for
Python.

Example 7-11 Installing the Requests Package for Python

$ pip3 install requests

To add HTTP headers to a request, you can simply pass
them in a Python dict to the headers parameter.
Similarly, you can send your own cookies to a server by
using a dict passed to the cookies parameter. Example
7-12 shows a simple Python script that uses the Requests
library and does a GET request to the Postman Echo
server.

Example 7-12 Simple HTTP GET Using Python Requests

import requests

url = "https://postman-echo.com/get"
querystring = {"test": "123"}
headers = {}
response = requests.request("GET", url, headers=headers, params=querystring)
print(response.text)

Example 7-13 shows a simple Python script that uses the
Requests library and does a POST request to the
Postman Echo server. Notice that the headers field is
populated with the content type and that a variable called
payload carries some sample text to send to the service. The
response to the request is stored in a response object
called response. Everything in the response can be
parsed, and further actions can be taken. This example
simply prints the values of a few attributes of the
response.

Example 7-13 Simple HTTP POST Using Python Requests

import requests

url = "https://postman-echo.com/post"
payload = "hello DevNet"
headers = {'content-type': "text/plain"}
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)

Example 7-14 shows a simple Python script that uses the
Requests library and does a GET request to the Postman
Echo server. One difference you will notice between
Example 7-12 and Example 7-14 is related to
authentication. With the Requests library, authentication
is usually done by passing the 'authorization' header
along with the type and key.

Example 7-14 Basic Auth Using Python Requests

import requests

url = "https://postman-echo.com/basic-auth"
headers = {'authorization': "Basic cG9zdG1hbjpwYXNzd29yZA=="}
response = requests.request("GET", url, headers=headers)
print(response.text)

REST API Debugging Tools for Developing APIs


As you start playing with RESTful APIs, you are bound to
encounter errors. You can use several techniques to
determine the nature of a problem. As you saw in Table
7-3, RESTful APIs use several mechanisms to indicate
the results of REST calls and errors that occur during
processing. You can use these methods to start your
debugging journey for a RESTful application. Usually the
error code returned is the biggest hint you can receive.
Once you have this information, you can use tools like
Postman and curl to make simple API calls and see the
request and response headers. In addition, other tools that
are built into web browsers allow you to see traces
and do other types of debugging. Most browsers include
some type of developer tools, such as Safari’s Web
Development Tools, Chrome’s DevTools, and Firefox’s
Developer Tools. Such tools are included with browsers
by default and enable you to inspect API calls quickly.
Finally, if you plan on building your own test
environment or sandbox, you might want to use tools
like Simple JSON Server (an open-source server that you
can clone and run in your environment for playing with
and learning about RESTful APIs).
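As a trivial illustration of how far the status code alone can take you, the sketch below maps common REST status codes to a sensible first debugging step. The helper function is hypothetical (not part of any library), and the hint wording is only a suggestion:

```python
def debug_hint(status_code):
    """Return a first debugging step for a REST status code (illustrative only)."""
    hints = {
        400: "Bad Request: check the request body and parameter syntax",
        401: "Unauthorized: check the credentials or token you sent",
        403: "Forbidden: the credentials are valid, but access is denied",
        404: "Not Found: check the endpoint URL and resource ID",
        429: "Too Many Requests: you hit a rate limit; back off and retry",
        500: "Internal Server Error: the problem is on the API side",
    }
    # Fall back to the documentation for anything not listed here
    return hints.get(status_code, "See the HTTP specification or the API docs")

print(debug_hint(404))
# -> Not Found: check the endpoint URL and resource ID
```

Wrapping a check like this around every API call in a script is a cheap way to turn raw status codes into actionable messages while you develop.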

EXAM PREPARATION TASKS


As mentioned in the section “How to Use This Book” in
the Introduction, you have a couple of choices for exam
preparation: the exercises here, Chapter 19, “Final
Preparation,” and the exam simulation questions in the
Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS


Review the most important topics in this chapter, noted
with the Key Topic icon in the outer margin of the page.
Table 7-5 lists these key topics and the page number on
which each is found.
Table 7-5 Key Topics

Key Topic Element    Description                                  Page

List                 The elements of a URL                        149

Table 7-2            Request Methods                              150

Table 7-4            HTTP Status Codes                            154

Paragraphs           Data formats and XML, YAML, and JSON data    155

Section              REST Constraints                             160

Paragraph            Pagination                                   162

Section              REST Tools                                   164

DEFINE KEY TERMS


Define the following key terms from this chapter and
check your answers in the glossary:

API
REST
CRUD
YAML
JSON
webhook
Chapter 8

Cisco Enterprise Networking
Management Platforms and APIs
This chapter covers the following topics:
What Is an SDK?: This section covers what an SDK is and what it is
used for.

Cisco Meraki: This section covers the Cisco Meraki platform and the
REST APIs it exposes.

Cisco DNA Center: This section covers Cisco DNA Center and the
REST APIs that it publicly exposes.

Cisco SD-WAN: This section covers Cisco SD-WAN and the REST
APIs exposed through Cisco vManage.

In Chapter 7, “RESTful API Requests and Responses,”
you learned about REST API concepts. This chapter
begins exploring software development kits (SDKs) as
well as Cisco enterprise networking products, their
APIs, and the public SDKs that come with them. In
particular, this chapter explores the Cisco Meraki,
Cisco DNA Center, and Cisco SD-WAN platforms and
the REST APIs they expose. This chapter provides a
short introduction to each of these solutions and
shows authentication and authorization API calls for
each platform. This chapter also covers basic API calls,
such as for obtaining a list of devices and client health
status. API tools such as curl and Postman are used
throughout the chapter. Python SDKs and scripts are
also explored as an introduction to network
programmability and automation.

“DO I KNOW THIS ALREADY?” QUIZ


The “Do I Know This Already?” quiz allows you to assess
whether you should read this entire chapter thoroughly
or jump to the “Exam Preparation Tasks” section. If you
are in doubt about your answers to these questions or
your own assessment of your knowledge of the topics,
read the entire chapter. Table 8-1 lists the major
headings in this chapter and their corresponding “Do I
Know This Already?” quiz questions. You can find the
answers in Appendix A, “Answers to the ‘Do I Know This
Already?’ Quiz Questions.”

Table 8-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section          Questions

What Is an SDK? 1–2

Cisco Meraki 3–5

Cisco DNA Center 6–8

Cisco SD-WAN 9–10

Caution
The goal of self-assessment is to gauge your mastery of
the topics in this chapter. If you do not know the
answer to a question or are only partially sure of the
answer, you should mark that question as wrong for
purposes of self-assessment. Giving yourself credit for
an answer that you correctly guess skews your self-
assessment results and might provide you with a false
sense of security.

1. What are some of the features of a good SDK? (Choose three.)
1. Is easy to use
2. Is well documented
3. Integrates well with other SDKs
4. Impacts hardware resources

2. What are the advantages of using an SDK? (Choose two.)
1. Quicker integration
2. Faster development
3. Advanced customization
4. Error handling

3. What APIs does the Cisco Meraki platform provide
to developers? (Choose two.)
1. Captive Portal API
2. Scanning API
3. Access Point API
4. Infrastructure API

4. What is the name of the Cisco Meraki Dashboard
API authentication header?
1. X-Cisco-Meraki-API-Key
2. X-Cisco-Meraki-Token
3. X-Cisco-Meraki-Session-key
4. Bearer

5. What is the base URL for the Cisco Meraki
Dashboard API?
1. https://api.meraki.com/api/v0
2. https://api.meraki.com/v1/api
3. https://api.meraki.cisco.com/api/v0
4. https://api.meraki.cisco.com/v1/api

6. What type of authentication do the Cisco DNA
Center platform APIs use?
1. No auth
2. API key
3. Basic auth
4. Hash-based message authentication

7. When specifying the timestamp parameter with the
Cisco DNA Center APIs, what format should the time be in?
1. UNIX time
2. OS/2
3. OpenVMS
4. SYSTEMTIME
8. What is the output of the multivendor SDK for
Cisco DNA Center platform?
1. Device driver
2. Device package
3. Software driver
4. Software package

9. Which component of the Cisco SD-WAN fabric
exposes a public REST API interface?
1. vSmart
2. vBond
3. vManage
4. vEdge

10. When initially authenticating to the Cisco SD-WAN
REST API, how are the username and password encoded?
1. application/postscript
2. application/xml
3. application/json
4. application/x-www-form-urlencoded

FOUNDATION TOPICS
WHAT IS AN SDK?
An SDK (software development kit) or devkit is a set of
software development tools that developers can use to
create software or applications for a certain platform,
operating system, computer system, or device. An SDK
typically contains a set of libraries, APIs, documentation,
tools, sample code, and processes that make it easier for
developers to integrate, develop, and extend the
platform. An SDK is created for a specific programming
language, and it is very common to have the same
functionality exposed through SDKs in different
programming languages.

Chapter 6, “Application Programming Interfaces (APIs),”
describes what an API is, and Chapter 7 covers the most
common API framework these days, the REST API
framework. As a quick reminder, an application
programming interface (API) is, as the name implies, a
programming interface that implements a set of rules
developers and other software systems can use to
interact with a specific application.

As you will see in this chapter and throughout this book,
where there is an API, there is usually also an SDK.
Software developers can, of course, implement and
spend time on developing code to interact with an API
(for example, building their own Python classes and
methods to authenticate, get data from the API, or create
new objects in the API), or they can take advantage of the
SDK, which makes all these objects already available.

Besides offering libraries, tools, documentation, and
sample code, some SDKs also offer their own integrated
development environments (IDEs). For example, the
SDKs available for mobile application development on
Google Android and Apple iOS also make available an
IDE to give developers a complete solution to create, test,
debug, and troubleshoot their applications.

A good SDK has these qualities:

Is easy to use

Is well documented

Has value-added functionality

Integrates well with other SDKs

Has minimal impact on hardware resources

In order to be used by developers, an SDK should be easy
to use and follow best practices for software development
in the programming language for which the SDK was
developed. For example, for Python development, there
are Python Enhancement Proposals (PEPs), which are
documents that provide guidance and spell out best
practices for how Python code should be organized,
packaged, released, deprecated, and so on. PEP8 is a
popular standard for styling Python code and is
extensively used in the developer community.
Documenting the SDK inline as well as having external
documentation is critical to developer adoption and the
overall quality of the SDK. Having good, up-to-date
documentation of the SDK makes the adoption and
understanding of the code and how to use it much easier.
A good SDK also adds value by saving development time
and providing useful features. Integrating with other
SDKs and development tools should be easy and
scalable, and the code should be optimized for minimal
hardware resource utilization as well as execution time.

SDKs provide the following advantages:

Quicker integration

Faster and more efficient development

Brand control

Increased security

Metrics

As mentioned previously, there are significant
development time savings when adopting SDKs, as the
functionality and features provided by an SDK don’t have
to be developed in house. This leads to quicker
integration with the API and quicker time to market. In
addition, brand control can be enforced with the SDK.
The look and feel of applications developed using the
SDK can be uniform and in line with the overall brand
design. For example, applications developed for iOS
using the Apple SDK have a familiar and uniform look
and feel because they use the building blocks provided by
the SDK. Application security best practices can be
enforced through SDKs. When you develop using a
security-conscious SDK, the applications developed have
the SDK’s security features integrated automatically. Sets
of metrics and logging information can be included with
an SDK to provide better insights into how the SDK is
being used and for troubleshooting and performance
tweaking.

It is critical to ensure a great experience for all
developers when interacting with an API. In addition,
offering a great SDK with an API is mandatory for
success.

Cisco has been developing applications and software
since its inception. As the requirements for integrations
with other applications and systems have grown, APIs
have been developed to make it easier for developers and
integrators to create and develop their own solutions and
integrations. Throughout the years, software
architectures have evolved, and currently all Cisco
solutions provide some type of API. As mentioned
earlier, where there is an API, there is usually also an
SDK.

The starting point in exploring all the SDKs that Cisco
has to offer is https://developer.cisco.com. As you will
see in the following sections of this chapter and
throughout this book, there are several SDKs developed
by Cisco and third parties that take advantage of the APIs
that currently exist with all Cisco products.

CISCO MERAKI
Meraki became part of Cisco following its acquisition in
2012. The Meraki portfolio is large, comprising wireless,
switching, security, and video surveillance products. The
differentiating factor for Meraki, compared to similar
products from Cisco and other vendors, is that
management is cloud based. Explore all the current Cisco
Meraki products and offerings at
https://meraki.cisco.com.
From a programmability perspective, the Meraki cloud
platform provides several APIs:

Captive Portal API

Scanning API

MV Sense Camera API

Dashboard API

The Cisco Meraki cloud platform also provides
webhooks, which offer a powerful and lightweight way to
subscribe to alerts sent from the Meraki cloud when an
event occurs. (For more about webhooks, see Chapter 7.)
A Meraki alert includes a JSON-formatted message that
can be configured to be sent to a unique URL, where it
can be further processed, stored, and acted upon to
enable powerful automation workflows and use cases.

The Captive Portal API extends the power of the built-in
Meraki splash page functionality by providing complete
control of the content and authentication process that a
user interacts with when connecting to a Meraki wireless
network. This means Meraki network administrators can
completely customize the portal, including the
onboarding experience for clients connecting to the
network, how the web page looks and feels, and the
authentication and billing processes.

The Scanning API takes advantage of Meraki smart
devices equipped with wireless and BLE (Bluetooth Low
Energy) functionality to provide location analytics and
report on user behavior. This can be especially useful in
retail, healthcare, and enterprise environments, where
business intelligence and information can be extracted
about trends and user engagement and behavior. The
Scanning API delivers data in real time and can be used
to detect Wi-Fi and BLE devices and clients. The data is
exported to a specified destination server through an
HTTP POST of JSON documents. At the destination
server, this data can then be further processed, and
applications can be built on top of the received data.
Taking into consideration the physical placement of the
access points on the floor map, the Meraki cloud can
estimate the location of the clients connected to the
network. The geolocation coordinates of this data vary
based on a number of factors and should be considered
as a best-effort estimate.

The MV Sense Camera API takes advantage of the
powerful onboard processor and a unique architecture to
run machine learning workloads at the edge. Through
the MV Sense API, object detection, classification, and
tracking are exposed and become available for
application integration. You can, for example, extract
business insight from video feeds at the edge without the
high cost of compute infrastructure that is typically
needed with computer imaging and video analytics.
Both REST and MQTT API endpoints are provided, and
information is available in a request or subscribe model.
MQ Telemetry Transport (MQTT) is a client/server
publish/subscribe messaging transport protocol. It is
lightweight, simple, open, and easy to implement. MQTT
is ideal for use in constrained environments such as
Internet of Things (IoT) and machine-to-machine
communication where a small code footprint is required.

The Meraki APIs covered so far are mostly used to


extract data from the cloud platform and build
integrations and applications with that data. The
Dashboard API, covered next, provides endpoints and
resources for configuration, management, and
monitoring automation of the Meraki cloud platform.
The Dashboard API is meant to be open ended and can
be used for many purposes and use cases. Some of the
most common use cases for the Dashboard API are as
follows:
Provisioning new organizations, administrators, networks, devices, and
more

Configuring networks at scale

Onboarding and decommissioning of clients

Building custom dashboards and applications

To get access to the Dashboard API, you first need to
enable it. Begin by logging into the Cisco Meraki
dashboard at https://dashboard.meraki.com using your
favorite web browser and navigating to Organization >
Settings. From there, scroll down and locate the section
named Dashboard API Access and make sure you select
Enable Access and save the configuration changes at the
bottom of the page. Once you have enabled the API,
select your username at the top-right corner of the web
page and select My Profile. In your profile, scroll down
and locate the section named Dashboard API Access and
select Generate New API Key. The API key you generate
is associated with your account. You can generate,
revoke, and regenerate your API key in your profile.
Make sure you copy and store your API key in a safe
place, as whoever has this key can impersonate you and
get access through the Dashboard API to all the
information your account has access to. For security
reasons, the API key is not stored in plaintext in your
profile, so if you lose the key, you will have to revoke the
old one and generate a new one. If you believe that your
API key has been compromised, you can generate a new
one to automatically revoke the existing API key.

Every Dashboard API request must specify an API key
within the request header. If a missing or incorrect API
key is specified, the API returns a 404 HTTP error
message. Recall from Chapter 7 that HTTP error code
404 means that the API resource you were trying to
reach could not be found on the server. This error code
prevents information leakage and unauthorized
discovery of API resources.
The key for the authentication request header is X-Cisco-
Meraki-API-Key, and the value is the API key you
obtained previously.

In order to mitigate abuse and denial-of-service attacks,
the Cisco Meraki Dashboard API is limited to 5 API calls
per second per organization. In the first second, a burst
of an additional 5 calls is allowed, for a maximum of 15
API calls in the first 2 seconds per organization. If the
rate limit has been exceeded, an error message with
HTTP status code 429 is returned. The rate-limiting
technique that the Dashboard API implements is based
on the token bucket model. The token bucket is an
algorithm used to check that the data that is transmitted
in a certain amount of time complies with set limits for
bandwidth and burstiness. Based on this model, if the
number of API requests crosses the set threshold for a
certain amount of time, you have to wait a set amount of
time until you can make another request to the API. The
time you have to wait depends on how many more
requests you have performed above the allowed limit; the
more requests you have performed, the more time you
have to wait.
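Meraki does not publish its limiter's implementation, but the token bucket model itself is easy to sketch. In the illustrative Python below (the rate and burst numbers simply echo the limits described above), tokens refill continuously over time, and a call is rejected once the bucket is empty:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter sketch (not Meraki's actual implementation)."""

    def __init__(self, rate, burst):
        self.rate = rate              # tokens refilled per second
        self.capacity = burst         # maximum tokens the bucket can hold
        self.tokens = float(burst)    # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True               # request may proceed
        return False                  # caller should wait and retry (think HTTP 429)

bucket = TokenBucket(rate=5, burst=10)
allowed = sum(bucket.allow() for _ in range(15))  # 15 back-to-back calls
print(allowed)
# -> 10 (the burst is exhausted; the remaining calls are rejected)
```

A well-behaved API client inverts this logic: when a 429 comes back, it sleeps briefly and retries instead of hammering the endpoint.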

The Cisco Meraki Dashboard API uses the base URL
https://api.meraki.com/api/v0. Keep in mind that the
API will evolve, and different versions will likely be
available in the future. Always check the API
documentation for the latest information on all Cisco
APIs, including the Meraki APIs, at
https://developer.cisco.com.

To make it easier for people to become comfortable with
the Meraki platform, the Dashboard API is organized to
mirror the structure of the Meraki dashboard. When you
become familiar with either the API or the GUI, it should
be easy to switch between them. The hierarchy of the
Dashboard API looks as follows:

Organizations
Networks

Devices

Uplink

Most Dashboard API calls require either the organization
ID or the network ID as part of the endpoint. (You will
see later in this chapter how to obtain these IDs and how
to make Dashboard API calls.) When you have these IDs,
you can build and make more advanced calls to collect
data, create and update new resources, and configure
and make changes to the network. Remember that all
API calls require an API key.

If your Meraki dashboard contains a large number of
organizations, networks, and devices, you might have to
consider pagination when making API calls. Recall from
Chapter 7 that pagination is used when the data returned
from an API call is too large and needs to be limited to a
subset of the results. The Meraki Dashboard API
supports three special query parameters for pagination:

perPage: The number of entries to be returned in the current request

startingAfter: A value used to indicate that the returned data will
start immediately after this value

endingBefore: A value used to indicate that the returned data will
end immediately before this value

While the types of the startingAfter and
endingBefore values differ based on API endpoints,
they generally are either timestamps specifying windows
in time for which the data should be returned or integer
values specifying IDs and ranges of IDs.
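The cursor pattern these parameters implement can be sketched without any network calls. In the toy Python below, get_page stands in for a paginated Dashboard API endpoint, and the loop keeps feeding the last ID of each page back in as the startingAfter value until no data remains (the function names, IDs, and page size are illustrative, not part of the real API):

```python
def get_page(records, starting_after=None, per_page=3):
    """Stand-in for a paginated API endpoint over a sorted list of IDs."""
    start = 0 if starting_after is None else records.index(starting_after) + 1
    return records[start:start + per_page]

def get_all(records):
    results, cursor = [], None
    while True:
        page = get_page(records, starting_after=cursor)
        if not page:
            break                 # an empty page means we are past the end
        results.extend(page)
        cursor = page[-1]         # last entry becomes the next startingAfter
    return results

device_ids = [101, 102, 103, 104, 105, 106, 107]
print(get_all(device_ids))
# -> [101, 102, 103, 104, 105, 106, 107]
```

A real client would issue an HTTP GET with perPage and startingAfter as query parameters in place of get_page, but the loop structure is the same.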

The Dashboard API also supports action batches, which
make it possible to submit multiple configuration
requests in a single transaction and are ideal for initial
provisioning of a large number of devices or performing
large configuration changes throughout the whole
network. Action batches also provide a mechanism to
avoid hitting the rate limitations implemented in the API
for high-scale configuration changes as you can
implement all the changes with one or a small number of
transactions instead of a large number of individual API
requests. Action batch transactions can be run either
synchronously, waiting for the API call return before
continuing, or asynchronously, in which case the API call
does not wait for a return as the call is placed in a queue
for processing. (In Chapter 7 you saw the advantages and
disadvantages of both synchronous and asynchronous
APIs.) With action batches, you can be confident that all
the updates contained in the transaction were submitted
successfully before being committed because batches are
run in an atomic fashion: all or nothing.

After you have enabled the Meraki Dashboard API,
generated the API key, and saved it in a safe place, you
are ready to interact with the API. For the rest of this
chapter, you will use the always-on Cisco DevNet Meraki
Sandbox, which can be found at
https://developer.cisco.com/sandbox. The API key for
this sandbox is
15da0c6ffff295f16267f88f98694cf29a86ed87.

At this point, you need to obtain the organization ID for
this account. As you saw in Chapter 7, there are several
ways you can interact with an API: You can use tools like
curl and Postman, or you can interact with the API
through programming languages and the libraries that
they provide. In this case, you will use curl and Postman
to get the organization ID for the Cisco DevNet Sandbox
account and then the Cisco Meraki Python SDK.

As mentioned earlier, the base URL for the Dashboard
API is https://api.meraki.com/api/v0. In order to get the
organizations for the account with the API key
mentioned previously, you have to append the
/organizations resource to the base URL. The resulting
endpoint becomes
https://api.meraki.com/api/v0/organizations. You also
need to include the X-Cisco-Meraki-API-Key header for
authentication purposes. This header will contain the
API key for the DevNet Sandbox Meraki account. The
curl command should look as follows in this case:


curl -I -X GET \
  --url 'https://api.meraki.com/api/v0/organizations' \
  -H 'X-Cisco-Meraki-API-Key: 15da0c6ffff295f16267f88f98694cf29a86ed87'

The response should look as shown in Example 8-1.

Example 8-1 Headers of the GET Organizations REST API Call


HTTP/1.1 302 Found
Server: nginx
Date: Sat, 17 Aug 2019 19:05:25 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: Fri, 01 Jan 1990 00:00:00 GMT
X-Frame-Options: sameorigin
X-Robots-Tag: none
Location: https://n149.meraki.com/api/v0/organizations
X-UA-Compatible: IE=Edge,chrome=1
X-Request-Id: 87654dbd16ae23fbc7e3282a439b211c
X-Runtime: 0.320067
Strict-Transport-Security: max-age=15552000; includeSubDomains

You can see in Example 8-1 that the response code for
the request is 302. This indicates a redirect to the URL
value in the Location header. Redirects like the one in
Example 8-1 can occur with any API call within the
Dashboard API, including POST, PUT, and DELETE. For
GET calls, the redirect is specified through a 302 status
code, and for any non-GET calls, the redirects are
specified with 307 or 308 status codes. When you specify
the -I option for curl, only the headers of the response
are displayed to the user. At this point, you need to run
the curl command again but this time specify the
resource as
https://n149.meraki.com/api/v0/organizations, remove
the -I flag, and add an Accept header to specify that the
response to the call should be in JSON format. The
command should look like this:


curl -X GET \
  --url 'https://n149.meraki.com/api/v0/organizations' \
  -H 'X-Cisco-Meraki-API-Key: 15da0c6ffff295f16267f88f98694cf29a86ed87' \
  -H 'Accept: application/json'

The response in this case contains the ID of the DevNet
Sandbox organization in JSON format:


[
    {
        "name" : "DevNet Sandbox",
        "id" : "549236"
    }
]
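In a script, the natural next step is to parse this JSON body and reuse the ID when building the follow-up request. A short sketch with the standard json module (the body string below is the response shown above, and the n149 base URL is the one discovered via the redirect):

```python
import json

# The JSON body returned by the GET organizations call above
body = '[{"name": "DevNet Sandbox", "id": "549236"}]'
organizations = json.loads(body)
org_id = organizations[0]["id"]

# Build the endpoint for the next call in this walkthrough
url = "https://n149.meraki.com/api/v0/organizations/" + org_id + "/networks"
print(url)
# -> https://n149.meraki.com/api/v0/organizations/549236/networks
```

Chaining calls this way, with each response supplying the IDs for the next request, is the typical workflow for the Dashboard API.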

Now let’s look at how you can obtain the organization ID
for the Cisco DevNet Sandbox Meraki account by using
Postman. As mentioned in Chapter 7, Postman is a
popular tool used to explore APIs and create custom
requests; it has extensive built-in support for different
authentication mechanisms, headers, parameters,
collections, environments, and so on. By default,
Postman has the Automatically Follow Redirects option
enabled in Settings, so you do not have to change the
https://api.meraki.com/api/v0/organizations resource;
the redirect is followed in the background by Postman. If
you disable the Automatically Follow Redirects option in
the Postman settings, you should see exactly the same
behavior you just saw with curl. Figure 8-1 shows the
Postman client interface with all the fields (the method
of the call, the resource URL, and the headers) populated
so that the Cisco Meraki REST API returns all the
organizations to which this account has access.

Figure 8-1 GET Organizations REST API Call in Postman

In the body of the response, you see the same JSON-formatted
output as before, with the same organization
ID for the DevNet Sandbox account.

Let’s explore the Meraki Dashboard API further and
obtain the networks associated with the DevNet Sandbox
organization. If you look up the API documentation at
https://developer.cisco.com/meraki/api/#/rest/api-
endpoints/networks/get-organization-networks, you see
that in order to obtain the networks associated with a
specific organization, you need to do a GET request to
https://api.meraki.com/api/v0/organizations/{organizationId}/networks,
where {organizationId} is the ID you
obtained previously, in your first interaction with the
Dashboard API. You have also discovered that the base
URL for the DevNet Sandbox organization is
https://n149.meraki.com/api/v0. You can modify the
API endpoint with this information to use the following
curl command:


curl -X GET \
  --url 'https://n149.meraki.com/api/v0/organizations/549236/networks' \
  -H 'X-Cisco-Meraki-API-Key: 15da0c6ffff295f16267f88f98694cf29a86ed87' \
  -H 'Accept: application/json'

The response from the API should contain a list of all the
networks that are part of the DevNet Sandbox
organization and should look similar to the output in
Example 8-2.

Example 8-2 List of All the Networks in a Specific Organization


[
    {
        "timeZone" : "America/Los_Angeles",
        "tags" : " Sandbox ",
        "organizationId" : "549236",
        "name" : "DevNet Always On Read Only",
        "type" : "combined",
        "disableMyMerakiCom" : false,
        "disableRemoteStatusPage" : true,
        "id" : "L_646829496481099586"
    },
    {
        "organizationId" : "549236",
        "tags" : null,
        "timeZone" : "America/Los_Angeles",
        "id" : "N_646829496481152899",
        "disableRemoteStatusPage" : true,
        "name" : "test - mx65",
        "disableMyMerakiCom" : false,
        "type" : "appliance"
    }, ... omitted output

The output in Example 8-2 shows a list of all the
networks that are part of the DevNet Sandbox
organization with an ID of 549236. For each network,
the output contains the same information found in the
Meraki dashboard. You should make a note of the first
network ID returned by the API as you will need it in the
next step in your exploration of the Meraki Dashboard
API.

Now you can try to get the same information—a list of all
the networks that are part of the DevNet Sandbox
organization—by using Postman. As you’ve seen,
Postman by default does the redirection automatically,
so you can specify the API endpoint as
https://api.meraki.com/api/v0/organizations/549236/n
etworks. You need to make sure to specify the GET
method, the X-Cisco-Meraki-API-Key header for
authentication, and the Accept header, in which you
specify that you would like the response from the API to
be in JSON format. Figure 8-2 shows the Postman client
interface with all the information needed to obtain a list
of all the networks that belong to the organization with
ID 549236.
Figure 8-2 GET Networks REST API Call in
Postman

The body of the response contains exactly the same information as before: a complete list of all the networks
that are part of the DevNet Sandbox organization.
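
The same call can also be scripted in Python 3. The following short sketch uses the popular requests library, which is an assumption here (this part of the chapter uses only curl and Postman); the API key, base URL, and organization ID are the DevNet Sandbox values from the text:

```python
# Sketch: the get-organization-networks call in Python 3. The helper only
# assembles the URL and headers; the live call needs network access and
# the third-party requests library (pip install requests).

BASE_URL = "https://n149.meraki.com/api/v0"  # DevNet Sandbox base URL
API_KEY = "15da0c6ffff295f16267f88f98694cf29a86ed87"  # Sandbox API key

def build_request(org_id):
    """Return the URL and headers for the get-organization-networks call."""
    url = "{}/organizations/{}/networks".format(BASE_URL, org_id)
    headers = {
        "X-Cisco-Meraki-API-Key": API_KEY,
        "Accept": "application/json",
    }
    return url, headers

def get_organization_networks(org_id):
    """Perform the GET request and return the parsed JSON response."""
    import requests  # third-party library, imported only when needed
    url, headers = build_request(org_id)
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    return response.json()
```

Calling get_organization_networks("549236") would return the same list of networks shown in Example 8-2.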

Next, you can obtain a list of all devices that are part of
the network that has the name “DevNet Always On Read
Only” and ID L_646829496481099586. Much as in the
previous steps, you start by checking the API
documentation to find the API endpoint that will return
this data to you. The API resource that contains the
information you are looking for is
/networks/{networkId}/devices, as you can see from the
API documentation at the following link:
https://developer.cisco.com/meraki/api/#/rest/api-
endpoints/devices/get-network-devices. You add the
base URL for the DevNet Sandbox account,
https://n149.meraki.com/api/v0, and populate
{networkId} with the value you obtained in the previous
step. Combining all this information, the endpoint that
will return the information you are seeking is
https://n149.meraki.com/api/v0/networks/L_6468294
96481099586/devices. The curl command in this case is
as follows:


curl -X GET \
  --url 'https://n149.meraki.com/api/v0/networks/L_646829496481099586/devices' \
  -H 'X-Cisco-Meraki-API-Key: 15da0c6ffff295f16267f88f98694cf29a86ed87' \
  -H 'Accept: application/json'

And the response from the API should be similar to the one in Example 8-3.
Example 8-3 List of All the Devices in a Specific
Network

[
  {
    "wan2Ip" : null,
    "networkId" : "L_646829496481099586",
    "lanIp" : "10.10.10.106",
    "serial" : "QYYY-WWWW-ZZZZ",
    "tags" : " recently-added ",
    "lat" : 37.7703718,
    "lng" : -122.3871248,
    "model" : "MX65",
    "mac" : "e0:55:3d:17:d4:23",
    "wan1Ip" : "10.10.10.106",
    "address" : "500 Terry Francois, San Francisco"
  },
  {
    "switchProfileId" : null,
    "address" : "",
    "lng" : -122.098531723022,
    "model" : "MS220-8P",
    "mac" : "88:15:44:df:f3:af",
    "tags" : " recently-added ",
    "serial" : "QAAA-BBBB-CCCC",
    "networkId" : "L_646829496481099586",
    "lanIp" : "192.168.128.2",
    "lat" : 37.4180951010362
  }
]

Notice from the API response that the “DevNet Always On Read Only” network has two devices: an MX65
security appliance and an eight-port MS-220 switch. The
output also includes geographic coordinates, MAC
addresses, serial numbers, tags, model numbers, and
other information. The same information is available in
the Meraki GUI dashboard.

Following the process used so far, you can obtain the same information but this time using Postman. Since the
redirection is automatically done for you, the API
endpoint for Postman is
https://api.meraki.com/api/v0/networks/L_646829496
481099586/devices. You populate the two headers
Accept and X-Cisco-Meraki-API-Key with their
respective values, select the GET method, and click the
Send button. If everything went well, the response code
should be 200 OK, and the body of the response should
contain exactly the same information found using curl.
Figure 8-3 shows the Postman client interface with all
the headers and fields needed to get a list of all devices
that are part of the network with ID
L_646829496481099586.

Figure 8-3 GET Devices REST API Call in Postman

So far, you have explored the Meraki Dashboard API using curl and Postman. You first obtained the
organization ID of the DevNet Sandbox Meraki account
and then, based on that ID, you obtained all the networks
that are part of the organization and then used one of the
network IDs you obtained to find all the devices that are
part of that specific network. This is, of course, just a
subset of all the capabilities of the Meraki Dashboard
API, and we leave it as an exercise for you to explore in
more depth all the capabilities and functionalities of the
API.

As a final step in this section, let’s take a look at the Meraki Python SDK. As of this writing, there are two
Meraki SDKs for the Dashboard API: one is Python
based and the other is Node.js based. The Meraki Python
SDK used in this book is version 1.0.2; it was developed
for Python 3 and implements a complete set of classes,
methods, and functions to simplify how users interact
with the Dashboard API in Python.

In order to get access to the SDK, you need to install the meraki-sdk module. As a best practice, always use virtual
environments with all Python projects. Once a virtual
environment is activated, you can run pip install
meraki-sdk to get the latest version of the SDK. In this
section, you follow the same three steps you have
followed in other examples in this chapter: Get the
organization ID for the DevNet Sandbox account, get a
list of all the networks that are part of this organization,
and get all the devices associated to the “DevNet Always
on Read Only” network. The Python 3 code to
accomplish these three tasks might look as shown in
Example 8-4.

You need to import the MerakiSdkClient class from the meraki_sdk module. You use the MerakiSdkClient class
to create an API client by passing the API key as a
parameter and creating an instance of this class called
MERAKI.

Example 8-4 Python Script That Uses meraki_sdk



#! /usr/bin/env python
from meraki_sdk.meraki_sdk_client import MerakiSdkClient

# Cisco DevNet Sandbox Meraki API key
X_CISCO_MERAKI_API_KEY = '15da0c6ffff295f16267f88f98694cf29a86ed87'

# Establish a new client connection to the Meraki REST API
MERAKI = MerakiSdkClient(X_CISCO_MERAKI_API_KEY)

# Get a list of all the organizations for the Cisco DevNet account
ORGS = MERAKI.organizations.get_organizations()
for ORG in ORGS:
    print("Org ID: {} and Org Name: {}".format(ORG['id'], ORG['name']))

PARAMS = {}
PARAMS["organization_id"] = "549236"  # Demo Organization "DevNet Sandbox"

# Get a list of all the networks for the Cisco DevNet organization
NETS = MERAKI.networks.get_organization_networks(PARAMS)
for NET in NETS:
    print("Network ID: {0:20s}, Name: {1:45s},Tags: {2:<10s}".format(
        NET['id'], NET['name'], str(NET['tags'])))

# Get a list of all the devices that are part of the Always On Network
DEVICES = MERAKI.devices.get_network_devices("L_646829496481099586")
for DEVICE in DEVICES:
    print("Device Model: {0:9s},Serial: {1:14s},MAC: {2:17}, Firmware:{3:12s}"
          .format(DEVICE['model'], DEVICE['serial'], DEVICE['mac'],
                  DEVICE['firmware']))

After you instantiate the MerakiSdkClient class, you get an API client object that provides access to all the
methods of the class. You can find the documentation for
the Python SDK at
https://developer.cisco.com/meraki/api/#/python/guid
es/python-sdk-quick-start. This documentation covers
all the classes and methods, their parameters, and input
and output values for the Python SDK implementation.
Unlike with the curl and Postman examples earlier in
this chapter, you do not need to determine the exact API
resource that will return the information you are
interested in; however, you do need to know how the
MerakiSdkClient API class is organized and what
methods are available. There’s a consistent one-to-one
mapping between the Dashboard API and the Python
SDK, so when you are familiar with one of them, you
should find the other one very easy to understand. There
is also no need to pass the API key through the X-Cisco-
Meraki-API-Key header. This is all automatically
handled by the SDK, as are all the redirects that had to be changed manually in the curl examples.

Obtaining the organization ID for the DevNet Sandbox account is as easy as invoking the
organizations.get_organizations() method of the
API client class. The ORGS variable in Example 8-4
contains a list of all the organizations that the DevNet
Sandbox account is a member of. Next, you iterate within
a for loop through all these organizations and display to
the console the organization ID and the organization
name.

Next, you create an empty dictionary called PARAMS and add to it a key called organization_id with the value
549236. Remember that this was the organization ID for
the DevNet Sandbox account. You still use the Meraki
API client instance, but in this case, you invoke the
networks.get_organization_networks() method.
Just as in the case of the REST API calls with curl and
Postman earlier in this section, where you had to specify
the organization ID when building the endpoint to obtain
the list of networks, the
get_organization_networks() method takes as input
the params dictionary, which contains the same
organization ID value but in a Python dictionary format.
The NETS variable stores the output of the API call. In
another iterative loop, information about each network is
displayed to the console.

Finally, you get the list of devices that are part of the
network and have the ID L_646829496481099586.
Recall from earlier that this ID is for the “DevNet Always
on Read Only” network. In this case, you use the
devices.get_network_devices() method of the
Meraki API client instance and store the result in the
DEVICES variable. You iterate over the DEVICES
variable and, for each device in the list, extract and print
to the console the device model, the serial number, the
MAC address, and the firmware version.

Running the Python 3 script discussed here should result in output similar to that shown in Figure 8-4.

Figure 8-4 Output of the Python Script from Example 8-4

CISCO DNA CENTER


Cisco Digital Network Architecture (DNA) is an open,
extensible, software-driven architecture from Cisco that
accelerates and simplifies enterprise network operations.
Behind this new architecture is the concept of intent-
based networking, a new era in networking, in which the
network becomes an integral and differentiating part of
the business. With Cisco DNA and the products behind
it, network administrators define business intents that
get mapped into infrastructure configurations by a
central SDN controller. In the future, an intent-based
network will dynamically adjust itself based on what it
continuously learns from the traffic it transports as well
as the business inputs it gets from the administrator.

Cisco DNA Center is the network management and command center for Cisco DNA. With Cisco DNA Center,
you can provision and configure network devices in
minutes, define a consistent policy throughout a
network, get live and instantaneous statistics, and get
granular networkwide views. Multidomain and
multivendor integrations are all built on top of a secure
platform.

From a programmability perspective, Cisco DNA Center provides a set of REST APIs and SDKs through the Cisco
DNA Center platform that are grouped in the following
categories:

Intent API

Integration API

Multivendor SDK

Events and notifications

The Intent API is a northbound REST API that exposes specific capabilities of the Cisco DNA Center platform.
The main purpose of the Intent API is to simplify the
process of creating workflows that consolidate multiple
network actions into one. An example is the SSID
provisioning API, which is part of the Intent API. When
configuring an SSID on a wireless network, several
operations need to be completed, including creating a
wireless interface, adding WLAN settings, and adding
security settings. The SSID provisioning API combines
all these operations and makes them available with one
API call. This results in a drastic reduction in overall
wireless SSID deployment time and also eliminates
errors and ensures a consistent configuration policy. The
Intent API provides automation capabilities and
simplified workflows for QoS policy configuration,
software image management and operating system
updates for all devices in the network, overall client
health status, and monitoring of application health.
Application developers and automation engineers can
take advantage of this single northbound integration
layer to develop tools and applications on top of the
network.

One of the main goals of the Cisco DNA Center platform is to simplify and streamline end-to-end IT processes.
The Integration API was created for exactly this purpose.
Through this API, Cisco DNA Center platform publishes
network data, events, and notifications to external
systems and at the same time can consume information
from these connected systems. Integrations with IT
service management systems like ServiceNow, BMC
Remedy, and other ticketing systems are supported.
Automatic ticket creation and assignment based on
network issues that are flagged by Cisco DNA Center are
now possible. Cisco DNA Center can even suggest
remediation steps based on the machine learning
algorithms that are part of the assurance capabilities.
You can see how a typical support workflow can be
improved with Cisco DNA Center platform in the future.
Cisco DNA Center detects a network issue, automatically
creates a ticket and assigns it to the right support
personnel, along with a possible solution to the issue.
The support team reviews the ticket and the suggested
solution and can approve either immediately or during a
maintenance window the remediation of the problem
that Cisco DNA Center suggested. IP Address
Management (IPAM) integrations are also supported by
the Integration API. It is possible to seamlessly import IP
address pools of information from IPAM systems such as
Infoblox and BlueCat into Cisco DNA Center.
Synchronization of IP pool/subpool information between
Cisco DNA Center and IPAM systems is also supported.
Through the Integration API that Cisco DNA Center
provides, developers can integrate with any third-party
IPAM solution.

Data as a service (DaaS) APIs that are part of the Integration API allow Cisco DNA Center to publish
insights and data to external tools such as Tableau and
similar reporting solutions. IT administrators have the
option to build dashboards and extract business-relevant
insights.

The Integration API is also used for cross-domain integrations with other Cisco products, such as Cisco
Meraki, Cisco Stealthwatch, and Cisco ACI. The idea is to
deliver a consistent intent-based infrastructure across
the data center, WAN, and security solutions.

Cisco DNA Center allows customers to have their non-Cisco devices managed by DNA Center through a
multivendor SDK. Cisco DNA Center communicates with
third-party devices through device packages. The device
packages are developed using the multivendor SDK and
implement southbound interfaces based on CLI, SNMP,
or NETCONF.

Cisco DNA Center also provides webhooks for events and notifications that are generated in the network or on the
Cisco DNA Center appliance itself. You have the option
of configuring a receiving URL to which the Cisco DNA
Center platform can publish events. Based on these
events, the listening application can take business
actions. For instance, if some of the devices in a network
are out of compliance, the events that the platform
generates can be interpreted by a custom application,
which might trigger a software upgrade action in Cisco
DNA Center. This completes the feedback loop in the
sense that a notification generated by Cisco DNA Center
is interpreted by a third-party custom application and
acted upon by sending either an Intent API or
Integration API call back to Cisco DNA Center to either
remedy or modify the network, based on the desired
business outcome. This mechanism of publishing events
and notifications also saves on processing time and
resources; before this capability existed, the custom
application had to poll continuously to get the status of
an event. By subscribing through the webhook, polling
can now be avoided entirely, and the custom application
receives the status of the event right when it gets
triggered.
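
To make the webhook mechanism concrete, the following minimal Python 3 sketch implements a receiving application using only the standard library. The port, the payload fields, and the remediation comment are illustrative assumptions, not values mandated by Cisco DNA Center:

```python
# Sketch: a webhook receiver that collects events POSTed by Cisco DNA
# Center. The event fields and port are hypothetical; a real receiver
# would validate and act on the notification payload.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RECEIVED_EVENTS = []  # events collected by the listening application

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON event body sent to the receiving URL.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        RECEIVED_EVENTS.append(event)
        # A real application would inspect the event here and, for
        # example, call back into the Intent API to remediate the issue.
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# To run the receiver, Cisco DNA Center would be configured with this
# server's URL as the receiving URL, for example:
#   HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```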

Next, let’s focus on the Cisco DNA Center Platform Intent API. As of this writing, the Intent API is organized
into several distinct categories:

Know Your Network category: This category contains API calls pertaining to sites, networks, devices, and clients:

With the Site Hierarchy Intent API, you can get information about,
create, update, and delete sites as well as assign devices to a
specific site. (Sites within Cisco DNA Center are logical groupings
of network devices based on a geographic location or site.)

The Network Health Intent API retrieves data regarding network devices, their health, and how they are connected.

The Network Device Detail Intent API retrieves detailed information about devices. Different parameters can be passed to
limit the scope of the information returned by the API, such as
timestamp, MAC address, and UUID. Besides all the detailed
information you can retrieve for all the devices in the network, you
can also add, delete, update, or sync specified devices.

The Client Health Intent API returns overall client health information for both wired and wireless clients.

The Client Detail Intent API returns detailed information about a single client.

Site Management category: This category helps provision enterprise networks with zero-touch deployments and manage the
activation and distribution of software images in the network:

The Site Profile Intent API gives you the option to provision NFV
and ENCS devices as well as retrieve the status of the provisioning
activities.

The Software Image Management (SWIM) API enables you to completely manage the lifecycle of software images running within
a network in an automated fashion. With this API, you can retrieve
information about available software images, import images into
Cisco DNA Center, distribute images to devices, and activate
software images that have been deployed to devices.

The Plug and Play (PnP) API enables you to manage all PnP-
related workflows. With this API, you can create, update, and
delete PnP workflows and PnP server profiles, claim and unclaim
devices, add and remove virtual accounts, and retrieve information
about all PnP-related tasks.

Connectivity category: This category contains APIs that provide mechanisms to configure and manage both non-fabric wireless and
Cisco SDA wired fabric devices. For fabric devices, you can add and
remove border devices to the fabric and get details about their status.
For non-fabric wireless devices, you can create, update, and delete
wireless SSIDs, profiles, and provisioning activities.

Operational Tools category: This category includes APIs for the most commonly used tools in the Cisco DNA Center toolbelt:

The Command Runner API enables the retrieval of all valid keywords that Command Runner accepts and allows you to run
read-only commands on devices to get their real-time
configuration.

The Network Discovery API provides access to the discovery functionalities of Cisco DNA Center. You can use this API to create,
update, delete, and manage network discoveries and the
credentials needed for them. You can also retrieve network
discoveries, network devices that were discovered as part of a
specific network discovery task, and credentials associated with
these discoveries.

The Template Programmer API can be used to manage configuration templates. You can create, view, edit, delete, version,
add commands, check contents for errors, deploy, and check the
status of template deployments.

The Path Trace API provides access to the Path Trace application
in Cisco DNA Center. Path Trace can be used to troubleshoot and
trace application paths throughout the network and provide
statistics at each hop. The API gives you access to initiating,
retrieving, and deleting path traces.

The File API enables you to retrieve files such as digital certificates,
maps, and SWIM files from Cisco DNA Center.

The Task API provides information about the network actions that
are being run asynchronously. Each of these background actions
can take from seconds to minutes to complete, and each has a task
associated with it. You can query the Task API about the
completion status of these tasks, get the task tree, retrieve tasks by
their IDs, and so on.

The Tag API gives you the option of creating, updating, and
deleting tags as well as assigning tags to specific devices. Tags are
very useful in Cisco DNA Center; they are used extensively to group
devices by different criteria. You can then apply policies and
provision and filter these groups of devices based on their tags.

The Cisco DNA Center platform APIs are rate limited to five API requests per minute.
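
Because of this limit, automation scripts should pace their calls. The following Python 3 sketch is illustrative only (it is not part of any Cisco SDK); it enforces a minimum interval between requests, with the clock and sleep functions injectable so the logic is easy to test:

```python
# Sketch: a simple client-side throttle for the five-requests-per-minute
# limit. Five requests per minute means at most one request every
# 12 seconds.
import time

class Throttle:
    def __init__(self, min_interval, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = min_interval  # seconds between requests
        self.clock = clock
        self.sleep = sleep
        self.last_call = None

    def wait(self):
        """Block until at least min_interval has passed since the last call."""
        now = self.clock()
        if self.last_call is not None:
            remaining = self.min_interval - (now - self.last_call)
            if remaining > 0:
                self.sleep(remaining)
                now = self.clock()
        self.last_call = now

# Call throttle.wait() before each API request.
throttle = Throttle(60 / 5)
```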

So far in this section, we’ve covered all the APIs and the
multivendor SDK offered by Cisco DNA Center. Next, we
will start exploring the Intent API, using Cisco DNA
Center version 1.3 for the rest of the chapter. As API
resources and endpoints exposed by the Cisco DNA
Center platform might change in future versions of the
software, it is always best to start exploring the API
documentation for any Cisco product at
https://developer.cisco.com/docs/dna-center/api/1-3-0-
x/.

In Cisco DNA Center version 1.3, the REST API is not enabled by default. Therefore, you need to log in to DNA
Center with a super-admin role account, navigate to
Platform > Manage > Bundles, and enable the DNA
Center REST API bundle. The status of the bundle
should be active, as shown in Figure 8-5.

Figure 8-5 Cisco DNA Center Platform Interface

For this section, you can use the always-on DevNet Sandbox for Cisco DNA Center at
https://sandboxdnac2.cisco.com. The username for this
sandbox is devnetuser, and the password is Cisco123!.
You need to get authorized to the API and get the token
that you will use for all subsequent API calls. The Cisco
DNA Center platform API authorization is based on basic
auth. Basic auth, as you learned in Chapter 7, is an
authorization type that requires a username and
password to access an API endpoint. In the case of Cisco
DNA Center, the username and password mentioned
previously are base-64 encoded and then transmitted to
the API service in the Authorization header. There are many online services that can do both encoding and decoding of base-64 data for you, or as a fun challenge you can look up how to do it manually. The username devnetuser and the password Cisco123! become ZGV2bmV0dXNlcjpDaXNjbzEyMyE= when they are base-64 encoded.
resource that you need to send the authorization request
to. You verify the documentation and see that the authorization resource is /dna/system/api/v1/auth/token and requires the API call to be POST. With this information, you can build the authorization API endpoint, which becomes https://sandboxdnac2.cisco.com/dna/system/api/v1/auth/token.
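
You can reproduce the encoding yourself with a few lines of Python 3 using only the standard library:

```python
# Producing the basic auth value carried in the Authorization header.
# The credentials are the DevNet Sandbox ones given in the text.
import base64

credentials = "devnetuser:Cisco123!"
encoded = base64.b64encode(credentials.encode("ascii")).decode("ascii")
print("Authorization: Basic " + encoded)
# encoded is ZGV2bmV0dXNlcjpDaXNjbzEyMyE=
```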

Next, you will use curl, a command-line tool that is useful in testing REST APIs and web services. Armed
with the authorization API endpoint, the Authorization
header containing the base-64-encoded username and
password, and the fact that the API call needs to be a
POST call, you can now craft the authorization request in
curl. The authorization API call with curl should look as
follows:


curl -X POST \
  https://sandboxdnac2.cisco.com/dna/system/api/v1/auth/token \
  -H 'Authorization: Basic ZGV2bmV0dXNlcjpDaXNjbzEyMyE='

The result should be JSON formatted with the key Token and a value containing the actual authorization token. It
should look similar to the following:


{"Token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiI1Y2U3M-

TJiMDhlZTY2MjAyZmEyZWI4ZjgiLCJhdXRoU291cmNlIjoiaW50ZXJuYWwiL-

CJ0ZW5hbnROYW1lIjoiVE5UMCIsInJvbGVzIjpbIjViNmNmZGZmNDMwOTkwM-
DA4OWYwZmYzNyJdLCJ0ZW5hbnRJZCI6IjViNmNmZGZjNDMwOTkwMDA4OWYwZ-

mYzMCIsImV4cCI6MTU2NjU0Mzk3OCwidXNlcm5hbWUiOiJkZXZuZXR1c2VyIn0.

Qv6vU6d1tqFGx9GETj6SlDa8Ts6uJNk9624onLSNSnU"}

Next, you can obtain an authorization token by using Postman. Make sure you select POST as the verb for the
authorization API call and the endpoint
https://sandboxdnac2.cisco.com/dna/system/api/v1/au
th/token. This is a POST call because you are creating
new data in the system—in this case, a new authorization
token. Under the Authorization tab, select Basic Auth as
the type of authorization, and in the Username and
Password fields, make sure you have the correct
credentials (devnetuser and Cisco123!). Since you have
selected basic auth as the authorization type, Postman
does the base-64 encoding for you automatically. All you
need to do is click Send, and the authorization API call is
sent to the specified URL. If the call is successfully
completed, the status code should be 200 OK, and the
body of the response should contain the JSON-formatted
token key and the corresponding value. Figure 8-6 shows
the Postman client interface with the information needed
to successfully authenticate to the always-on Cisco
DevNet DNA Center Sandbox.

Figure 8-6 Authenticating to Cisco DNA Center over the REST API

The body of the response for the Postman request should
look as follows:


"Token":

"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiI1
Y2U3MTJiMDhlZ-
TY2MjAyZmEyZWI4ZjgiLCJhdXRoU291cmNlIjoiaW50ZXJuYWw
iLCJ0ZW5hbnROY-
W1lIjoiVE5UMCIsInJvbGVzIjpbIjViNmNmZGZmNDMwOTkwMDA
4OWYwZmYzNyJdL-
CJ0ZW5hbnRJZCI6IjViNmNmZGZjNDMwOTkwMDA4OWYwZmYzMCI
sImV4cCI6MTU2N-
jU5NzE4OCwidXNlcm5hbWUiOiJkZXZuZXR1c2VyIn0.ubXSmZY
rI-yoCWmzCSY486y-
HWhwdTlnrrWqYip5lv6Y"

As with the earlier curl example, this token will be used in all subsequent API calls performed in the rest of this
chapter. The token will be passed in the API calls
through a header that is called X-Auth-Token.
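
The whole token workflow can also be sketched in Python 3. The helper functions below are illustrative assumptions (only the endpoint and header names come from the text), and the live call relies on the third-party requests library:

```python
# Sketch: obtain a Cisco DNA Center token with basic auth, then reuse it
# in the X-Auth-Token header for subsequent Intent API calls.

DNAC = "https://sandboxdnac2.cisco.com"  # always-on DevNet Sandbox

def auth_headers(token):
    """Headers for any Intent API call once a token has been obtained."""
    return {"X-Auth-Token": token, "Content-Type": "application/json"}

def get_token(username, password):
    """POST to the auth endpoint (needs network access and requests)."""
    import requests  # third-party library, imported only when needed
    resp = requests.post(DNAC + "/dna/system/api/v1/auth/token",
                         auth=(username, password))
    resp.raise_for_status()
    return resp.json()["Token"]
```

A script would call get_token("devnetuser", "Cisco123!") once and pass auth_headers(token) to every later request.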

Let’s now get a list of all the network devices that are
being managed by the instance of Cisco DNA Center that
is running in the always-on DevNet Sandbox you’ve just
authorized with. If you verify the Cisco DNA Center API
documentation on
https://developer.cisco.com/docs/dna-center/api/1-3-0-
x/, you can see that the API resource that will return a
complete list of all network devices managed by Cisco
DNA Center is /dna/intent/api/v1/network-device.
Figure 8-7 shows the online documentation for Cisco
DNA Center version 1.3.
Figure 8-7 Cisco DNA Center Platform API
Documentation (https://developer.cisco.com)

With all this information in mind, you can craft the curl
request to obtain a list of all the network devices
managed by the Cisco DevNet always-on DNA Center
Sandbox. The complete URL is
https://sandboxdnac2.cisco.com/dna/intent/api/v1/net
work-device. You need to retrieve information through
the API, so you need to do a GET request; don’t forget the
X-Auth-Token header containing the authorization
token. The curl command should look as follows, and it
should contain a valid token:


curl -X GET \
  https://sandboxdnac2.cisco.com/dna/intent/api/v1/network-device \

-H 'X-Auth-Token:
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.

eyJzdWIiOiI1Y2U3MTJiMDhlZTY2MjAyZmEyZWI4ZjgiLCJhdXRoU291c-

mNlIjoiaW50ZXJuYWwiLCJ0ZW5hbnROYW1lIjoiVE5UMCIsInJvbGVzIjpbI-

jViNmNmZGZmNDMwOTkwMDA4OWYwZmYzNyJdLCJ0ZW5hbnRJZCI6IjViNmNmZG-

ZjNDMwOTkwMDA4OWYwZmYzMCIsImV4cCI6MTU2NjYwODAxMSwidXNlcm5hbWUiOi-

JkZXZuZXR1c2VyIn0.YXc_2o8FDzSQ1YBhUxUIoxwzYXXWYeNJRkB0oKBlIHI'
The response to this curl command should look as
shown in Example 8-5.

Example 8-5 List of Network Devices Managed by the Always-On Cisco DNA Center Sandbox Instance

{
  "response" : [
    {
      "type" : "Cisco 3504 Wireless LAN Controller",
      "roleSource" : "AUTO",
      "apManagerInterfaceIp" : "",
      "lastUpdateTime" : 1566603156991,
      "inventoryStatusDetail" : "<status><general code=\"SUCCESS\"/></status>",
      "collectionStatus" : "Managed",
      "serialNumber" : "FCW2218M0B1",
      "location" : null,
      "waasDeviceMode" : null,
      "tunnelUdpPort" : "16666",
      "reachabilityStatus" : "Reachable",
      "lastUpdated" : "2019-08-23 23:32:36",
      "tagCount" : "0",
      "series" : "Cisco 3500 Series Wireless LAN Controller",
      "snmpLocation" : "",
      "upTime" : "158 days, 13:59:36.00",
      "lineCardId" : null,
      "id" : "50c96308-84b5-43dc-ad68-cda146d80290",
      "reachabilityFailureReason" : "",
      "lineCardCount" : null,
      "managementIpAddress" : "10.10.20.51",
      "memorySize" : "3735302144",
      "errorDescription" : null,
      "snmpContact" : "",
      "family" : "Wireless Controller",
      "platformId" : "AIR-CT3504-K9",
      "role" : "ACCESS",
      "softwareVersion" : "8.5.140.0",
      "hostname" : "3504_WLC",
      "collectionInterval" : "Global Default",
      "bootDateTime" : "2019-01-19 02:33:05",
      "instanceTenantId" : "SYS0",
      "macAddress" : "50:61:bf:57:2f:00",
      "errorCode" : null,
      "locationName" : null,
      "softwareType" : "Cisco Controller",
      "associatedWlcIp" : "",
      "instanceUuid" : "50c96308-84b5-43dc-ad68-cda146d80290",
      "interfaceCount" : "8"
    },
    ... omitted output
  ],
  "version" : "1.0"
}

From this verbose response, you can extract some very important information about the network devices
managed by Cisco DNA Center. The data returned in the
response is too verbose to fully include in the previous
output, so just a snippet is included for your reference.
As of this writing, in the complete response output, you
can see that there are 14 devices in this network:

One AIR-CT3504-K9 wireless LAN controller

One WS-C3850-24P-L Catalyst 3850 switch

Two C9300-48U Catalyst 9300 switches

Ten AIR-AP141N-A-K9 wireless access points

For each device, you can see extensive information such as the hostname, uptime, serial number, software
version, management interface IP address, reachability
status, hardware platform, and role in the network. You
can see here the power of the Cisco DNA Center platform
APIs. With one API call, you were able to get a complete
status of all devices in the network. Without a central
controller like Cisco DNA Center, it would have taken
several hours to connect to each device individually and
run a series of commands to obtain the same information
that was returned with one API call in less than half a
second. These APIs can save vast amounts of time and
bring numerous possibilities in terms of infrastructure
automation. The data returned by the API endpoints can
be extremely large, and it might take a long time to
process a request and return a complete response. As
mentioned in Chapter 7, pagination is an API feature that
allows for passing in parameters to limit the scope of the
request. Depending on the Cisco DNA Center platform
API request, different filter criteria can be considered,
such as management IP address, MAC address, and
hostname.
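
As an illustration, the following Python 3 sketch builds a filtered network-device endpoint URL from such criteria. The exact query parameter names are something to confirm in the Cisco DNA Center API documentation; treat the ones below as assumptions based on the filter criteria mentioned in the text:

```python
# Sketch: appending filter criteria to the network-device endpoint as
# query parameters to limit the scope of the request.
from urllib.parse import urlencode

BASE = "https://sandboxdnac2.cisco.com/dna/intent/api/v1/network-device"

def filtered_url(**filters):
    """Return the endpoint URL with any filter criteria as query params."""
    if not filters:
        return BASE
    return BASE + "?" + urlencode(filters)

print(filtered_url(managementIpAddress="10.10.20.51"))
```

A request to such a URL (with the X-Auth-Token header) would return only the devices matching the given criteria.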

Now you will see how to obtain the same information you
just got with curl but now using Postman. The same API
endpoint URL is used:
https://sandboxdnac2.cisco.com/dna/intent/api/v1/net
work-device. In this case, it is a GET request, and the X-
Auth-Token header is specified under the Headers tab
and populated with a valid token. If you click Send and
there aren’t any mistakes with the request, the status
code should be 200 OK, and the body of the response
should be very similar to that obtained with the curl
request. Figure 8-8 shows how the Postman interface
should look in this case.

Figure 8-8 Getting a List of Network Devices

Now you can try to obtain some data about the clients
that are connected to the network managed by Cisco
DNA Center. Much like network devices, network clients
have associated health scores, provided through the
Assurance feature to get a quick overview of client
network health. This score is based on several factors,
including onboarding time, association time, SNR
(signal-to-noise ratio), and RSSI (received signal
strength indicator) values for wireless clients,
authentication time, connectivity and traffic patterns,
and number of DNS requests and responses. In the API
documentation, you can see that the resource providing
the health status of all clients connected to the network is
/dna/intent/api/v1/client-health. This API call requires
a parameter to be specified when performing the call.
This parameter, called timestamp, represents the UNIX
epoch time in milliseconds. UNIX epoch time is a system
for describing a point in time since January 1, 1970, not
counting leap seconds. It is extensively used in UNIX
and many other operating systems. The timestamp
provides the point in time for which the client health
information should be returned in the API response. For
example, if I retrieved the health status of all the clients
connected to the network on Thursday, August 22, 2019
8:41:29 PM GMT, the UNIX time, in milliseconds, would
be 1566506489000. Keep in mind that, based on the data
retention policy set in Cisco DNA Center, client data
might not be available for timeframes too far in the past.
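To build such a timestamp yourself, a timezone-aware datetime can be converted to UNIX epoch milliseconds with the standard library; a minimal sketch:

```python
from datetime import datetime, timezone

def to_epoch_ms(dt):
    """Convert a timezone-aware datetime to UNIX epoch milliseconds."""
    return int(dt.timestamp() * 1000)

# Thursday, August 22, 2019 8:41:29 PM GMT
TS = to_epoch_ms(datetime(2019, 8, 22, 20, 41, 29, tzinfo=timezone.utc))
print(TS)  # 1566506489000
```

The returned value is exactly what gets passed as the timestamp query parameter in the API call.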

With the information you now have, you can build the
API endpoint to process the API call:
https://sandboxdnac2.cisco.com/dna/intent/api/v1/client-health?timestamp=1566506489000. The
authorization token also needs to be included in the call
as a value in the X-Auth-Token header. The curl
command should look as follows:

curl -X GET \
  'https://sandboxdnac2.cisco.com/dna/intent/api/v1/client-health?timestamp=1566506489000' \
  -H 'X-Auth-Token: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiI1Y2U3MTJiMDhlZTY2MjAyZmEyZWI4ZjgiLCJhdXRoU291cmNlIjoiaW50ZXJuYWwiLCJ0ZW5hbnROYW1lIjoiVE5UMCIsInJvbGVzIjpbIjViNmNmZGZmNDMwOTkwMDA4OWYwZmYzNyJdLCJ0ZW5hbnRJZCI6IjViNmNmZGZjNDMwOTkwMDA4OWYwZmYzMCIsImV4cCI6MTU2NjYxODkyOCwidXNlcm5hbWUiOiJkZXZuZXR1c2VyIn0.7JNXdgSMi3Bju8v8QU_L5nmBKYOTivinAjP8ALT_opw'

The API response should look similar to Example 8-6.

From this response, you can see that there are a total of
82 clients in the network, and the average health score
for all of them is 27. To further investigate why the
health scores for some of the clients vary, you can look
into the response to the /dna/intent/api/v1/client-detail
call. This API call takes as input parameters the
timestamp and the MAC address of the client, and it
returns extensive data about the status and health of that
specific client at that specific time.

Now you can try to perform the same API call but this
time with Postman. The API endpoint stays the same:
https://sandboxdnac2.cisco.com/dna/intent/api/v1/client-health?timestamp=1566506489000. In this case, you
are trying to retrieve information from the API, so it will
be a GET call, and the X-Auth-Token header contains a
valid token value. Notice that the Params section of
Postman gets automatically populated with a timestamp
key, with the value specified in the URL:
1566506489000. Click Send, and if there aren’t any
errors with the API call, the body of the response should
be very similar to the one obtained previously with curl.
The Postman window for this example should look as
shown in Figure 8-9.
Figure 8-9 Viewing Client Health in Cisco DNA Center

Example 8-6 List of Clients and Their Status



{
"response" : [ {
"siteId" : "global",
"scoreDetail" : [ {
"scoreCategory" : {
"scoreCategory" : "CLIENT_TYPE",
"value" : "ALL"
},
"scoreValue" : 27,
"clientCount" : 82,
"clientUniqueCount" : 82,
"starttime" : 1566506189000,
"endtime" : 1566506489000,
"scoreList" : [ ]
}, ... output omitted
}

So far in this section, you’ve explored the Cisco DNA
Center platform API and seen how to authorize the API
calls and obtain a token, how to get a list of all the
devices in the network, and how to get health statistics
for all the clients in the network using both curl and
Postman. Next let’s explore the Cisco DNA Center
Python SDK. The SDK has been developed for Python 3
and maps all the Cisco DNA Center APIs into Python
classes and methods to make it easier for developers to
integrate with and expand the functionality of Cisco DNA
Center. Installing the SDK is as simple as issuing the
command pip install dnacentersdk from the
command prompt. At this writing, the current Cisco
DNA Center SDK version is 1.3.0. The code in Example
8-7 was developed using this version of the SDK and
Python 3.7.4. The code in Example 8-7 uses the SDK to
first authorize to the API and then retrieve a list of all the
network devices and client health statuses.

Example 8-7 Python Script That Exemplifies the Use of the Cisco DNA Center Python SDK

#! /usr/bin/env python
from dnacentersdk import api

# Create a DNACenterAPI connection object;
# it uses the DNA Center sandbox URL, username, and password
DNAC = api.DNACenterAPI(username="devnetuser",
                        password="Cisco123!",
                        base_url="https://sandboxdnac2.cisco.com")

# Find all devices
DEVICES = DNAC.devices.get_device_list()

# Print select information about the retrieved devices
print('{0:25s}{1:1}{2:45s}{3:1}{4:15s}'.format("Device Name", "|", \
      "Device Type", "|", "Up Time"))
print('-'*95)
for DEVICE in DEVICES.response:
    print('{0:25s}{1:1}{2:45s}{3:1}{4:15s}'.format(DEVICE.hostname, \
          "|", DEVICE.type, "|", DEVICE.upTime))
print('-'*95)

# Get the health of all clients on Thursday,
# August 22, 2019 8:41:29 PM GMT
CLIENTS = DNAC.clients.get_overall_client_health(timestamp="1566506489000")

# Print select information about the retrieved
# client health statistics
print('{0:25s}{1:1}{2:45s}{3:1}{4:15s}'.format("Client Category", "|", \
      "Number of Clients", "|", "Clients Score"))
print('-'*95)
for CLIENT in CLIENTS.response:
    for score in CLIENT.scoreDetail:
        print('{0:25s}{1:1}{2:<45d}{3:1}{4:<15d}'.format(
              score.scoreCategory.value, "|", score.clientCount, "|", \
              score.scoreValue))
print('-'*95)

First, this example imports the api class from
dnacentersdk. Next, it instantiates the api class and
creates a connection to the always-on Cisco DevNet
Sandbox DNA Center instance and stores the result of
that connection in a variable called DNAC. If the
connection was successfully established, the DNAC
object has all the API endpoints mapped to methods that
are available for consumption.
DNAC.devices.get_device_list() provides a list of all
the devices in the network and stores the result in the
DEVICES dictionary. The same logic applies to the
client health status.
DNAC.clients.get_overall_client_health(timesta
mp='1566506489000') returns a dictionary of health
statuses for all the clients at that specific time, which
translates to Thursday, August 22, 2019 8:41:29 PM
GMT. When the data is extracted from the API and
stored in the variables named DEVICES for all the
network devices and CLIENTS for all the client health
statuses, a rudimentary table is displayed in the console,
with select information from both dictionaries. For the
DEVICES variable, only the device name, device type,
and device uptime are displayed, and for the CLIENTS
variable, only the client health category, number of
clients, and client score for each category are displayed.
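The column alignment in the script comes from Python’s format specification mini-language: the number after the colon sets the field width, s and d select string and integer types, and < forces left alignment. A small standalone illustration:

```python
# A 10-wide string column (strings are left-aligned by default),
# a 1-wide separator column, and an 8-wide left-aligned integer column
row = '{0:10s}{1:1}{2:<8d}'.format("leaf1", "|", 42)
print(repr(row))
```

Each field is padded to its declared width, so successive rows line up into the rudimentary table the script prints.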
The output of the Python script should look similar to
that in Figure 8-10.

Figure 8-10 Output of the Python Script from Example 8-7

CISCO SD-WAN
Cisco SD-WAN (Software-Defined Wide Area Network)
is a cloud-first architecture for deploying WAN
connectivity. Wide-area networks have been deployed for
a long time, and many lessons and best practices have
been learned throughout the years. Applying all these
lessons to software-defined networking (SDN) resulted
in the creation of Cisco SD-WAN. An important feature
of SDN is the separation of the control plane from the
data plane.

The control plane includes a set of protocols and features
that a network device implements so that it can
determine which network path to use to forward data
traffic. Spanning tree protocols and routing protocols
such as OSPF (Open Shortest Path First), EIGRP
(Enhanced Interior Gateway Routing Protocol), and BGP
(Border Gateway Protocol) are some of the protocols that
make up the control plane in network devices. These
protocols help build the switching or routing tables in
network devices to enable them to determine how to
forward network traffic.
The data plane includes the protocols and features that a
network device implements to forward traffic to its
destination as quickly as possible. Cisco Express
Forwarding (CEF) is a proprietary switching mechanism
that is part of the data plane. It was developed
specifically to increase the speed with which data traffic
is forwarded through network devices. You can read
more about the control and data planes in Chapter 17,
“Networking Components.”

Historically, the control plane and data plane were part
of the network device architecture, and they worked
together to determine the path that the data traffic
should take through the network and how to move this
traffic as fast as possible from its source to its
destination. As mentioned previously, software-defined
networking (SDN) suggests a different approach.

SDN separates the functionality of the control plane and
data plane into different devices, and several benefits
result. First, the cost of the resulting network should be
lower as not all network devices have to implement
expensive software and hardware features to
accommodate both a control plane and data plane. The
expensive intelligence from the control plane is
constrained to a few devices that become the brains of
the network, and the data plane is built with cheaper
devices that implement only fast forwarding. Second, the
convergence of this new network, which is the amount of
time it takes for all devices to agree on a consistent view
of the network, should be much lower than in the case of
the non-SDN architectures of the past. In networks of
similar sizes, the ones built with network devices that
implement both the control plane and the data plane in
their architecture take much longer to exchange all the
information needed to forward data traffic than do the
networks that implement separate control and data
plane functionality in their architecture. Depending on
the size of a network, this could mean waiting for
thousands of devices to exchange information through
their control plane protocols and settle on a certain view
of the network, or waiting for tens of SDN controllers to
accomplish the same task; the convergence time
improvements are massive.

Cisco currently has two SD-WAN offerings. The first one,
based on the Viptela acquisition, is called Cisco SD-WAN;
the second one, based on the Meraki acquisition,
is called Meraki SD-WAN. We already covered Cisco
Meraki at the beginning of this chapter; this section
covers Cisco SD-WAN based on the Viptela acquisition.

You’ve already seen some of the advantages that SDN
brings to WAN connectivity. Based on this new
architecture and paradigm, the Cisco SD-WAN offering
contains several products that perform different
functions:

vManage: Cisco vManage is a centralized network management
system that provides a GUI and REST API interface to the SD-WAN
fabric. You can easily manage, monitor, and configure all Cisco SD-
WAN components through this single pane of glass.

vSmart: Cisco vSmart is the brains of the centralized control plane for
the overlay SD-WAN network. It maintains a centralized routing table
and centralized routing policy that it propagates to all the network Edge
devices through permanent DTLS tunnels.

vBond: Cisco vBond is the orchestrator of the fabric. It authenticates
the vSmart controllers and the vEdge devices and coordinates
connectivity between them. The vBond orchestrator is the only
component in the SD-WAN fabric that needs public IP reachability to
ensure that all devices can connect to it.

vEdge: Cisco vEdge routers, as the name implies, are Edge devices that
are located at the perimeter of the fabric, such as in remote offices, data
centers, branches, and campuses. They represent the data plane and
bring the whole fabric together and route traffic to and from their site
across the overlay network.

All the components of the Cisco SD-WAN fabric run as
virtual appliances, and the vEdges are also available as
hardware routers.
Separating the WAN fabric this way makes it more
scalable, faster to converge, and cheaper to deploy and
maintain. On top of a transport-independent underlay
that supports all types of WAN circuits (MPLS, DSL,
broadband, 4G, and so on), an overlay network is being
built that runs OMP (Overlay Management Protocol).
Much like BGP, OMP propagates throughout the
network all the routing information needed for all the
components of the fabric to be able to forward data
according to the routing policies configured in vManage.

Cisco vManage provides a REST API interface that
exposes the functionality of the Cisco SD-WAN software
and hardware features. The API resources that are
available through the REST interface are grouped in the
following collections:

Administration: For management of users, groups, and the local
vManage instance

Certificate Management: For management of SSL certificates and
security keys

Configuration: For creation of feature and device configuration
templates and creation and configuration of vManage clusters

Device Inventory: For collecting device inventory information,
including system status

Monitoring: For getting access to status, statistics, and related
operational information, collected from all the devices in the
network every 10 minutes

Real-Time Monitoring: For gathering real-time monitoring
statistics and traffic information approximately once per second

Troubleshooting Tools: For API calls used in troubleshooting, such
as to determine the effects of applying a traffic policy, updating
software, or retrieving software version information

Cisco vManage exposes a self-documenting web interface
for the REST API, based on the OpenAPI specification.
This web interface is enabled by default and can be
accessed at
https://vManage_IP_or_hostname:port/apidocs.
vManage_IP_or_hostname is the IP address or
hostname of the Cisco vManage server, and the port is
8443 by default. The rest of this chapter uses the always-
on Cisco DevNet SD-WAN Sandbox, available at
https://sandboxsdwan.cisco.com. The username for this
vManage server is devnetuser, and the password is
Cisco123!. At this writing, this sandbox is running Cisco
SD-WAN version 18.3 for all components of the fabric.
Because this sandbox will be updated in time, or you
might be using a different version, you should check
https://developer.cisco.com for the latest information on
all Cisco REST APIs documentation and changes. After
you specify the credentials, the self-documenting web
interface of
https://sandboxsdwan.cisco.com:8443/apidocs looks as
shown in Figure 8-11.

Figure 8-11 Cisco SD-WAN OpenAPI Specification-Based Interface

This web interface displays a list of all the REST API
resources available, the methods associated with each
one of them, and the model schema of the responses. The
option of trying out each API call and exploring the
returned data is also available.
Let’s explore the Cisco vManage REST API next. The API
documentation can be found at https://sdwan-docs.cisco.com/Product_Documentation/Command_Reference/Command_Reference/vManage_REST_APIs.
At this link, you can find all the information needed on
how to interact with the REST API, all the resources
available, and extensive explanations.

You need to establish a connection to the Cisco vManage
instance. The initial connection is established through an
authorization request to
https://sandboxsdwan.cisco.com:8443/j_security_check.
The information sent over this POST call is URL form
encoded and contains the username and password
mentioned previously. The curl request for this
authorization request should look as follows:

curl -c - -X POST -k \
  https://sandboxsdwan.cisco.com:8443/j_security_check \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'j_username=devnetuser&j_password=Cisco123!'

The -c - option passed to the curl request specifies that
the returned authorization cookie should be printed to
the console. The -k option bypasses SSL certificate
verification as the certificate for this sandbox is
self-signed. The output of the command should look as
follows:

# Netscape HTTP Cookie File
# https://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
#HttpOnly_sandboxsdwan.cisco.com.	FALSE	/	TRUE	0	JSESSIONID	v9QcTVL_ZBdIQZRsI2V95vBi7Bz47IMxRY3XAYA6.4854266f-a8ad-4068-9651-d4e834384f51

The long string after JSESSIONID is the value of the
authorization cookie that will be needed in all
subsequent API calls.
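As a sketch, assuming the tab-separated Netscape cookie-jar layout shown above (domain, subdomain flag, path, secure flag, expiry, name, value), the cookie name and value can be pulled out for reuse in a Cookie header; the line below is modeled on the sandbox output:

```python
# A cookie-jar line modeled on the curl output shown above;
# fields in this format are separated by tabs
line = ("#HttpOnly_sandboxsdwan.cisco.com.\tFALSE\t/\tTRUE\t0\t"
        "JSESSIONID\tv9QcTVL_ZBdIQZRsI2V95vBi7Bz47IMxRY3XAYA6."
        "4854266f-a8ad-4068-9651-d4e834384f51")

# The last two fields are the cookie name and value
name, value = line.split("\t")[-2:]
cookie_header = "{}={}".format(name, value)
print(cookie_header)
```

The resulting name=value string is exactly what gets passed to curl in the Cookie header in the requests that follow.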

Figure 8-12 shows the same API call in Postman.

Figure 8-12 Cisco SD-WAN REST API Authorization Call

The status code of the response should be 200 OK, the
body should be empty, and the JSESSIONID cookie
should be stored under the Cookies tab. The advantage
with Postman is that it automatically saves the
JSESSIONID cookie and reuses it in all API calls that
follow this initial authorization request. With curl, in
contrast, you have to pass in the cookie value manually.
To see an example, you can try to get a list of all the
devices that are part of this Cisco SD-WAN fabric.
According to the documentation, the resource that will
return this information is /dataservice/device. It will
have to be a GET call, and the JSESSIONID cookie needs
to be passed as a header in the request. The curl
command to get a list of all the devices in the fabric
should look as follows:

curl -X GET -k \
  https://sandboxsdwan.cisco.com:8443/dataservice/device \
  -H 'Cookie: JSESSIONID=v9QcTVL_ZBdIQZRsI2V95vBi7Bz47IMxRY3XAYA6.4854266f-a8ad-4068-9651-d4e834384f51'

The response from the always-on Cisco DevNet vManage
server should look similar to the one in Example 8-8.

Example 8-8 List of Devices That Are Part of the Cisco SD-WAN Fabric

{
... omitted output
"data" : [
{
"state" : "green",
"local-system-ip" : "4.4.4.90",
"status" : "normal",
"latitude" : "37.666684",
"version" : "18.3.1.1",
"model_sku" : "None",
"connectedVManages" : [
"\"4.4.4.90\""
],
"statusOrder" : 4,
         "uuid" : "4854266f-a8ad-4068-9651-d4e834384f51",
"deviceId" : "4.4.4.90",
"reachability" : "reachable",
"device-groups" : [
"\"No groups\""
],
"total_cpu_count" : "2",
"certificate-validity" : "Valid",
"board-serial" : "01",
"platform" : "x86_64",
"device-os" : "next",
"timezone" : "UTC",
"uptime-date" : 1567111260000,
"host-name" : "vmanage",
"device-type" : "vmanage",
"personality" : "vmanage",
"domain-id" : "0",
"isDeviceGeoData" : false,
"lastupdated" : 1567470387553,
"site-id" : "100",
"controlConnections" : "5",
"device-model" : "vmanage",
"validity" : "valid",
"system-ip" : "4.4.4.90",
         "state_description" : "All daemons up",
"max-controllers" : "0",
"layoutLevel" : 1,
"longitude" : "-122.777023"
},
... omitted output
]
}

The output of this GET API call is too verbose to be fully
displayed. We encourage you to explore this API call and
observe the full response on your own computer.

The body of the response is in JSON format and contains
information about all the devices in the SD-WAN fabric.
As of this writing, this specific fabric contains the
following:

One Cisco vManage server

One Cisco vSmart server

One Cisco vBond server

Four Cisco vEdge routers

For each device, the response includes status, geographic
coordinates, role, device ID, uptime, site ID, SSL
certificate status, and more. You can build the same
request in Postman and send it to vManage as shown in
Figure 8-13. The body of the response is very similar to
the one received from the curl command.
Figure 8-13 Getting a List of All the Devices in the Cisco SD-WAN Fabric
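The JSON body can also be processed programmatically. Here is a minimal sketch that deserializes a response shaped like Example 8-8 (trimmed to a single device for brevity) and prints a few selected fields:

```python
import json

# A trimmed sample modeled on the Example 8-8 response body
RAW = '''{"data": [{"host-name": "vmanage", "device-type": "vmanage",
                    "system-ip": "4.4.4.90", "site-id": "100",
                    "reachability": "reachable"}]}'''

# The list of devices lives under the top-level "data" key
devices = json.loads(RAW)["data"]
for dev in devices:
    print(dev["host-name"], dev["device-type"], dev["system-ip"])
```

The same pattern of loading the response content and indexing into the data key is what the full script in Example 8-10 uses.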

While exploring the Cisco SD-WAN REST API, let’s get a
list of all the device templates that are configured on the
Cisco DevNet vManage server. According to the API
documentation, the resource that will return this
information is /dataservice/template/device. You pass in
the JSESSIONID value in the cookie header and build
the following curl command:

curl -X GET -k \
  https://sandboxsdwan.cisco.com:8443/dataservice/template/device \
  -H 'Cookie: JSESSIONID=v9QcTVL_ZBdIQZRsI2V95vBi7Bz47IMxRY3XAYA6.4854266f-a8ad-4068-9651-d4e834384f51'

The response from the vManage server at
https://sandboxsdwan.cisco.com should look as shown
in Example 8-9.

Example 8-9 List of Device Configuration Templates

{
"data" : [
{
"templateDescription" : "VEDGE BASIC
TEMPLATE01",
"lastUpdatedOn" : 1538865915509,
"templateAttached" : 15,
"deviceType" : "vedge-cloud",
         "templateId" : "72babaf2-68b6-4176-92d5-fa8de58e19d8",
"configType" : "template",
"devicesAttached" : 0,
"factoryDefault" : false,
"templateName" :
"VEDGE_BASIC_TEMPLATE",
"lastUpdatedBy" : "admin"
}
],
... output omitted
}

The response contains details about the only device
template available on this vManage server. The template
is called VEDGE_BASIC_TEMPLATE, it is of type
vedge-cloud (which means it can be applied to vEdge
devices), and it currently has no devices attached to it.

The same information is returned by vManage when
using Postman to get the list of all device templates. As
before, the JSESSIONID cookie is already included with
Postman and does not need to be specified again. Figure
8-14 shows the Postman client interface with all the
parameters needed to retrieve a list of all the device
configuration templates available on a specific vManage
instance.
Figure 8-14 Getting Device Configuration Templates

Next, let’s use Python to build a script that will go
through the same steps: Log in to vManage, get a list of
all the devices in the SD-WAN fabric, and get a list of all
device templates available. No SDK will be used in this
case; this will help you see the difference between this
code and the Python code you used earlier in this
chapter. Since no SDK will be used, all the API resources,
payloads, and handling of data will have to be managed
individually.

The Python requests library will be used extensively in
this sample code. You should be familiar with this library
from Chapter 7. Example 8-10 shows a possible version
of the Python 3 script that accomplishes these tasks. The
script was developed using Python 3.7.4 and version
2.22.0 of the requests library. The json library that
comes with Python 3.7.4 was also used to deserialize and
load the data returned from the REST API into a Python
object; in this case, that object is a list. In this code, first,
the import keyword makes the two libraries requests
and json available for use within the script. Since the
connection in the script is made to an instance of
vManage that is in a sandbox environment and that uses
a self-signed SSL certificate, the third and fourth lines of
the script disable the warning messages that are
generated by the requests library when connecting to
REST API endpoints that are secured with self-signed
SSL certificates. Next, the script specifies the vManage
hostname and the username and password for this
instance; this example uses the same vManage server
used earlier in this chapter. The code then specifies the
base URL for the vManage REST API endpoint:
https://sandboxsdwan.cisco.com:8443. The code shows
the authentication resource (j_security_check) and the
login credentials, and then the login URL is built as a
combination of the base URL and the authentication API
resource. In the next line, a new requests session
instance is created and stored in the SESS variable.

Example 8-10 Python Script Showcasing How to Interact with the Cisco SD-WAN REST API

#! /usr/bin/env python
import json
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

# Specify Cisco vManage IP, username and password
VMANAGE_IP = 'sandboxsdwan.cisco.com'
USERNAME = 'devnetuser'
PASSWORD = 'Cisco123!'

BASE_URL_STR = 'https://{}:8443/'.format(VMANAGE_IP)

# Login API resource and login credentials
LOGIN_ACTION = 'j_security_check'
LOGIN_DATA = {'j_username' : USERNAME, 'j_password' : PASSWORD}
# URL for posting login data
LOGIN_URL = BASE_URL_STR + LOGIN_ACTION

# Establish a new session and connect to Cisco vManage
SESS = requests.session()
LOGIN_RESPONSE = SESS.post(url=LOGIN_URL, data=LOGIN_DATA, verify=False)

# Get list of devices that are part of the fabric and display them
DEVICE_RESOURCE = 'dataservice/device'
# URL for device API resource
DEVICE_URL = BASE_URL_STR + DEVICE_RESOURCE

DEVICE_RESPONSE = SESS.get(DEVICE_URL, verify=False)
DEVICE_ITEMS = json.loads(DEVICE_RESPONSE.content)['data']

print('{0:20s}{1:1}{2:12s}{3:1}{4:36s}{5:1}{6:16s}{7:1}{8:7s}'\
      .format("Host-Name", "|", "Device Model", "|", "Device ID", \
      "|", "System IP", "|", "Site ID"))
print('-'*105)

for ITEM in DEVICE_ITEMS:
    print('{0:20s}{1:1}{2:12s}{3:1}{4:36s}{5:1}{6:16s}{7:1}{8:7s}'\
          .format(ITEM['host-name'], "|", ITEM['device-model'], "|", \
          ITEM['uuid'], "|", ITEM['system-ip'], "|", ITEM['site-id']))
print('-'*105)

# Get list of device templates and display them
TEMPLATE_RESOURCE = 'dataservice/template/device'
# URL for device template API resource
TEMPLATE_URL = BASE_URL_STR + TEMPLATE_RESOURCE

TEMPLATE_RESPONSE = SESS.get(TEMPLATE_URL, verify=False)
TEMPLATE_ITEMS = json.loads(TEMPLATE_RESPONSE.content)['data']

print('{0:20s}{1:1}{2:12s}{3:1}{4:36s}{5:1}{6:16s}{7:1}{8:7s}'\
      .format("Template Name", "|", "Device Model", "|", "Template ID", \
      "|", "Attached devices", "|", "Template Version"))
print('-'*105)

for ITEM in TEMPLATE_ITEMS:
    print('{0:20s}{1:1}{2:12s}{3:1}{4:36s}{5:1}{6:<16d}{7:1}{8:<7d}'\
          .format(ITEM['templateName'], "|", ITEM['deviceType'], "|", \
          ITEM['templateId'], "|", ITEM['devicesAttached'], "|", \
          ITEM['templateAttached']))
print('-'*105)

Using this new session, a POST request is sent to the
login URL, containing the username and password as
payload and disabling the SSL certificate authenticity
verification by specifying verify=False. At this point, a
session is established to the DevNet Sandbox vManage
instance. This session can be used to interact with the
vManage REST API by getting, creating, modifying, and
deleting data.

The code specifies the API resource that will return a list
of all the devices in the SD-WAN fabric:
dataservice/device. The complete URL to retrieve the
devices in the fabric is built on the next line by
combining the base URL with the new resource. The
DEVICE_URL variable will look like
https://sandboxsdwan.cisco.com:8443/dataservice/devi
ce. Next, the same session that was established earlier is
used to perform a GET request to the DEVICE_URL
resource. The result of this request is stored in the
variable aptly named DEVICE_RESPONSE, which
contains the same JSON-formatted data that was
obtained in the previous curl and Postman requests,
with extensive information about all the devices that are
part of the SD-WAN fabric. From that JSON data, only
the list of devices that are values of the data key are
extracted and stored in the DEVICE_ITEMS variable.

Next, the header of a rudimentary table is created. This
header contains the fields Host-Name, Device Model,
Device ID, System IP, and Site ID. From the extensive
list of information contained in the DEVICE_ITEMS
variable, only these five fields will be extracted and
displayed to the console for each device in the fabric. The
code next prints a series of delimiting dashes to the
console to increase the readability of the rudimentary
table. The next line of code has a for loop that is used to
iterate over each element of the DEVICE_ITEMS list
and extract the hostname, device model, device ID,
system IP address, and site ID for each device in the
fabric and then display that information to the console.
The code then prints a series of dashes for readability
purposes. Next, the same logic is applied to GET data
from the API but this time about all the device templates
that are configured on this instance of vManage. The
URL is built by concatenating the base URL with the
device template resource, dataservice/template/device.
The same session is reused once more to obtain the data
from the REST API. In the case of the device templates,
only the template name, the type of device the template
is intended for, the template ID, the number of attached
devices to each template, and the template version are
extracted and displayed to the console.

If you run this script in a Python 3.7.4 virtual
environment with the requests library version 2.22.0
installed, you get output similar to that shown in Figure
8-15.

Figure 8-15 Output of the Python Script from Example 8-10

This chapter has explored several Cisco solutions and
their REST APIs. Authentication and authorization
methods have been explained, and basic information has
been obtained from the APIs. This chapter has provided
a basic introduction to these extensive APIs and the
features that they expose. We encourage you to continue
your exploration of these APIs and build your own use
cases, automation, and network programmability
projects.
EXAM PREPARATION TASKS
As mentioned in the section “How to Use This Book” in
the Introduction, you have a couple of choices for exam
preparation: the exercises here, Chapter 19, “Final
Preparation,” and the exam simulation questions on the
companion website.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted
with the Key Topic icon in the outer margin of the page.
Table 8-2 lists these key topics and the page number on
which each is found.

Table 8-2 Key Topics

Key Topic Element | Description | Page Number
List | Qualities of a good SDK | 177
List | Advantages of SDKs | 177
List | The Meraki cloud platform provides several APIs from a programmability perspective | 178
List | Cisco DNA Center REST APIs and SDKs | 190
List | Cisco SD-WAN products that perform different functions | 202

DEFINE KEY TERMS

Define the following key terms from this chapter and
check your answers in the glossary:

software development kit (SDK)
Python Enhancement Proposals (PEP)
Bluetooth Low Energy (BLE)
MQ Telemetry Transport (MQTT)
Cisco Digital Network Architecture (DNA)
data as a service (DaaS)
Software Image Management (SWIM) API
Plug and Play (PnP) API
Cisco Software-Defined WAN (SD-WAN)
Open Shortest Path First (OSPF)
Enhanced Interior Gateway Routing Protocol
(EIGRP)
Border Gateway Protocol (BGP)
Cisco Express Forwarding (CEF)
software-defined networking (SDN)
Overlay Management Protocol (OMP)
Chapter 9

Cisco Data Center and Compute Management Platforms and APIs
This chapter covers the following topics:
Cisco ACI: This section describes Cisco ACI and the APIs it exposes.

Cisco UCS Manager: This section covers Cisco UCS Manager and the
public APIs that come with it.

Cisco UCS Director: This section goes over Cisco UCS Director and
its APIs.

Cisco Intersight: This section introduces Cisco Intersight and its
REST API interface.

This chapter begins exploring Cisco data center
technologies and the SDKs and APIs that are available
with them. First, it provides an introduction to Cisco
Application Centric Infrastructure (ACI) and its
components. This chapter also looks at the Cisco ACI
REST API and the resources exposed over the API, as
well as how to use a popular Python library called
acitoolkit to extract data from the API. This chapter
next examines Cisco Unified Computing System (UCS)
and how all its components work together to offer one of
the most comprehensive and scalable data center
compute solutions available today. This chapter also
provides an overview of Cisco UCS Manager, the XML
API it provides, and how to interact with this API by
using curl commands and the Cisco UCS Manager SDK.
Cisco UCS Director takes data center automation to the
next level, and this chapter covers the tasks and
workflows that are available with it. The chapter also
discusses the Cisco UCS Director SDK and its
components and the curl commands that are used to
interact and extract data from the REST API. Finally, the
chapter covers Cisco Intersight, a software as a service
(SaaS) product that takes Cisco UCS management into
the cloud. The chapter wraps up by covering the REST
API interface of Cisco Intersight and the Python SDK
that comes with it.

“DO I KNOW THIS ALREADY?” QUIZ


The “Do I Know This Already?” quiz allows you to assess
whether you should read this entire chapter thoroughly
or jump to the “Exam Preparation Tasks” section. If you
are in doubt about your answers to these questions or
your own assessment of your knowledge of the topics,
read the entire chapter. Table 9-1 lists the major
headings in this chapter and their corresponding “Do I
Know This Already?” quiz questions. You can find the
answers in Appendix A, “Answers to the ‘Do I Know This
Already?’ Quiz Questions.”

Table 9-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics SectionQuestions

Cisco ACI 1–3

Cisco UCS Manager 4–6

Cisco UCS Director 7–8

Cisco Intersight 9–10

Caution
The goal of self-assessment is to gauge your mastery of
the topics in this chapter. If you do not know the
answer to a question or are only partially sure of the
answer, you should mark that question as wrong for
purposes of self-assessment. Giving yourself credit for
an answer that you correctly guess skews your self-
assessment results and might provide you with a false
sense of security.

1. On what family of switches does Cisco ACI run?


1. Cisco Catalyst 9000
2. Cisco Nexus 9000
3. Cisco Nexus 7000
4. Cisco Catalyst 6800

2. True or false: An ACI bridge domain can be associated with multiple VRF instances.

1. True
2. False

3. What Cisco ACI REST API endpoint is used for authentication?

1. https://APIC_IP_or_Hostname/api/aaaLogin.json
2. https://APIC_IP_or_Hostname/api/login
3. https://APIC_IP_or_Hostname/api/v1/aaaLogin
4. https://APIC_IP_or_Hostname/api/v1/login.json

4. In Cisco UCS Manager, what is the logical construct that contains the complete configuration of a physical server?

1. Server profile
2. Service profile
3. Template profile
4. None of the above

5. What is the Cisco UCS Manager Python SDK library called?

1. ucsmsdk
2. ucssdk
3. ucsm
4. ciscoucsm

6. What is the managed object browser called in Cisco UCS Manager?

1. Mobrowser
2. UCSMobrowser
3. Visore
4. UCSVisore

7. What is a Cisco UCS Director workflow?

1. The atomic unit of work in Cisco UCS Director
2. A single action with inputs and outputs
3. A collection of predefined tasks
4. A series of tasks arranged to automate a complex operation

8. What is the name of the header that contains the Cisco UCS Director REST API access key?

1. X-Cloupia-Access-Key
2. X-Cloupia-Request-Key
3. X-Cloupia-Secret-Key
4. X-Cloupia-API-Access

9. How are the managed objects organized in Cisco Intersight?

1. Management information table
2. Hierarchical management information tree
3. Management Information Model
4. Hierarchical Managed Objects Model

10. What does the Cisco Intersight REST API key contain?

1. keyId and keySecret
2. token
3. accessKey and secretKey
4. cookie

FOUNDATION TOPICS
CISCO ACI
Cisco Application Centric Infrastructure (ACI) is the
SDN-based solution from Cisco for data center
deployment, management, and monitoring. The solution
is based on two components: the Cisco Nexus family of
switches and Cisco Application Policy Infrastructure
Controller (APIC).

The Cisco Nexus 9000 family of switches can run in two separate modes of operation, depending on the software
loaded on them. The first mode is called standalone (or
NX-OS) mode, which means the switches act like regular
Layer 2/Layer 3 data center devices that are usually
managed individually. In the second mode, ACI mode,
the Cisco Nexus devices are part of an ACI fabric and are
managed in a centralized fashion. The central controller
for the ACI fabric is the Cisco Application Policy
Infrastructure Controller (APIC). This controller is the
main architectural component of the Cisco ACI solution
and provides a single point of automation and
management for the Cisco ACI fabric, policy
enforcement, and health monitoring. The Cisco APIC
was built on an API-first architecture from its inception.
On top of this API, a command-line interface (CLI) and a
graphical user interface (GUI) have been developed. The
API is exposed through a REST interface and is
accessible as a northbound interface for users and
developers to integrate and develop their own custom
solutions on top of the Cisco APIC and Cisco ACI fabric.
The Cisco APIC interacts with and manages the Cisco
Nexus switches through the OpFlex protocol, which is
exposed as a southbound interface. From an SDN
controller perspective (similar to the Cisco DNA Center
controller described in Chapter 8, “Cisco Enterprise
Networking Management Platforms and APIs”), a
northbound interface specifies the collection of protocols
that a user can use to interact with and program the
controller, while a southbound interface specifies the
protocols and interfaces that the controller uses to
interact with the devices it manages. Some of the
features and capabilities of the Cisco APIC are as follows:

Application-centric network policy for physical, virtual, and cloud infrastructure

Data model–based declarative provisioning

Designed around open standards and open APIs

Cisco ACI fabric inventory and configuration

Software image management

Fault, event, and performance monitoring and management


Integration with third-party management systems such as VMware,
Microsoft, and OpenStack

Cloud APIC appliance for Cisco cloud ACI deployments in public cloud
environments

A minimum of three APICs in a cluster are needed for high availability.

The Cisco ACI fabric is built in a leaf-and-spine architecture. As the name implies, some of the Cisco
Nexus switches that are part of the ACI fabric are called
leaves and perform a function similar to that of an access
switch, to which both physical and virtual endpoint
servers are connected, and some of the switches are
called spines and perform a function similar to that of a
distribution switch to which all the access switches are
connected. Figure 9-1 provides a visual representation of
how all the Cisco ACI fabric components come together.

Figure 9-1 Cisco ACI Fabric Architecture

It is very important to choose the right switches for the right functions as not all Cisco Nexus 9000 switches
support all functions in a leaf-and-spine architecture.
The leaf switches connect to all the spine switches and to
endpoint devices, including the Cisco APICs. The Cisco
APICs never connect to spine switches. Spine switches
can only connect to leaf switches and are never
interconnected with each other. The ACI fabric provides
consistent low-latency forwarding across high-
bandwidth links (40 Gbps, 100 Gbps, and 400 Gbps).
Data traffic with the source and destination on the same
leaf switch is handled locally. When the traffic source
and destination are on separate leaf switches, they are
always only one spine switch away. The whole ACI fabric operates as a single Layer 3 switch, so traffic between a source and a destination always traverses at most one Layer 3 hop.

The configuration of the ACI fabric is stored in the APIC using an object-oriented schema. This configuration
represents the logical model of the fabric. The APIC
compiles the logical model and renders the policies into a
concrete model that runs in the physical infrastructure.
Figure 9-2 shows the relationship between the logical
model, the concrete model, and the operating system
running on the switches.

Figure 9-2 Relationship Between the Logical Model, the Concrete Model, and the Operating System

Each of the switches contains a complete copy of the concrete model. When a policy that represents a
configuration is created in the APIC, the controller
updates the logical model. It then performs the
intermediate step of creating a complete policy that it
pushes into all the switches, where the concrete model is
updated. The Cisco Nexus 9000 switches can only
execute the concrete model when running in ACI mode.
Each switch has a copy of the concrete model. If by any
chance, all the APIC controllers in a cluster go offline, the
fabric keeps functioning, but modifications to the fabric
policies are not possible.

The ACI policy model enables the specification of application requirements. When a change is initiated to
an object in the fabric, the APIC first applies that change
to the policy model. This policy model change triggers a
change to the concrete model and the actual managed
endpoint. This management framework is called the
model-driven framework. In this model, the system
administrator defines the desired state of the fabric but
leaves the implementation up to the APIC. This means
that the data center infrastructure is no longer managed
in isolated, individual component configurations but
holistically, enabling automation and flexible workload
provisioning. In this type of infrastructure, network-
attached services can be easily deployed as the APIC
provides an automation framework to manage the
complete lifecycle of these services. As workloads move
and changes happen, the controller reconfigures the
underlying infrastructure to ensure that the policies are
still in place for the end hosts.

The Cisco ACI fabric is composed of physical and logical components. These components are recorded in the
Management Information Model (MIM) and can be
represented in a hierarchical management information
tree (MIT). Each node in the MIT represents a managed
object (MO). An MO can represent a concrete object,
such as a switch, an adapter, a power supply, or a logical
object, such as an application profile, an endpoint group,
or an error message. All the components of the ACI
fabric can be represented as managed objects.

Figure 9-3 provides an overview of the MIT and its elements.

Figure 9-3 Cisco ACI Management Information Tree

The MIT hierarchical structure starts at the top with the root object and contains parent and child nodes. Each
node in the tree is an MO, and each object in the fabric
has a distinguished name (DN). The DN describes the
object and specifies its location in the tree. The following
managed objects contain the policies that control the
operation of the fabric:

APICs: These are the clustered fabric controllers that provide management, application, and policy deployment for the fabric.

Tenants: Tenants represent containers for policies that are grouped for a specific access domain. The following four kinds of tenants are
currently supported by the system:

User: User tenants are needed by the fabric administrator to cater to the needs of the fabric users.

Common: The common tenant is provided by the system and can be configured by the fabric administrator. This tenant contains
policies and resources that can be shared by all tenants. Examples
of such resources are firewalls, load balancers, and intrusion
detection systems.

Infra: The infra tenant is provided by the system and can be configured by the fabric administrator. It contains policies that
manage the operation of infrastructure resources.

Management: The management tenant is provided by the system and can be configured by the fabric administrator. This tenant
contains policies and resources used for in-band and out-of-band
configuration of fabric nodes.
Access policies: These policies control the operation of leaf switch
access ports, which provide fabric connectivity to resources such as
virtual machine hypervisors, compute devices, storage devices, and so
on. Several access policies come built in with the ACI fabric by default.
The fabric administrator can tweak these policies or create new ones, as
necessary.

Fabric policies: These policies control the operation of the switch fabric ports. Configurations for time synchronization, routing
protocols, and domain name resolution are managed with these
policies.

VM domains: Virtual machine (VM) domains group virtual machine controllers that require similar networking policy configurations. The
APIC communicates with the VM controller to push network
configurations all the way to the VM level.

Integration automation framework: The Layer 4 to Layer 7 service integration automation framework enables a system to respond
to services coming online or going offline.

AAA policies: Authentication, authorization, and accounting (AAA) policies control user privileges, roles, and security domains for the ACI fabric.

The hierarchical policy model fits very well with the REST API interface. As the ACI fabric performs its
functions, the API reads and writes to objects in the MIT.
The API resources represented by URLs map directly
into the distinguished names that identify objects in the
MIT.

Next, let’s explore the building blocks of the Cisco ACI fabric policies.

Building Blocks of Cisco ACI Fabric Policies


Tenants are top-level MOs that identify and separate
administrative control, application policies, and failure
domains. A tenant can represent a customer in a
managed service provider environment or an
organization in an enterprise environment, or a tenant
can be a convenient grouping of objects and policies. A
tenant’s sublevel objects can be grouped into two
categories: tenant networking and tenant policy. Figure
9-4 provides a graphical representation of how a tenant
is organized and the main networking and policy
components.

Figure 9-4 Cisco ACI Tenant Components

The tenant networking objects provide Layer 2 and Layer 3 connectivity between the endpoints and consist of the
following constructs: VRF (virtual routing and
forwarding) instances, bridge domains, subnets, and
external networks. Figure 9-5 displays in more detail
how the tenant networking constructs are organized.

Figure 9-5 Cisco ACI Tenant Networking Components

VRF instances, also called contexts and private networks, are isolated routing tables. A VRF instance
defines a Layer 3 address domain. A tenant can contain
one or multiple VRF instances. VRF instances exist on
any leaf switch that has a host assigned to the VRF
instance. All the endpoints within a Layer 3 domain must
have unique IP addresses because traffic can flow
between these devices if allowed by the policy.
Bridge domains represent the Layer 2 forwarding
domains within the fabric and define the unique MAC
address space and flooding domain for broadcast,
unknown unicast, and multicast frames. Each bridge
domain is associated with only one VRF instance, but a
VRF instance can be associated with multiple bridge
domains. Bridge domains can contain multiple subnets,
which is different from regular VLANs, which are usually
associated with only one subnet each.

Subnets are the Layer 3 networks that provide IP address space and gateway services for endpoints to be able to
connect to the network. Each subnet is associated with
only one bridge domain. Subnets can be the following:

Public: A subnet can be exported to a routed connection.

Private: A subnet is confined within its tenant.

Shared: A subnet can be shared and exposed in multiple VRF instances in the same tenant or across tenants as part of a shared
service.

External bridged networks connect the ACI fabric to legacy Layer 2/Spanning Tree Protocol networks. This is
usually needed as part of the migration process from a
traditional network infrastructure to an ACI network.

External routed networks create a Layer 3 connection with a network outside the ACI fabric. Layer 3 external
routed networks can be configured using static routes or
routing protocols such as BGP, OSPF, and EIGRP.

The tenant policy objects are focused on the policies and services that the endpoints receive. The tenant policy
consists of application profiles, endpoint groups (EPGs),
contracts, and filters. Figure 9-6 shows how the tenant
policy objects, application profiles, and EPGs are
organized in different bridge domains.
Figure 9-6 Cisco ACI Tenant Policy Components

An application profile defines the policies, services, and relationships between EPGs. An application profile
contains one or more EPGs. Applications typically
contain multiple components, such as a web-based front
end, an application logic layer, and one or more
databases in the back end. The application profile
contains as many EPGs as necessary, and these EPGs are
logically related to providing the capabilities of the
application.

The EPG is the most important object in the policy model. An EPG is a collection of endpoints that have
common policy requirements, such as security, virtual
machine mobility, QoS, or Layer 4 to Layer 7 services. In
the Cisco ACI fabric, each endpoint has an identity
represented by its address, a location, and attributes, and
it can be physical or virtual. Endpoint examples include
servers, virtual machines, clients on the internet, and
network-attached storage devices. Rather than configure
and manage endpoints individually, you can place them
in EPGs and manage them as a group. Policies apply to
EPGs and never to individual endpoints. Each EPG can
only be related to one bridge domain.
Contracts define the policies and services that get applied
to EPGs. Contracts can be used for redirecting service to
a Layer 4 to Layer 7 device, assigning QoS values, and
controlling the traffic flow between EPGs. EPGs can only
communicate with other EPGs based on contract rules.
Contracts specify the protocols and ports allowed
between EPGs. If there is no contract, inter-EPG
communication is disabled by default. For intra-EPG
communication, no contract is required as this traffic is
always allowed by default. The relationship between an
EPG and a contract can be either a consumer or a
provider. EPG providers expose contracts with which a
consumer EPG must comply. When an EPG consumes a
contract, the endpoints in the consuming EPG can
initiate communication with any endpoint from the
provider EPG. Figure 9-7 displays this contractual
relationship between providing and consuming EPGs.

Figure 9-7 Cisco ACI Application Profiles and Contracts

Filters are the objects that define protocols and port numbers used in contracts. Filter objects can contain
multiple protocols and ports, and contracts can consume
multiple filters.

APIC REST API


As mentioned previously, the APIC REST API is a
programmatic interface that uses the REST architecture.
The API accepts and returns HTTP or HTTPS messages
that contain JSON or XML documents. Any
programming language can be used to generate the
messages and the JSON or XML documents that contain
the API methods or managed object (MO) attributes.
Whenever information is retrieved and displayed, it is
read through the REST API, and whenever configuration
changes are made, they are written through the REST
API. The REST API also provides a way of subscribing to
push-based event notification, so that when a change
occurs in the MIT, an event can be sent through a web
socket.

The generic APIC REST API URI looks as follows:

https://APIC_Host:port/api/{mo|class}/{dn|classname}.{xml|json}?[options]

Since the REST API matches the MIT one to one, defining the URI to access a certain resource is
important. First, you need to define the protocol (http or
https) and the hostname or IP address of the APIC
instance. Next, /api indicates that the API is invoked.
After that, the next part of the URI specifies whether the
operation will be for an MO or a class. The next
component defines either the distinguished name for MO-based queries or the class name for class-based queries. The final mandatory part of the request is
the encoding format, which can be either XML or JSON.
(The APIC ignores Content-Type and other headers, so
the method just explained is the only one accepted.) The
complete Cisco ACI REST API documentation with
information on how to use the API, all of the API
endpoints, and operations available can be found at
https://developer.cisco.com/docs/aci/.
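To make the URI structure more concrete, the following Python sketch assembles class-based and MO-based query URIs from their components. The helper functions and the host value are illustrative assumptions, not part of any official SDK; uni/tn-common is the distinguished name of the common tenant.

```python
# Illustrative sketch (not an official SDK): building APIC REST API
# URIs of the form https://host/api/{mo|class}/{dn|classname}.{xml|json}

def class_query_uri(host, class_name, encoding="json"):
    """Build a URI that queries all objects of a given class."""
    return f"https://{host}/api/class/{class_name}.{encoding}"

def mo_query_uri(host, dn, encoding="json"):
    """Build a URI that queries a single managed object by its DN."""
    return f"https://{host}/api/mo/{dn}.{encoding}"

# Example: query all fabricPod objects, then one MO by DN.
print(class_query_uri("sandboxapicdc.cisco.com", "fabricPod"))
# https://sandboxapicdc.cisco.com/api/class/fabricPod.json
print(mo_query_uri("sandboxapicdc.cisco.com", "uni/tn-common"))
# https://sandboxapicdc.cisco.com/api/mo/uni/tn-common.json
```

The first URI matches the class query used later in Example 9-3; the same helpers can emit XML variants by passing encoding="xml".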

APIC REST API username- and password-based authentication uses a special URI, including aaaLogin,
aaaLogout, and aaaRefresh as the DN targets of a POST
operation. The payloads contain a simple XML or JSON
document containing the MO representation of an
aaaUser object. The following examples use the Cisco
DevNet always-on APIC instance available at
https://sandboxapicdc.cisco.com with a username value
of admin and a password of ciscopsdt to show how to
interact with an ACI fabric using the APIC REST API
interface. Using curl, the authentication API call should
look as shown in Example 9-1.

Example 9-1 curl Command for Cisco APIC Authentication

curl -k -X POST \

https://sandboxapicdc.cisco.com/api/aaaLogin.json
\
-d '{
"aaaUser" : {
"attributes" : {
"name" : "admin",
"pwd" : "ciscopsdt"
}
}
}'

The returned information from the APIC should look as shown in Example 9-2.

Example 9-2 Cisco APIC Authentication Response



{
"totalCount" : "1",
"imdata" : [
{
"aaaLogin" : {
"attributes" : {
"remoteUser" : "false",
"firstLoginTime" : "1572128727",
"version" : "4.1(1k)",
"buildTime" : "Mon May 13
16:27:03 PDT 2019",
"siteFingerprint" :
"Z29SSG/BAVFY04Vv",
"guiIdleTimeoutSeconds" :
"1200",
"firstName" : "",
"userName" : "admin",
"refreshTimeoutSeconds" : "600",
"restTimeoutSeconds" : "90",
"node" : "topology/pod-1/node-
1",
"creationTime" : "1572128727",
"changePassword" : "no",
"token" :
"pRgAAAAAAAAAAAAAAAAAAGNPf39fZd71fV6DJWidJoqxJmHt1Fephm-

w6Q0I5byoafVMZ29a6pL+4u5krJ0G2Jdrvl0l2l9cMx/o0ciIbVRfFZruCEgqsPg8+dbjb8kWX02FJLcw9Qp

sg98s5QfOaMDQWHSyqwOObKOGxxglLeQbkgxM8/fgOAFZxbKHMw0+09ihdiu7jTb7AAJVZEzYzXA==",

"unixUserId" : "15374",
"lastName" : "",
"sessionId" :
"1IZw4uthRVSmyWWH/+S9aA==",
"maximumLifetimeSeconds" :
"86400"
}
...omitted output
}

The response to the POST operation contains an authentication token that will be used in subsequent API
operations as a cookie named APIC-cookie.
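As a minimal sketch of this step, the following Python fragment pulls the token out of an aaaLogin response and formats the APIC-cookie header for subsequent requests. The apic_cookie_header helper is an illustrative assumption, and the inline response is trimmed to the shape shown in Example 9-2, with a shortened sample token.

```python
# Hedged sketch: turn an aaaLogin response into the Cookie header
# that later API calls (Examples 9-3 and 9-5) send to the APIC.

def apic_cookie_header(login_response):
    """Return a headers dict carrying the aaaLogin token as APIC-cookie."""
    token = login_response["imdata"][0]["aaaLogin"]["attributes"]["token"]
    return {"Cookie": f"APIC-cookie={token}"}

# Trimmed sample response in the shape of Example 9-2 (token shortened).
sample = {
    "totalCount": "1",
    "imdata": [
        {"aaaLogin": {"attributes": {"userName": "admin",
                                     "token": "pRgAAAA...example"}}}
    ],
}

print(apic_cookie_header(sample))
# {'Cookie': 'APIC-cookie=pRgAAAA...example'}
```

A headers dict like this can be passed directly to an HTTP client such as requests when issuing the GET calls shown next.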

Next, let’s get a list of all the ACI fabrics that are being managed by this APIC instance. The URI for this GET operation is https://sandboxapicdc.cisco.com/api/node/class/fabricPod.json, and the APIC-cookie header is specified for authentication purposes. The curl request should look similar to the one shown in Example 9-3.

Example 9-3 curl Command to Get a List of All ACI Fabrics
curl -k -X GET \

https://sandboxapicdc.cisco.com/api/node/class/fabricPod.json
\
-H 'Cookie: APIC-
cookie=pRgAAAAAAAAAAAAAAAAAAGNPf39fZd71fV6DJWidJoqxJmHt1Fephmw6Q0

I5byoafVMZ29a6pL+4u5krJ0G2Jdrvl0l2l9cMx/o0ciIbVRfFZruCEgqsPg8+dbjb8kWX02FJLcw9Qpsg

98s5QfOaMDQWHSyqwOObKOGxxglLeQbkgxM8/fgOAFZxbKHMw0+09ihdiu7jTb7AAJVZEzYzXA=='

The response received from this instance of the APIC should look like the one in Example 9-4.

Example 9-4 curl Command Response with a List of All ACI Fabrics

{
"totalCount" : "1",
"imdata" : [
{
"fabricPod" : {
"attributes" : {
"id" : "1",
"monPolDn" : "uni/fabric/monfab-
default",
"dn" : "topology/pod-1",
"status" : "",
"childAction" : "",
"modTs" : "2019-10-
26T18:01:13.491+00:00",
"podType" : "physical",
"lcOwn" : "local"
}
}
}
]
}

From this response, we can see that the always-on Cisco DevNet Sandbox APIC instance manages only one ACI
fabric, called pod-1. Next, let’s find out more information
about pod-1 and discover how many devices are part of
this fabric. The API URI for the resource that will return this information is https://sandboxapicdc.cisco.com/api/node/class/topology/pod-1/topSystem.json. We again specify the APIC-cookie header, and the GET request should look like the one in Example 9-5.

Example 9-5 curl Command to Get ACI Pod Information

curl -k -X GET \

https://sandboxapicdc.cisco.com/api/node/class/topology/pod-
1/topSystem.json \
-H 'Cookie: APIC-
cookie=pRgAAAAAAAAAAAAAAAAAAGNPf39fZd71fV6DJWidJoqxJmHt1Fephmw6Q0

I5byoafVMZ29a6pL+4u5krJ0G2Jdrvl0l2l9cMx/o0ciIbVRfFZruCEgqsPg8+dbjb8kWX02FJLcw9Qpsg

98s5QfOaMDQWHSyqwOObKOGxxglLeQbkgxM8/fgOAFZxbKHMw0+09ihdiu7jTb7AAJVZEzYzXA=='

The redacted response from the APIC should look similar to the one shown in Example 9-6.

Example 9-6 REST API Response Containing Details About the Cisco ACI Pod

{
"imdata" : [
{
"topSystem" : {
"attributes" : {
"role" : "controller",
"name" : "apic1",
"fabricId" : "1",
"inbMgmtAddr" : "192.168.11.1",
"oobMgmtAddr" : "10.10.20.14",
"systemUpTime" :
"00:04:33:38.000",
"siteId" : "0",
"state" : "in-service",
"fabricDomain" : "ACI Fabric1",
"dn" : "topology/pod-1/node-
1/sys",
"podId" : "1"
}
}
},
{
"topSystem" : {
"attributes" : {
"state" : "in-service",
"siteId" : "0",
"address" : "10.0.80.64",
"fabricDomain" : "ACI Fabric1",
"dn" : "topology/pod-1/node-
101/sys",
"id" : "101",
"podId" : "1",
"role" : "leaf",
"fabricId" : "1",
"name" : "leaf-1"
}
}
},
{
"topSystem" : {
"attributes" : {
"podId" : "1",
"id" : "102",
"dn" : "topology/pod-1/node-
102/sys",
"address" : "10.0.80.66",
"fabricDomain" : "ACI Fabric1",
"siteId" : "0",
"state" : "in-service",
"role" : "leaf",
"name" : "leaf-2",
"fabricId" : "1"
}
}
},
{
"topSystem" : {
"attributes" : {
"fabricId" : "1",
"name" : "spine-1",
"role" : "spine",
"podId" : "1",
"id" : "201",
"dn" : "topology/pod-1/node-
201/sys",
"state" : "in-service",
"siteId" : "0",
"fabricDomain" : "ACI Fabric1",
"address" : "10.0.80.65"
}
}
}
],
"totalCount" : "4"
}

From the response, we can see that this ACI fabric is made up of four devices: an APIC, two leaf switches, and
one spine switch. Extensive information is returned
about each device in this response, but it was modified to
extract and display just a subset of that information. You
are encouraged to perform the same steps and explore
the APIC REST API either using the Cisco DevNet
sandbox resources or your own instance of APIC.
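As a small parsing exercise, the following Python sketch summarizes a topSystem class-query response like the one in Example 9-6, listing each node's name and role. The summarize_nodes helper is an assumption made for illustration, and the inline response keeps only the attributes needed for the summary.

```python
# Hedged sketch: walk the imdata list of a topSystem response and
# extract (name, role) for every fabric node.

# Trimmed sample in the shape of Example 9-6.
response = {
    "totalCount": "4",
    "imdata": [
        {"topSystem": {"attributes": {"role": "controller", "name": "apic1"}}},
        {"topSystem": {"attributes": {"role": "leaf", "name": "leaf-1"}}},
        {"topSystem": {"attributes": {"role": "leaf", "name": "leaf-2"}}},
        {"topSystem": {"attributes": {"role": "spine", "name": "spine-1"}}},
    ],
}

def summarize_nodes(resp):
    """Return (name, role) tuples for every node in the response."""
    return [
        (item["topSystem"]["attributes"]["name"],
         item["topSystem"]["attributes"]["role"])
        for item in resp["imdata"]
    ]

for name, role in summarize_nodes(response):
    print(f"{name}: {role}")
```

The same loop applies unchanged to the full JSON body returned by the APIC, since every class query wraps its results in the imdata list.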

As of this writing, there are several tools and libraries available for Cisco ACI automation. An ACI Python SDK
called Cobra can be used for advanced development. For
basic day-to-day configuration and monitoring tasks and
for getting started with ACI automation, there is also a
Python library called acitoolkit. The acitoolkit library
exposes a subset of the APIC object model that covers the
most common ACI workflows.

Next, we will use acitoolkit to build a Python script that retrieves all the endpoints from an ACI fabric. Additional
information about the endpoints—such as the EPGs they
are members of, application profiles that are applied to
those EPGs, and tenant membership, encapsulation,
MAC, and IP addresses—will be displayed for each
endpoint. A Python script that uses acitoolkit and
accomplishes these tasks might look as the one shown in
Example 9-7.
Example 9-7 acitoolkit Example

#! /usr/bin/env python
import sys
import acitoolkit.acitoolkit as aci

APIC_URL = 'https://sandboxapicdc.cisco.com'
USERNAME = 'admin'
PASSWORD = 'ciscopsdt'

# Login to APIC
SESSION = aci.Session(APIC_URL, USERNAME, PASSWORD)
RESP = SESSION.login()
if not RESP.ok:
    print('Could not login to APIC')
    sys.exit()

ENDPOINTS = aci.Endpoint.get(SESSION)
print('{0:19s}{1:14s}{2:10s}{3:8s}{4:17s}{5:10s}'.format(
    "MAC ADDRESS",
    "IP ADDRESS",
    "ENCAP",
    "TENANT",
    "APP PROFILE",
    "EPG"))
print('-'*80)

for EP in ENDPOINTS:
    epg = EP.get_parent()
    app_profile = epg.get_parent()
    tenant = app_profile.get_parent()
    print('{0:19s}{1:14s}{2:10s}{3:8s}{4:17s}{5:10s}'.format(
        EP.mac,
        EP.ip,
        EP.encap,
        tenant.name,
        app_profile.name,
        epg.name))

The latest version of acitoolkit can be found at https://github.com/datacenter/acitoolkit. Follow the
steps at this link to install acitoolkit. The acitoolkit
library supports Python 3, and version 0.4 of the library
is used in Example 9-7. The script was tested successfully
with Python 3.7.4.

First, the acitoolkit library is imported; it will be referenced in the script using the short name aci. Three
variables are defined next: APIC_URL contains the
URL for the APIC instance that will be queried (in this
case, the Cisco DevNet always-on APIC sandbox), and
USERNAME and PASSWORD contain the login
credentials for the APIC instance. Next, using the
Session method of the aci class, a connection is
established with the APIC. The Session method takes as
input the three variables defined previously: the
APIC_URL, USERNAME, and PASSWORD. Next,
the success of the login action is verified. If the response
to the login action is not okay, a message is displayed to
the console (“Could not login to APIC”), and the script
execution ends. If the login was successful, all the
endpoints in the ACI fabric instance are stored in the
ENDPOINTS variable. This is done by using the get
method of the aci.Endpoint class and passing in the
current session object. Next, the headers of the table—
with the information that will be extracted—are
displayed to the console. As mentioned previously, the
MAC address, the IP address, the encapsulation, the
tenant, the application profile, and the EPG will be
retrieved from the APIC for all the endpoints in the
fabric. The for iterative loop will go over one endpoint at
a time and, using the get_parent() method, will go one
level up in the MIT and retrieve the parent MO of the
endpoint, which is the EPG. Recall that all endpoints in
an ACI fabric are organized in EPGs. The parent object
for an endpoint is hence the EPG of which that endpoint
is a member. Going one level up in the MIT, the parent
object of the EPG is the application profile, and going
one more level up, the parent object of the application
profile is the tenant object. The epg, app_profile, and
tenant variables contain the respective EPG, application
profile, and tenant values for each endpoint in the fabric.
The last line of code in the script displays to the console
the required information for each endpoint. The output
of the script should look similar to the output shown in
Figure 9-8.

Figure 9-8 Output of the Python Script from Example 9-7

CISCO UCS MANAGER
Cisco Unified Computing System (UCS) encompasses
most of the Cisco compute products. The first UCS
products were released in 2009, and they quickly
established themselves as leaders in the data center
compute and server market. Cisco UCS provides a
unified server solution that brings together compute,
storage, and networking into one system. While initially
the UCS solution took advantage of network-attached
storage (NAS) or storage area networks (SANs) in order
to support requirements for large data stores, with the
release of Cisco HyperFlex and hyperconverged servers,
large storage data stores are now included with the UCS
solution.
Cisco UCS B-series blade servers, C-series rack servers,
S-series storage servers, UCS Mini, and Cisco HyperFlex
hyperconverged servers can all be managed through one
interface: UCS Manager. UCS Manager provides unified,
embedded management of all software and hardware
components of Cisco UCS. Cisco UCS Manager software
runs on a pair of hardware appliances called fabric
interconnects. The two fabric interconnects form an
active/standby cluster that provides high availability.
The UCS infrastructure that is being managed by UCS
Manager forms a UCS fabric that can include up to 160
servers. The system can scale to thousands of servers by
integrating individual UCS Manager instances with Cisco
UCS Central in a multidomain Cisco UCS environment.

UCS Manager participates in the complete server
lifecycle, including server provisioning, device discovery,
inventory, configuration, diagnostics, monitoring, fault
detection, and auditing and statistics collection. All
infrastructure that is being managed by UCS Manager is
either directly connected to the fabric interconnects or
connected through fabric extenders. Fabric extenders, as
the name implies, have the function of offering
additional scalability in connecting servers back to the
fabric interconnects. They are zero-management, low-
cost, and low-power devices that eliminate the need for
expensive top-of-rack Ethernet and Fibre Channel
switches. Figure 9-9 shows how all these components
connect to each other.
Figure 9-9 Cisco Unified Computing System
Connectivity

All Cisco UCS servers support Cisco SingleConnect
technology. Cisco SingleConnect is a revolutionary
technology that supports all traffic from the servers
(LAN, SAN, management, and so on) over a single
physical link. The savings that this technology brings
through cabling simplification alone are orders of
magnitude higher than those of competing products.

Cisco UCS Manager provides an HTML5 graphical user
interface (GUI), a command-line interface (CLI), and a
comprehensive API. All Cisco UCS fabric functions and
managed objects are available over the UCS API.
Developers can take advantage of the extensive API and
can enhance the UCS platform according to their unique
requirements. Tools and software integrations with
solutions from third-party vendors like VMware,
Microsoft, and Splunk are already publicly available. We
will briefly see later in this chapter how the Cisco UCS
PowerTool for UCS Manager and the Python software
development kit (SDK) can be used to automate and
programmatically manage Cisco UCS Manager.

With Cisco UCS Manager, the data center servers can be
managed using an infrastructure-as-code framework.
This is possible through another innovation that is
included with the Cisco UCS solution: the service profile.
The service profile is a logical construct in UCS Manager
that contains the complete configuration of a physical
server. All the elements of a server configuration—
including RAID levels, BIOS settings, firmware revisions
and settings, adapter settings, network and storage
settings, and data center connectivity—are included in
the service profile. When a service profile is associated
with a server, Cisco UCS Manager automatically
configures the server, adapters, fabric extenders, and
fabric interconnects to match the configuration specified
in the service profile. With service profiles, infrastructure
can be provisioned in minutes instead of days. With
service profiles, you can even pre-provision servers and
have their configurations ready before the servers are
even connected to the network. Once the servers come
online and get discovered by UCS Manager, the service
profiles can be automatically deployed to the server.

The UCS Manager programmatic interface is the XML
API. The Cisco UCS Manager XML API accepts XML
documents over HTTP or HTTPS connections. Much as
with Cisco ACI, the configuration and state information
for Cisco UCS is stored in a hierarchical tree structure
known as the management information tree (MIT). The
MIT, which contains all the managed objects in the Cisco
UCS system, is accessible through the XML API. Any
programming language can be used to generate XML
documents that contain the API methods. One or more
managed objects can be changed with one API call.
When multiple objects are being configured, the API
operation stops if any of the MOs cannot be configured,
and a full rollback to the state of the system before the
change was initiated is done. API operations are
transactional and are done on the single data model that
represents the whole system. Cisco UCS is responsible
for all endpoint communications, making UCS Manager
the single source of truth. Users cannot communicate
directly with the endpoints, relieving developers from
administering isolated, individual component
configurations. All XML requests are asynchronous and
terminate on the active Cisco UCS Manager.

All the physical and logical components that make up
Cisco UCS are represented in a hierarchical management
information tree (MIT), also known as the Management
Information Model (MIM). Each node in the tree
represents a managed object (MO) or a group of objects
that contains its administrative and operational states.
At the top of the hierarchical structure is the sys object,
which contains all the parent and child nodes in the tree.
Each object in Cisco UCS has a unique distinguished
name that describes the object and its place in the tree.
The information model is centrally stored and managed
by a process running on the fabric interconnects that is
called the Data Management Engine (DME). When an
administrative change is initiated to a Cisco UCS
component, the DME first applies that change to the
information model and then applies the change to the
actual managed endpoint. This approach is referred to as
a model-driven framework.

A specific managed object in the MIT can be identified by
its distinguished name (DN) or by its relative name (RN).
The DN specifies the exact managed object on which the
API call is operating and consists of a series of relative
names:

DN = {RN}/{RN}/{RN}/{RN}...

A relative name identifies an object in the context of its
parent object.
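The composition rule above is simple string concatenation, as this minimal Python sketch shows (the chassis and blade identifiers are illustrative, not taken from a live system):

```python
# Building a distinguished name (DN) from a series of relative
# names (RNs): DN = {RN}/{RN}/{RN}...
def build_dn(*rns):
    """Join relative names into a distinguished name."""
    return "/".join(rns)

# sys is the top of the tree; chassis-4 and blade-8 are example RNs
dn = build_dn("sys", "chassis-4", "blade-8")
print(dn)  # sys/chassis-4/blade-8
```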

The Cisco UCS Manager XML API model includes the
following programmatic entities:

Classes: Classes define the properties and states of objects in the MIT.

Methods: Methods define the actions that the API performs on one or
more objects.
Types: Types are object properties that map values to the object state.

Several types of methods are available with the XML
API:

Authentication methods: These methods, which include the
following, are used to authenticate and maintain a session:

aaaLogin: Login method

aaaRefresh: Refreshes the authentication cookie

aaaLogout: Exits the session and deactivates the corresponding
authentication cookie

Query methods: These methods, which include the following, are
used to obtain information on the current configuration state of an
object:

configResolveDn: Retrieves objects by DN

configResolveClass: Retrieves objects of a given class

configResolveParent: Retrieves the parent object of an object

Configuration methods: These methods, which include the
following, are used to make configuration changes to managed objects:

configConfMo: Affects a single MO

configConfMos: Affects multiple subtrees
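Since each method is just an XML document posted to the API endpoint, the payloads for the authentication methods can be composed with a few helper functions. This is a minimal sketch; the attribute names (inName, inPassword, inCookie) follow the method descriptions above, and the credential and cookie values are placeholders:

```python
# Sketch: composing XML payloads for the UCS Manager
# authentication methods.
def aaa_login(username, password):
    # Opens a session and returns an authentication cookie
    return f'<aaaLogin inName="{username}" inPassword="{password}" />'

def aaa_refresh(username, password, cookie):
    # Refreshes the authentication cookie before it expires
    return (f'<aaaRefresh inName="{username}" inPassword="{password}" '
            f'inCookie="{cookie}" />')

def aaa_logout(cookie):
    # Exits the session and deactivates the cookie
    return f'<aaaLogout inCookie="{cookie}" />'

print(aaa_login("ucspe", "ucspe"))
```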

Since the query methods available with the XML API can
return large sets of data, filters are supported to limit
this output to subsets of information. Four types of filters
are available:

Simple filters: These true/false filters limit the result set of objects
with the Boolean value of True or False.

Property filters: These filters use the values of an object’s properties
as the inclusion criteria in a result set (for example, equal filter, not
equal filter, greater than filter).

Composite filters: These filters are composed of two or more
component filters (for example, AND filter, OR filter).

Modifier filter: This filter changes the results of a contained filter.
Currently only the NOT filter is supported. This filter negates the result
of a contained filter.
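As an illustration, a property filter is embedded in the query document itself. The sketch below builds a class query restricted to blades in one chassis; the inFilter wrapper and eq element names are assumptions based on the UCS Manager XML API schema, and the cookie is a placeholder:

```python
# Sketch: a configResolveClass query with a property (equal) filter.
COOKIE = "1573019916/example-cookie"  # placeholder session cookie

payload = (
    f'<configResolveClass cookie="{COOKIE}" '
    'classId="computeBlade" inHierarchical="false">'
    '<inFilter>'
    # Property filter: only blades that reside in chassis 4
    '<eq class="computeBlade" property="chassisId" value="4" />'
    '</inFilter>'
    '</configResolveClass>'
)
print(payload)
```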

External applications can get Cisco UCS Manager state
change information either by regular polling or by event
subscription. Full event subscription is supported with
the API and is the preferred method of notification.
Polling usually consumes a lot of resources and should
be used only in limited situations.

Cisco UCS Manager provides a managed object browser
called Visore. Visore can be accessed by navigating to
https://<UCS-Manager-IP>/visore.html. The web
interface looks as shown in Figure 9-10.

Figure 9-10 Cisco UCS Manager Visore Interface

The whole MIT tree can be explored, and queries for
specific DNs can be run from this interface. Additional
developer resources regarding Cisco UCS Manager can
be found on the Cisco DevNet website, at
https://developer.cisco.com/site/ucs-dev-center/.

Next, let’s explore the Cisco UCS Manager XML API. The
complete documentation of the Cisco UCS Manager
information model for different releases can be found at
https://developer.cisco.com/site/ucs-mim-ref-api-
picker/. At this site, you can find all the managed objects,
all the methods, all the types, all the fault and FSM rules,
and extensive documentation for each of them.

In order for data center administrators and developers to
become more familiar with the Cisco UCS system, Cisco
has released a software emulator. Cisco UCS Platform
Emulator is the Cisco UCS Manager application bundled
into a virtual machine (VM). The VM includes software
that emulates hardware communications for the Cisco
UCS system. The Cisco UCS Platform Emulator can be
used to create and test a supported Cisco UCS
configuration or to duplicate an existing Cisco UCS
environment for troubleshooting and development
purposes. The Cisco UCS Platform Emulator is delivered
as an .ova file and can run in nearly any virtual
environment. The complete Cisco UCS Manager
information model documentation is also bundled within
the UCS Platform Emulator.

As usual, the Cisco DevNet team makes available to the
larger DevNet community a series of sandboxes for
easier product discovery and development. So far in this
chapter, we have used always-on sandboxes. In this
example, we will use a reservable sandbox. As the name
suggests, reservable sandboxes can be reserved for up to 7
days and are available only to the person who makes the
reservation. At this writing, there is a Cisco UCS
Manager sandbox that can be used to explore the XML
API. It is called UCS Management and can be found at
https://developer.cisco.com/sandbox. This sandbox
takes advantage of the Cisco UCS Platform Emulator
version 3.2(2.5).

At this point, to authenticate and get an authentication
cookie, we can use the curl command as follows:



curl -k -X POST https://10.10.20.110/nuova \
-H 'Content-Type: application/xml' \
-d '<aaaLogin inName="ucspe" inPassword="ucspe">
</aaaLogin>'

The IP address of the Cisco UCS Manager is
10.10.20.110, the XML API resource is /nuova, and
the authentication method used is aaaLogin. The
username and password are passed in an XML document
within the inName and inPassword variables. In this
case, both the username and password are ucspe. The
Content-Type header specifies the type of data that the
POST call will send to the XML API (which is, of course,
XML in this case).

The response should be similar to the following one:


<aaaLogin cookie="" response="yes"
outCookie="1573019916/7c901636-c461-487e-bbd0-c74cd68c27be"
outRefreshPeriod="600"
outPriv="aaa,admin,ext-lan-config,ext-lan-policy,ext-lan-qos,ext-lan-security,ext-san-config,ext-san-policy,ext-san-security,fault,operations,pod-config,pod-policy,pod-qos,pod-security,read-only"
outDomains="org-root" outChannel="noencssl"
outEvtChannel="noencssl" outSessionId=""
outVersion="3.2(2.5)" outName="" />

aaaLogin specifies the method used to log in, the "yes"
value confirms that this is a response, outCookie
provides the session cookie, outRefreshPeriod
specifies the recommended cookie refresh period (where
the default is 600 seconds), and the outPriv value
specifies the privilege level associated with the account.
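The session cookie can be pulled out of such a response with a few lines of standard-library Python. This is a minimal sketch that parses the sample response above (the outPriv list is shortened here for readability):

```python
import xml.etree.ElementTree as ET

# Sample aaaLogin response, copied (and shortened) from the text
RESPONSE = ('<aaaLogin cookie="" response="yes" '
            'outCookie="1573019916/7c901636-c461-487e-bbd0-c74cd68c27be" '
            'outRefreshPeriod="600" outPriv="aaa,admin,read-only" />')

root = ET.fromstring(RESPONSE)
cookie = root.get("outCookie")               # session cookie for later calls
refresh_period = int(root.get("outRefreshPeriod"))  # seconds
print(cookie)
print(refresh_period)
```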

Next, let’s get a list of all the objects that are part of the
compute class and are being managed by this instance of
Cisco UCS Manager. In order to accomplish this, we can
use the configFindDnsByClassId method. This
method finds distinguished names and returns them
sorted by class ID. The curl command should look
similar to the following one:


curl -k -X POST https://10.10.20.110/nuova \
-H 'Content-Type: application/xml' \
-d '<configFindDnsByClassId classId="computeItem"
cookie="1573019916/7c901636-c461-487e-bbd0-c74cd68c27be" />'

The XML API endpoint, https://10.10.20.110/nuova,
and the Content-Type header, application/xml, stay
the same. The XML data that is being sent to the Cisco
UCS Manager server is different. First, the
configFindDnsByClassId method is specified, and
then the two mandatory variables for classId and the
cookie are passed in. The classId specifies the object
class that in this case is the computeItem class, and the
cookie is being populated with the value of the
authentication cookie obtained previously.

The response in this case, as shown in Example 9-8,
contains a complete list of all the compute items that are
being managed by the 10.10.20.110 instance of Cisco
UCS Manager.

Example 9-8 List of Compute Items That Are Being
Managed by Cisco UCS Manager

<configFindDnsByClassId
cookie="1573019916/7c901636-c461-487e-bbd0-c74cd68c27be"
response="yes" classId="computeItem">
  <outDns>
    <dn value="sys/chassis-4/blade-8"/>
    <dn value="sys/chassis-5/blade-8"/>
    <dn value="sys/chassis-6/blade-8"/>
    <dn value="sys/chassis-6/blade-1"/>
    <dn value="sys/chassis-3/blade-1"/>
    ... omitted output
    <dn value="sys/rack-unit-9"/>
    <dn value="sys/rack-unit-8"/>
    <dn value="sys/rack-unit-7"/>
    <dn value="sys/rack-unit-6"/>
    <dn value="sys/rack-unit-5"/>
    <dn value="sys/rack-unit-4"/>
    <dn value="sys/rack-unit-3"/>
    <dn value="sys/rack-unit-2"/>
    <dn value="sys/rack-unit-1"/>
  </outDns>
</configFindDnsByClassId>
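A response like this is easy to post-process programmatically. The sketch below extracts the DN values with the standard library, using a shortened version of the Example 9-8 document:

```python
import xml.etree.ElementTree as ET

# Shortened configFindDnsByClassId response from Example 9-8
RESPONSE = '''<configFindDnsByClassId response="yes" classId="computeItem">
  <outDns>
    <dn value="sys/chassis-4/blade-8"/>
    <dn value="sys/rack-unit-1"/>
  </outDns>
</configFindDnsByClassId>'''

# Collect the value attribute of every <dn> element
dns = [dn.get("value") for dn in ET.fromstring(RESPONSE).iter("dn")]
print(dns)
```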

In our exploration of the Cisco UCS Manager XML API,
let’s now get more information about a specific compute
object. The method to retrieve a single managed object
for a specified DN is configResolveDn. The curl
command for this API request should look as shown in
Example 9-9.

Example 9-9 Using a curl Command to Retrieve
Information About a Compute Object


curl -k -X POST https://10.10.20.110/nuova \
-H 'Content-Type: application/xml' \
-d '<configResolveDn
cookie="1573019916/7c901636-c461-487e-bbd0-c74cd68c27be"
dn="sys/chassis-4/blade-8" />'

Much as in the previous call, the API endpoint and
Content-Type header stay the same. The XML data
that is being sent with the request contains the method,
configResolveDn, the authentication cookie, and the
DN for which additional information is requested, which
in this case is the blade number 8 in chassis number 4:
sys/chassis-4/blade-8.

The response contains extensive information about the
blade server in slot 8 of chassis number 4, as
shown in Example 9-10.

Example 9-10 curl Command Response Containing
Information About a Compute Object


<configResolveDn dn="sys/chassis-4/blade-8"
cookie="1573019916/7c901636-c461-487e-bbd0-c74cd68c27be" response="yes">
  <outConfig>
    <computeBlade adminPower="policy" adminState="in-service" assetTag=""
      assignedToDn="" association="none" availability="available"
      availableMemory="49152" chassisId="4" checkPoint="discovered"
      connPath="A,B" connStatus="A,B" descr="" discovery="complete"
      discoveryStatus="" dn="sys/chassis-4/blade-8" fltAggr="0"
      fsmDescr="" fsmFlags="" fsmPrev="DiscoverSuccess" fsmProgr="100"
      fsmRmtInvErrCode="none" fsmRmtInvErrDescr="" fsmRmtInvRslt=""
      fsmStageDescr="" fsmStamp="2019-11-06T04:02:03.896" fsmStatus="nop"
      fsmTry="0" intId="64508" kmipFault="no" kmipFaultDescription=""
      lc="undiscovered" lcTs="1970-01-01T00:00:00.000" localId=""
      lowVoltageMemory="not-applicable" managingInst="A"
      memorySpeed="not-applicable" mfgTime="not-applicable"
      model="UCSB-B200-M4" name="" numOf40GAdaptorsWithOldFw="0"
      numOf40GAdaptorsWithUnknownFw="0" numOfAdaptors="1"
      numOfCores="8" numOfCoresEnabled="8" numOfCpus="2"
      numOfEthHostIfs="0" numOfFcHostIfs="0" numOfThreads="16"
      operPower="off" operPwrTransSrc="unknown" operQualifier=""
      operSolutionStackType="none" operState="unassociated"
      operability="operable"
      originalUuid="1b4e28ba-2fa1-11d2-0408-b9a761bde3fb"
      partNumber="" policyLevel="0" policyOwner="local" presence="equipped"
      revision="0" scaledMode="none" serial="SRV137" serverId="4/8"
      slotId="8" totalMemory="49152" usrLbl=""
      uuid="1b4e28ba-2fa1-11d2-0408-b9a761bde3fb"
      vendor="Cisco Systems Inc" vid=""/>
  </outConfig>
</configResolveDn>
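Individual attributes of the returned computeBlade managed object can be read the same way. This sketch parses a shortened version of the Example 9-10 response:

```python
import xml.etree.ElementTree as ET

# Shortened configResolveDn response from Example 9-10
RESPONSE = '''<configResolveDn dn="sys/chassis-4/blade-8" response="yes">
  <outConfig>
    <computeBlade dn="sys/chassis-4/blade-8" model="UCSB-B200-M4"
                  serial="SRV137" totalMemory="49152" numOfCpus="2"/>
  </outConfig>
</configResolveDn>'''

# Locate the computeBlade element and read a few of its attributes
blade = ET.fromstring(RESPONSE).find(".//computeBlade")
print(blade.get("model"), blade.get("serial"), blade.get("totalMemory"))
```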

While interacting with the Cisco UCS Manager XML API
this way is possible, it quickly becomes cumbersome.
The preferred way of working with the XML API is either
through the Cisco UCS PowerTool suite or the Cisco UCS
Python SDK.

The Cisco UCS PowerTool suite is a PowerShell module
that helps automate all aspects of Cisco UCS Manager.
The PowerTool cmdlets work on the Cisco UCS MIT. The
cmdlets can be used to execute read, create, modify, and
delete operations on all the managed objects in the MIT.
The Cisco UCS PowerTool suite enables easy integration
with existing IT management processes and tools. The
PowerTool suite can be downloaded for Windows via PS
Gallery and for Linux from
https://community.cisco.com/t5/cisco-developed-ucs-
integrations/cisco-ucs-powertool-core-suite-for-
powershell-core-modules-for/ta-p/3985798.
Cisco UCS Python SDK is a Python module that helps
automate all aspects of Cisco UCS management,
including server, network, storage, and hypervisor
management. The Cisco UCS Python SDK works on the
Cisco UCS Manager MIT, performing create, read,
modify, or delete actions on the managed objects in the
tree. Python versions 2.7 and higher and version 3.5 and
higher are supported. The Cisco UCS Python module for
UCS Manager is called ucsmsdk and can be installed
using pip by issuing the following command at the
command prompt: pip install ucsmsdk. As of this
writing, the current version of the ucsmsdk module is
0.9.8.

The Cisco UCS Python SDK provides a utility called
convert_to_ucs_python that gives administrators and
developers the option of recording all the interactions
with the Cisco UCS Manager GUI and saving them into
an XML file. Running this XML file through the
convert_to_ucs_python tool automatically generates
Python code corresponding to the actions that were
performed in the GUI. Using this process, data center
automation efforts can be sped up orders of magnitude,
and simple tasks such as creating a new VLAN or
complex tasks such as configuring a service policy
template can be automated within seconds.

Next, let’s explore the Cisco UCS Python SDK and see
how to connect to a Cisco UCS Manager instance,
retrieve a list of all the compute blades in the system, and
extract specific information from the returned data. The
sample Python code is built in Python 3.7.4 using version
0.9.8 of the ucsmsdk module.

First, the UcsHandle class is imported. An instance of
this class is used to connect to Cisco UCS Manager. The
Cisco UCS Manager IP address, username, and password
are passed in as parameters to the instance of the
UcsHandle class that is called HANDLE. Several
methods are available with the UcsHandle class. In this
script only three are used:

HANDLE.login(): This method is used to log in to Cisco UCS
Manager.

HANDLE.query_classid(): This method is used to query the MIT
for objects with a specific class ID.

HANDLE.logout(): This method is used to log out from the Cisco
UCS Manager.

The BLADES variable contains a list of all the
compute blades that are being managed by the
10.10.20.110 instance of Cisco UCS Manager. Within a
for loop, specific information regarding the DN, serial
number, administrative state, model number, and total
amount of memory for each blade is extracted and
displayed to the console. The Python script using the
Cisco UCS Manager SDK that accomplishes all of these
tasks looks as shown in Example 9-11.

Example 9-11 ucsmsdk Python Example


#! /usr/bin/env python
from ucsmsdk.ucshandle import UcsHandle

HANDLE = UcsHandle("10.10.20.110", "ucspe", "ucspe")

# Login into Cisco UCS Manager
HANDLE.login()

# Retrieve all the compute blades
BLADES = HANDLE.query_classid("ComputeBlade")

print('{0:23s}{1:8s}{2:12s}{3:14s}{4:6s}'.format(
    "DN",
    "SERIAL",
    "ADMIN STATE",
    "MODEL",
    "TOTAL MEMORY"))
print('-'*70)

# Extract DN, serial number, admin state,
# model, and total memory for each blade
for BLADE in BLADES:
    print('{0:23s}{1:8s}{2:12s}{3:14s}{4:6s}'.format(
        BLADE.dn,
        BLADE.serial,
        BLADE.admin_state,
        BLADE.model,
        BLADE.total_memory))

HANDLE.logout()

The results of running this script look as shown in Figure 9-11.

Figure 9-11 Output of the Python Script from Example 9-11

CISCO UCS DIRECTOR


Automation delivers the essential scale, speed, and
repeatable accuracy needed to increase productivity and
respond quickly to business requirements in a data
center environment. Cisco UCS Director replaces manual
configuration and provisioning processes with
orchestration in order to optimize and simplify delivery
of data center resources.

This open private-cloud platform delivers on-premises
infrastructure as a service (IaaS) from the core to the
edge of the data center. Automated workflows configure,
deploy, and manage infrastructure resources across
Cisco and third-party computing, network, and storage
resources and converged and hyperconverged
infrastructure solutions. Cisco UCS Director supports the
industry’s leading converged infrastructure solutions,
including NetApp FlexPod and FlexPod Express, EMC
VSPEX, EMC VPLEX, and VCE Vblock. It delivers unified
management and orchestration for a variety of
hypervisors across bare-metal and virtualized
environments.

A self-service portal, a modern service catalog, and more
than 2500 multivendor tasks enable on-demand access
to integrated services across data center resources. Cisco
UCS Director allows IT professionals and development
teams to order and manage infrastructure services on
demand.

Cisco UCS Director is supported by a broad ecosystem.
Third-party hardware and solution vendors can use the
southbound APIs and the SDKs provided with them to
develop integrations into the Cisco UCS Director
management model. Northbound APIs can be used by
DevOps and IT operations management tools to interact
with Cisco UCS Director and perform all the functions
provided by the solution in a programmable and
automated fashion.

Cisco UCS Director provides comprehensive visibility
and management of data center infrastructure
components. From a data center management
perspective, the following are some of the tasks that can
be performed using Cisco UCS Director:

Create, clone, and deploy service profiles and templates for all Cisco
UCS servers and compute applications.

Manage, monitor, and report on data center components such as Cisco
UCS domains or Cisco Nexus devices.

Monitor usage, trends, and capacity across a converged infrastructure
on a continuous basis.

Deploy and add capacity to converged infrastructures in a consistent,
repeatable manner.

Cisco UCS Director also enables the creation of
workflows that provide automation services. These
automation workflows can be published and made
available to the end users of the data center resources
through on-demand portals. Once built and validated,
these workflows perform the same way every time, no
matter who triggers them. A data center administrator
can run them, or role-based access control can be
implemented to enable users and customers to run these
workflows on a self-service basis. From an infrastructure
automation perspective, some of the use cases that Cisco
UCS Director can help automate include the following:

Virtual machine provisioning and lifecycle management.

Compute, network, and storage resources configuration and lifecycle
management.

Bare-metal server provisioning, including operating system
installation.

Cisco UCS Director supports Cisco ACI by offering
automation workflows that orchestrate the APIC
configuration and management tasks. It also supports
multitenancy and the ability to define contracts between
different container tiers.

Cisco UCS Director can be managed using Cisco
Intersight, which is covered later in this chapter. Cisco
UCS Director is a 64-bit appliance that uses the standard
Open Virtualization Format (OVF) template for VMware
vSphere and the Virtual Hard Disk (VHD) template for
Microsoft Hyper-V and can be downloaded from
www.cisco.com.

Next, let’s go over some essential concepts needed to
understand how the Cisco UCS Director orchestrator
works. First, there is the concept of a task. A task is an
atomic unit of work in Cisco UCS Director; it cannot be
decomposed into smaller actions and represents a single
action with inputs and outputs. Cisco UCS Director has a
task library that contains hundreds of predefined tasks,
such as an SSH command task (executing a command in
a Secure Shell session), an inventory collection task
(gathering information about available devices), a new
VM provisioning task (creating a new virtual machine),
and many more. In the event that there is no suitable
predefined task, the system offers the option of creating
custom tasks, as described later in this section.

The second concept is the workflow. A workflow is a
series of tasks arranged to automate a complex
operation. The simplest workflow contains a single task,
but workflows can contain any number of tasks.
Workflows are at the heart of Cisco UCS Director
orchestration. They automate processes of any level of
complexity. Workflows are built using the Workflow
Designer, which is a drag-and-drop interface. In
Workflow Designer, the tasks are arranged in sequence
and define inputs and outputs to those tasks. Loops and
conditionals can be implemented using flow of control
tasks. Every time a workflow is executed, a service
request is generated. Workflows can be scheduled for
later execution, and Cisco UCS Director stores details of
completed service requests. A service request can have
one of several states, depending on its execution status:
scheduled, running, blocked, completed, or failed.
Finally, libraries and catalogs are collections of
predefined tasks and workflows that can be used as
building blocks for more advanced workflows.

Let’s now explore the programmability and extensibility
of Cisco UCS Director. The Cisco UCS Director SDK is a
collection of technologies that enable developers to
extend the capabilities of Cisco UCS Director, access
Cisco UCS Director data, and invoke Cisco UCS
Director’s automation and orchestration operations from
any application. The Cisco UCS Director SDK includes
the Open Automation component. Scripting technologies
include the Cisco UCS Director PowerShell API, custom
tasks bundled in Cisco UCS Director script modules, and
the ability to write custom tasks using CloupiaScript, a
server-side JavaScript implementation.

The Cisco UCS Director SDK makes the following
possible:

Accessing Cisco UCS Director programmatically by using the Cisco UCS
Director REST API

Customizing Cisco UCS Director by creating custom workflow tasks

Extending Cisco UCS Director by using Cisco UCS Director Open
Automation to build connectors that support additional devices and
systems

Cisco UCS Director provides the Cisco UCS Director
Open Automation module to enable developers to
enhance the functionality of the Cisco UCS Director
appliance. Open Automation can be used to add modules
to Cisco UCS Director. A module is the topmost logical
entry point into Cisco UCS Director. In order to add or
extend the functionality of the system, a new module
must be developed and deployed on Cisco UCS Director.
A module developed using Open Automation behaves the
same way as any Cisco UCS Director built-in feature or
module. Open Automation is a Java SDK and framework
that contains all the resources needed to develop new
modules. Some of the use cases for Open Automation are
the following:

Adding the ability to control a new type of device with Cisco UCS
Director

Designing custom menus for displaying new devices or components

Taking inventory of new devices

Developing custom Cisco UCS Director reports and report actions

Developing tasks that can be used in workflows

Custom tasks enable developers to perform customized
operations on Cisco UCS Director resources. Custom
tasks are written using CloupiaScript, a language similar
to JavaScript. Custom tasks can be used like any other
task, including in workflows that orchestrate how the
system works. Script bundles are collections of custom
tasks that are included with each Cisco UCS Director
release and are used for a variety of specific applications.
Script bundles can be downloaded, and the custom tasks
that are contained in a bundle can be imported into Cisco
UCS Director. The main goal with custom tasks is to
expand the range of tasks that is available for use in
orchestration workflows.

Script modules are used to integrate third-party JARs
(Java Archives) and custom libraries with Cisco UCS
Director to add custom functionality to the Cisco UCS
Director user interface. Some script module operations
are already defined in Cisco UCS Director, such as
creating advanced controls to collect user input in
workflows and context workflow mapping, which enables
an administrator to attach workflows to custom actions
in a report in Cisco UCS Director. Script modules can be
exported and reused in different instances of Cisco UCS
Director. Script modules, although named similarly to
script bundles, have in fact a very different role. Script
bundles, as we’ve seen previously, are packaged
collections of workflow tasks that are released with Cisco
UCS Director. Script modules, on the other hand, make it
possible to add custom functionality to Cisco UCS
Director.

Cisco UCS Director PowerShell console is a Cisco-developed
application that provides a PowerShell
interface to the Cisco UCS Director REST API. The
console provides a set of PowerShell cmdlets wrapped in
a module to internally invoke the REST APIs over HTTP.
Each cmdlet performs a single operation. Cmdlets can be
chained together to accomplish more advanced
automation and data center management tasks. Figure 9-
12 shows the relationship between the PowerShell
console, Cisco UCS Director, and the infrastructure that
is being managed by it.

Figure 9-12 Cisco UCS Director PowerShell Console

Cisco UCS Director offers a REST API that enables
applications to consume or manipulate the data stored in
Cisco UCS Director. Applications use HTTP or HTTPS
requests from the REST API to perform
Create/Read/Update/Delete (CRUD) operations on
Cisco UCS Director resources. With an API call, a
developer can execute Cisco UCS Director workflows and
change the configuration of switches, adapters, policies,
and any other hardware and software components. The
API accepts and returns HTTP messages that contain
JavaScript Object Notation (JSON) or Extensible
Markup Language (XML) documents.

To access the Cisco UCS Director REST API, a valid user
account and an API access key are needed. Cisco UCS
Director uses the API access key to authenticate an API
request. The access key is a unique security access code
that is associated with a specific Cisco UCS Director user
account. In order to retrieve the API access key for a
specific user, you first log in to Cisco UCS Director with
that specific user account. Then hover the mouse over
the user icon in the top-right corner and select Edit My
Profile from the drop-down list. On the Edit My Profile
page, select Show Advanced Settings and retrieve the API
access key from the REST API Access Key area. There is
also an option to regenerate the access key, if necessary.

Within the user advanced settings is an option to enable
the developer menu. By enabling the developer menu,
access to the REST API browser and the Report
Metadata features is turned on. The REST API browser
becomes visible under the Orchestration tab of Cisco
UCS Director and provides API information and API
code generation capabilities for all available APIs. The
Report Metadata option becomes available on all the
pages of the Cisco UCS Director GUI; when selected, it
returns the API code that the GUI is using to retrieve the
information that is displayed to the user in that specific
page. This code includes a complete URL that is ready to
paste into a browser to send the request to Cisco UCS
Director. Both the REST API browser and the Report
Metadata features are extremely valuable to developers
as they provide ready-to-use sample code and API calls
to all the resources available in Cisco UCS Director.
Figure 9-13 shows the Cisco UCS Director REST API
browser web interface.
Figure 9-13 Cisco UCS Director REST API Browser

Each REST API request must be associated with an HTTP header called X-Cloupia-Request-Key, with its
value set to the REST API access key retrieved
previously. The REST API request must contain a valid
URL of the following format:

https://Cisco_UCS_Director/app/api/rest?formatType=json&opName=operationName&opData=operationData

where

Cisco_UCS_Director: This is the IP address or hostname of the Cisco UCS Director VM.

formatType: This can be either JSON or XML; it is JSON in this case. (Only JSON is discussed throughout the rest of this chapter.)

opName: This is the API operation name that is associated with the
request (for example, userAPIGetMyLoginProfile), as explored later in
this chapter.

opData: This contains the parameters or the arguments associated with the operation. Cisco UCS Director uses JSON encoding for the
parameters. If an operation doesn’t require any parameters, the empty
set {} should be used. When building the URL, escape characters
should be encoded as appropriate.
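The URL format just described can also be assembled programmatically rather than by hand. The following sketch (using a hypothetical host address and the userAPIGetMyLoginProfile operation explored later in this chapter) lets the Python standard library handle the percent-escaping of the opData braces:

```python
import urllib.parse

def build_ucsd_url(host, op_name, op_data="{}"):
    # formatType, opName, and opData are passed as query parameters;
    # urlencode() percent-escapes the braces in opData as required
    query = urllib.parse.urlencode({
        "formatType": "json",
        "opName": op_name,
        "opData": op_data,
    })
    return "https://{}/app/api/rest?{}".format(host, query)

# Hypothetical Cisco UCS Director host, for illustration only
print(build_ucsd_url("10.10.10.66", "userAPIGetMyLoginProfile"))
```

The empty parameter set {} is encoded as %7B%7D in the resulting URL.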

Next, let’s explore the Cisco UCS Director REST API by using curl to construct API calls. Programming guides
and complete documentation of the Cisco UCS REST API
can be found at the following link:
https://www.cisco.com/c/en/us/support/servers-
unified-computing/ucs-director/products-
programming-reference-guides-list.html. The Cisco
DevNet team makes available a reservable sandbox
called “UCS Management” for learning purposes. This
sandbox contains an installation of Cisco UCS Director
and is available at https://developer.cisco.com/sandbox.
Cisco UCS Director version 6.7 has been used in the
following interactions with the REST API. The operation
with the name userAPIGetMyLoginProfile is used to
retrieve the profile of the user with the specific access key
that is passed in the request in order to identify the
group to which the user belongs. The curl command for
this operation looks as shown in Example 9-12.

Example 9-12 curl Command to Retrieve the User Profile in Cisco UCS Director

curl -k -L -X GET \
-g 'https://10.10.10.66/app/api/rest?formatType=json&opName=userAPIGetMyLoginProfile&opData={}' \
-H 'X-Cloupia-Request-Key: 8187C34017C3479089C66678F32775FE'

For this request, the -g parameter disables the curl check for nested braces {}, the -k or --insecure parameter allows curl to proceed and operate even if the
server uses self-signed SSL certificates, and the -L
parameter allows curl to follow the redirects sent by the
server. The URL for the request follows the requirements
discussed previously, using the /app/api/rest endpoint
to access the REST API and then passing the
formatType, opName, and opData as parameters.
The HTTP header for authentication is named X-
Cloupia-Request-Key and contains the value of the
access key for the admin user for the Cisco UCS Director
instance that runs on the server with IP address
10.10.10.66. The response from this instance of Cisco
UCS Director looks as shown in Example 9-13.

The operation name is contained in the response and is indeed userAPIGetMyLoginProfile, serviceName
specifies the name of the back-end service (which is in
most cases InfraMgr), serviceResult contains a set of
name/value pairs or a JSON object if the request was
successful, and serviceError reports the outcome: if the request succeeds, serviceError is set to null, and if the operation fails, it contains the error message.

Example 9-13 REST API Response Containing User Profile Information

{
  "opName" : "userAPIGetMyLoginProfile",
  "serviceName" : "InfraMgr",
  "serviceResult" : {
    "email" : null,
    "groupName" : null,
    "role" : "Admin",
    "userId" : "admin",
    "groupId" : 0,
    "firstName" : null,
    "lastName" : null
  },
  "serviceError" : null
}

As mentioned previously, Cisco UCS Director tasks and workflows can have any number of input and output
variables. In order to retrieve the inputs for a specific
workflow, the userAPIGetWorkflowInputs
operation can be used with the name of the desired
workflow in the param0 field. Cisco UCS Director
comes by default with a large number of predefined
workflows. One of them is the “VMware OVF
Deployment,” which, as the name implies, can deploy
new VMware virtual machines based on OVF images.
The curl command in Example 9-14 contains the API
call to retrieve all the inputs for this workflow.

Example 9-14 curl Command to Retrieve Workflow Inputs in Cisco UCS Director

curl -k -L -X GET \
-g 'https://10.10.10.66/app/api/rest?formatType=json&opName=userAPIGetWorkflowInputs&opData={param0:%22VMware%20OVF%20Deployment%22}' \
-H 'X-Cloupia-Request-Key: 8187C34017C3479089C66678F32775FE'

Notice that the name of the workflow is passed in the API call in the param0 parameter and also that VMware
OVF Deployment is encoded, using single quotation
marks and spaces between the words. Example 9-15
shows a snippet of the response.

The response contains fields similar to those in the response in Example 9-13. opName is confirmed as
userAPIGetWorkflowInputs, the back-end service
that responded to the request is once again InfraMgr,
serviceError is null (indicating that there were no
errors in processing the request), and serviceResult
contains a list called details, which includes all the
inputs and their properties for the VMware OVF
Deployment workflow.

Example 9-15 curl Command Response Containing Workflow Inputs in Cisco UCS Director

{
"serviceResult" : {
"details" : [
{
"inputFieldValidator" :
"VdcValidator",
"label" : "vDC",
"type" : "vDC",
"inputFieldType" : "embedded-lov",
"catalogType" : null,
"isOptional" : false,
"name" : "input_0_vDC728",
"isMultiSelect" : false,
"description" : "",
"isAdminInput" : false
},
{
"isAdminInput" : false,
"description" : "",
"label" : "OVF URL",
"type" : "gen_text_input",
"isMultiSelect" : false,
"isOptional" : false,
"inputFieldType" : "text",
"catalogType" : null,
"name" : "input_1_OVF_URL465",
"inputFieldValidator" : null
},
...omitted output
]
},
"serviceName" : "InfraMgr",
"opName" : "userAPIGetWorkflowInputs",
"serviceError" : null}

CISCO INTERSIGHT
The Cisco Intersight platform provides intelligent cloud-
powered infrastructure management for Cisco UCS and
Cisco HyperFlex platforms. Cisco UCS and Cisco
HyperFlex use model-based management to provision
servers and the associated storage and networking
automatically. Cisco Intersight works with Cisco UCS
Manager and Cisco Integrated Management Controller
(IMC) to bring the model-based management of Cisco
compute solutions into one unified management
solution. Cisco Intersight offers flexible deployment
options either as software as a service (SaaS) on
https://intersight.com or running a Cisco Intersight
virtual appliance on premises. Some of the benefits of
using Cisco Intersight are the following:

It simplifies Cisco UCS and Cisco HyperFlex management with a single management platform.

It makes it possible to scale across data center and remote locations without additional complexity.

It automates the generation and forwarding of technical support files to the Cisco Technical Assistance Center to accelerate the troubleshooting process.

Full programmability and automation capabilities are available through a REST API interface.

A streamlined upgrade process is available for standalone Cisco UCS servers.

Getting started with Cisco Intersight involves the following steps:

Step 1. Log in to https://intersight.com with a Cisco ID account.

Step 2. Claim a device for the account. Endpoint
devices connect to the Cisco Intersight portal
through a device connector that is embedded in
the management controller of each system.

Step 3. (Optional) Add additional users to the new account. Several roles are available, including
read-only and admin roles. Custom roles can be
created, if needed.

Cisco Intersight includes a REST API interface built on top of the OpenAPI specification. The API
documentation, API schemas, and SDKs can be found at
https://intersight.com/apidocs. At this writing, Python
and PowerShell SDKs are available for download at the
previous link. The API accepts and returns messages that
are encapsulated in JSON documents and are sent over
HTTPS. The Intersight API is a programmatic interface
to the Management Information Model that is similar to
Cisco ACI and Cisco UCS Manager. Just like Cisco ACI
and Cisco UCS Manager MIMs, the Cisco Intersight MIM
is composed of managed objects. Managed objects or
REST API resources are uniquely identified by URI
(uniform resource identifier) or, as seen earlier in this
chapter, distinguished name (DN). Examples of managed
objects include Cisco UCS servers; Cisco UCS fabric
interconnects; Cisco HyperFlex nodes and clusters;
server, network, and storage policies; alarms; statistics;
users; and roles. Cisco Intersight managed objects are
represented using a class hierarchy specified in the
OpenAPI specification. All the API resources are
descendants of the mo.Mo class. Table 9-2 shows the
properties that are common to all managed objects.

Table 9-2 Common Properties for All Managed Objects in Cisco Intersight

Property Name    Description

Moid             A unique identifier of the managed object instance.

ObjectType       The fully qualified class name of the managed object.

AccountMoid      The Intersight account ID for the managed object.

CreateTime       The time when the managed object was created.

ModTime          The time when the managed object was last modified. ModTime is automatically updated whenever at least one property of the managed object is modified.

Owners           An array of owners, which represents effective ownership of the object.

Tags             An array of tags that allow the addition of key/value metadata to managed objects.

Ancestors        An array containing the MO references of the ancestors in the object containment hierarchy.

Parent           The direct ancestor of the managed object in the containment hierarchy.
Every managed object has a unique Moid identifier assigned when the resource is created. The Moid is used
to uniquely distinguish a Cisco Intersight resource from
all other resources. The Moid is a 12-byte string set
when a resource is created.

Each managed object can be addressed using a unique uniform resource identifier (URI) that includes the
Moid. The URI can be used in any HTTPS request to
address the managed object. A generic Cisco Intersight
URI is of the following form:
https://intersight.com/path[?query]

The URI of a managed object includes the following:

https: The HTTPS protocol

intersight.com: The Cisco Intersight hostname

path: The path, organized in hierarchical form

query: An optional query after the question mark and typically used to
limit the output of the response to only specific parameters

For example, the URI of an object with Moid 48601f85ae74b80001aee589 could be:

https://intersight.com/api/v1/asset/DeviceRegistrations/48601f85ae74b80001aee589
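Composing such a URI from a resource path and a Moid follows a simple pattern; this short sketch (reusing the Moid from the example above) illustrates it:

```python
def intersight_uri(resource_path, moid, host="intersight.com"):
    # A managed object's URI is the REST base path plus its Moid
    return "https://{}/api/v1/{}/{}".format(host, resource_path, moid)

print(intersight_uri("asset/DeviceRegistrations",
                     "48601f85ae74b80001aee589"))
```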

Every managed object in the Cisco Intersight information model supports tagging. Tagging is used to
categorize objects by a certain common property, such as
owner, geographic location, or environment. Tags can be
set and queried through the Intersight API. Each tag
consists of a key and an optional value. Both the key and
the value are of type string.

Managed objects may include object relationships, which are dynamic links to REST resources. Cisco Intersight
uses Hypermedia as the Engine of Application State
(HATEOAS) conventions to represent object
relationships. Object relationships can be links to self or
links to other managed objects, which, taken as a whole,
form a graph of objects. By using relationships as a first-
class attribute in the object model, many classes of
graphs can be represented, including trees and cyclic or
bipartite graphs.

Intersight provides a rich query language based on the OData standard. The query language is represented
using URL query parameters for GET results. Several
types of data are supported with the Intersight queries,
including string, number, duration, date and time, and
time of day.
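Because the query language rides on URL query parameters, a query can be built with ordinary URL-encoding tools. The sketch below uses OData-style parameters such as $filter, $select, and $top; the resource path, serial number, and field names are illustrative placeholders, not values from this chapter:

```python
from urllib.parse import urlencode

# Illustrative OData-style query: filter by serial number, project
# three fields, and cap the result set at 10 objects
params = urlencode({
    "$filter": "Serial eq 'FDO21520TOT'",
    "$select": "Dn,Model,Serial",
    "$top": 10,
})
url = "https://intersight.com/api/v1/compute/PhysicalSummaries?" + params
print(url)
```

Note that urlencode() escapes the leading $ of each parameter as %24, which the web service decodes back before evaluating the query.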
When a client sends an API request, the Intersight web
service must identify and authenticate the client. The
Intersight web service supports two authentication
methods:

API keys

Session cookies

An Intersight API key is composed of a keyId and a keySecret. The API client uses the API key to
cryptographically sign each HTTP request sent to the
Intersight web service. The “signature” parameter is a
base 64–encoded digital signature of the message HTTP
headers and message content. API keys are generated in
the Settings > API section of the Intersight web interface.
As a best practice, it is recommended to generate
separate API keys for each client application that needs
access to the API.

Cookies are used primarily by the Intersight GUI client running in a browser. When accessing the Intersight web
service, end users must first authenticate to
https://sso.cisco.com. When authentication is successful,
sso.cisco.com sends a signed SAML assertion to the
Intersight web service, and Intersight generates a session
cookie with a limited time span validity. The client must
send the session cookie in each API request.

Included with the Cisco Intersight REST API documentation at https://intersight.com/apidocs are the
API reference documentation and an embedded REST
API client. Figure 9-14 shows the web interface for the
Cisco Intersight API reference documentation. In this
figure, the Get a list of
‘equipmentDeviceSummary’ instances API call is
selected. The query parameters that this specific API call
supports are displayed, as are the API URI for the
endpoint that will return the list of all the devices that
are being managed by Cisco Intersight for this specific
account. Much as with Postman, if the Send button is
clicked, the API call is triggered, and the response is
displayed in the Response Text window.

Figure 9-14 Cisco Intersight REST API Reference Documentation

Under the Downloads section of https://intersight.com/apidocs, the Cisco Intersight
Python and PowerShell SDKs can be downloaded. The
Python SDK covers all the functionality of the REST API
and offers Python classes and methods that can be used
to simplify Cisco Intersight automation projects. The
Python sample code in Example 9-16 was developed
using the Intersight module version 1.0 and Python 3.7.4.
This Python code replicates the earlier REST API call
equipmentDeviceSummary, which returns a list of
all the devices that are being managed by Cisco
Intersight for a specific account.

Example 9-16 Intersight Python Module Example

#! /usr/bin/env python
from intersight.intersight_api_client import IntersightApiClient
from intersight.apis import equipment_device_summary_api

# Create an Intersight API client instance
API_INSTANCE = IntersightApiClient(
    host="https://intersight.com/api/v1",
    private_key="/Path_to/SecretKey.txt",
    api_key_id="your_own_api_key_id")

# Create an equipment device handle
D_HANDLE = equipment_device_summary_api.EquipmentDeviceSummaryApi(API_INSTANCE)

DEVICES = D_HANDLE.equipment_device_summaries_get().results

print('{0:35s}{1:40s}{2:13s}{3:14s}'.format(
    "DN",
    "MODEL",
    "SERIAL",
    "OBJECT TYPE"))
print('-' * 105)

# Loop through devices and extract data
for DEVICE in DEVICES:
    print('{0:35s}{1:40s}{2:13s}{3:14s}'.format(
        DEVICE.dn,
        DEVICE.model,
        DEVICE.serial,
        DEVICE.source_object_type))

The first two lines of Example 9-16 use the import keyword to bring in and make available for later
consumption the IntersightApiClient Python class
that will be used to create a connection to the Cisco
Intersight platform and the
equipment_device_summary_api file, which
contains Python objects that are useful for retrieving
equipment that is being managed by Intersight. Every
Cisco Intersight REST API endpoint has a corresponding
Python file containing classes and methods that can be
used to programmatically process those REST API
endpoints. Next, an instance of the
IntersightApiClient class is created in order to
establish a connection and have a hook back to the Cisco
Intersight platform. Three parameters need to be passed
in to instantiate the class:

host: This parameter specifies the Cisco Intersight REST API base
URI.

private_key: This parameter specifies the path to the file that contains the keySecret of the Intersight account that will be used to sign in.

api_key_id: This parameter contains the keyId of the same Intersight account. As mentioned previously, both the keyId and keySecret are generated in the Intersight web interface, under Settings > API keys.

Next, an instance of the EquipmentDeviceSummaryApi class is created and
stored in the D_HANDLE variable. This Python class
maps into the
/api/v1/equipment/DeviceSummaries REST API
resource. The D_HANDLE variable contains the handle
to that REST API resource. The
equipment_device_summaries_get method that is
available with the EquipmentDeviceSummaryApi
class is invoked next, and the results are stored in the
DEVICES variable, which contains a complete list of all
the equipment that is being managed by Cisco Intersight
for the user account with the keyId and keySecret with
which the initial connection was established. The for
loop iterates over the devices in the list and extracts for
each one the distinguished name, model, serial number,
and object type and displays this information to the
console. The output of this Python script for a test user
account looks as shown in Figure 9-15.
Figure 9-15 Output of the Python Script from
Example 9-16

EXAM PREPARATION TASKS


As mentioned in the section “How to Use This Book” in
the Introduction, you have a couple of choices for exam
preparation: the exercises here, Chapter 19, “Final
Preparation,” and the exam simulation questions on the
companion website.

REVIEW ALL KEY TOPICS


Review the most important topics in this chapter, noted
with the Key Topic icon in the outer margin of the page.
Table 9-3 lists these key topics and the page number on
which each is found.

Table 9-3 Key Topics

Key Topic Element    Description                                                       Page Number

Paragraph            Cisco Nexus 9000 switches                                         216
Paragraph            Cisco ACI fabric                                                  217
Paragraph            The configuration of the ACI fabric                               218
Paragraph            Physical and logical components of the Cisco ACI fabric           219
Paragraph            Cisco ACI fabric policies                                         220
Paragraph            Endpoint groups (EPGs)                                            222
Paragraph            The APIC REST API URI                                             223
Paragraph            Tools and libraries for Cisco ACI automation                      227
Paragraph            UCS Manager and the server lifecycle                              230
Paragraph            The UCS Manager programmatic interface                            231
Paragraph            The Cisco software emulator                                       234
Paragraph            The Cisco UCS Python SDK                                          237
Paragraph            Cisco UCS Director                                                240
Paragraph            The Cisco UCS Director orchestrator                               240
Paragraph            The programmability and extensibility of the Cisco UCS Director   241
Paragraph            Accessing the Cisco UCS Director REST API                         242
Paragraph            REST API requests and the X-Cloupia-Request-Key header            243
Paragraph            The Cisco Intersight REST API interface                           247
Paragraph            Client API requests and Intersight                                249
Paragraph            The Cisco Intersight REST API documentation                       249

DEFINE KEY TERMS


Define the following key terms from this chapter and
check your answers in the glossary:

Application Centric Infrastructure (ACI)


Application Policy Infrastructure Controller (APIC)
Management Information Model (MIM)
management information tree (MIT)
managed object (MO)
distinguished name (DN)
virtual routing and forwarding (VRF) instance
endpoint group (EPG)
Unified Computing System (UCS)
Chapter 10

Cisco Collaboration Platforms and APIs
This chapter covers the following topics:
Introduction to the Cisco Collaboration Portfolio: This section
introduces the collaboration portfolio by functionality and provides an
overview of the product offerings.

Webex Teams API: This section introduces Webex Teams and the
rich API set for managing and creating applications, integrations, and
bots.

Cisco Finesse: This section provides an overview of Cisco Finesse and API categories, and it provides sample code and introduces gadgets.

Webex Meetings APIs: This section provides an introduction to the high-level API architecture of Webex Meetings along with the Meetings XML API for creating, updating, and deleting meetings.

Webex Devices: This section provides an overview of the Webex Devices portfolio, xAPI, and sample applications to turn on the presence detector on devices.

Cisco Unified Communications Manager: This section provides an overview of Cisco Call Manager and Cisco Administrative XML (AXL), and it shows sample code for using the SDK.

Every day, millions of people rely on Cisco collaboration and Webex solutions to collaborate with
their teams, partners, and customers. These products
help them work smarter, connect across boundaries,
and drive new innovative ideas forward. Cisco
products offer secure, flexible, seamless, and
intelligent collaboration. This chapter introduces the
various products as well as how to integrate these
collaboration products via APIs. It covers the
following:
Cisco Webex Teams

Cisco Webex Devices

Cisco Unified Communications Manager

Cisco Finesse

“DO I KNOW THIS ALREADY?” QUIZ


The “Do I Know This Already?” quiz allows you to assess
whether you should read this entire chapter thoroughly
or jump to the “Exam Preparation Tasks” section. If you
are in doubt about your answers to these questions or
your own assessment of your knowledge of the topics,
read the entire chapter. Table 10-1 lists the major
headings in this chapter and their corresponding “Do I
Know This Already?” quiz questions. You can find the
answers in Appendix A, “Answers to the ‘Do I Know This
Already?’ Quiz Questions.”

Table 10-1 “Do I Know This Already?” Section-to-


Question Mapping

Foundation Topics SectionQuestions

Introduction to the Cisco Collaboration Portfolio 1

Webex Teams API 2–4

Cisco Finesse 5, 6

Webex Meetings APIs 7

Webex Devices 8, 9

Cisco Unified Communications Manager 10

Caution
The goal of self-assessment is to gauge your mastery of
the topics in this chapter. If you do not know the
answer to a question or are only partially sure of the
answer, you should mark that question as wrong for
purposes of self-assessment. Giving yourself credit for
an answer that you correctly guess skews your self-
assessment results and might provide you with a false
sense of security.

1. Which of the following are part of the Cisco collaboration portfolio? (Choose three.)
1. Video Calling
2. Bots
3. Remote Expert
4. Connected Mobility Experience

2. How does Webex Teams allow you to access APIs? (Choose three.)
1. Integrations
2. Bots
3. Drones
4. Personal access tokens

3. Guest users of Webex Teams authenticate with guest tokens, which use _____.
1. Base64
2. No token
3. JWT
4. Sessions

4. Which of the following use webhooks in Webex Teams?
1. Bots
2. Guests
3. Nobody
4. Integrations

5. True or false: The Finesse desktop application is completely built using APIs.
1. False
2. True

6. Finesse implements the XMPP specification. The purpose of this specification is to allow the XMPP
server (for Notification Service) to get information
published to XMPP topics and then to send XMPP
events to entities subscribed to the topic. The
Finesse Notification Service then sends XMPP over
_______ messages to agents that are subscribed to
certain XMPP nodes.
1. MQTT
2. BOSH
3. HTTP
4. None of the above

7. Which of the following enables hosts/users to update the information for a scheduled meeting that
they are able to edit?
1. SetMeeting
2. ListMeeting
3. CreateMeeting
4. Layered systems

8. Which of the following is the application programming interface (API) for collaboration
endpoint software?
1. MQTT
2. TAPI
3. DevNet
4. xAPI

9. xAPI on a device can be accessed via which of the following protocol methods? (Choose all that apply.)
1. SSH
2. FTP
3. HTTP
4. Websocket

10. Which of the following provides a mechanism for inserting, retrieving, updating, and removing data
from Cisco Unified Communications Manager?
1. Session Initiation Protocol
2. Administration XML
3. Skinny
4. REST API

FOUNDATION TOPICS
INTRODUCTION TO THE CISCO
COLLABORATION PORTFOLIO
Cisco’s collaboration portfolio is vast, but it can be
logically broken down into essentially four high-level
components:

Unified Communications Manager: This product unifies voice, video, data, and mobile apps.

Unified Contact Center: This product provides customers with personalized omnichannel experiences.

Cisco Webex: This conferencing solution enables teamwork with intuitive solutions that bring people together.

Cisco collaboration endpoints: These devices provide better-than-being-there experiences via new devices.

Figure 10-1 depicts the Cisco collaboration portfolio and the various products that fall into each of the categories.

Figure 10-1 Rich Collaboration Portfolio

Unified Communications
People work together in different ways. And they use a
lot of collaboration tools: IP telephony for voice calling,
web and video conferencing, voicemail, mobility, desktop
sharing, instant messaging and presence, and more.

Cisco Unified Communications solutions deliver seamless user experiences that help people work together
more effectively—anywhere and on any device. They
bring real-time communication from your phone system
and conferencing solutions together with messaging and
chat and integrate with everyday business applications
using APIs.

Unified Communications solutions are available as on-premises software, as partner-hosted solutions, and as a
service (UCaaS) from cloud providers.

The following sections describe the products available under the Unified Communications umbrella.

Cisco Webex Teams


Cisco Webex Teams brings people and work together in a
single reimagined workspace in and beyond the meeting.
Webex Teams allows you to connect with people (inside
and outside your organization) and places all your tools
right in the center of your workflow. It breaks down the
silos that exist for some across the collaboration
experience.

Both Webex Teams and Webex Meetings have a Join button you can click to easily join a meeting. This helps
ensure that meetings start on time and it streamlines the
process of joining a meeting. Cisco Webex Teams also
has features that will help you make decisions on which
meetings to prioritize and when to join them:

Seeing invitee status/participants: Every invitee can see who has accepted/declined the meeting and who’s in the meeting live before
even joining. There is no need to switch back and forth between your
calendar and the meeting application.

Instantly switching between meetings: If you need to move from one meeting to another, simply leave the meeting with one click and
join the other one with another click.

Easily informing attendees when you are running late: In Webex Teams, you can message people in the meeting to keep them
posted on your status. Gone are the days of sending an email that no
one will read because they are in the meeting.

Webex Teams provides a space for a team to start working—to discuss issues and share content—before a
meeting. When the meeting starts, all the discussion and
work from before the meeting are available right there in
the meeting space. You can simply share content from
any device in the meeting—directly from the space or
from another device, wirelessly and quickly.

An added benefit of joining a meeting through Webex Teams is that during the meeting, everyone is an equal
participant and can mute noisy participants, record the
meeting, and do other meeting management tasks
without having to disrupt the flow of the meeting.

Cisco Webex Calling


Webex Calling is a cloud-based phone system that is
optimized for midsize businesses and provides the
essential business calling capabilities you are likely to
need. With Webex Calling, there’s no need to worry
about the expense and complexity of managing phone
system infrastructure on your premises anymore. Cisco
takes care of the Webex Cloud so you can focus on what
matters most.

You can choose from a wide range of Cisco IP phones to make and receive calls. Enjoy the calling features you are
used to from a traditional phone system to help run your
organization smoothly and never miss a call. If you are a
mobile worker, or if you are out of the office, you can
make and receive calls on your smartphone, computer,
or tablet, using the Cisco Webex Teams app.

Webex Calling seamlessly integrates with Webex Teams and Webex Meetings, so you can take collaboration
inside and outside your organization to a new level.
Customers and business partners can make high-
definition audio and video calls from the office, from
home, or on the go. Screen sharing, file sharing, real-
time messaging, and whiteboarding can turn any call
into a productive meeting. You can also add Cisco Webex
Board, Room, or Desk Device to get the most out of
Webex Teams and Meetings and improve teamwork.
Cisco Webex Calling delivers all the features of a
traditional PBX through a monthly subscription service.
Important qualities include the following:

An advanced set of enterprise-grade PBX features

A rich user experience that includes the Cisco Webex Calling app, for
mobile and desktop users, integrated with the Cisco Webex Teams
collaboration app

Support for an integrated user experience with Cisco Webex Meetings and Webex Devices, including Cisco IP Phones 6800, 7800, and 8800
Series desk phones and analog ATAs

Delivery from a set of regionally distributed, geo-redundant data centers around the globe

Service that is available across a growing list of countries in every region

Protection of existing investment in any on-premises Cisco Unified Communications Manager (Unified CM) licenses, through Cisco
Collaboration Flex Plan

A smooth migration to the cloud at your pace, through support of cloud and mixed cloud and on-premises deployments

Cisco Unified Communications Manager (Unified CM)


Cisco Unified CM, often informally referred to as Call
Manager, is the core of Cisco’s collaboration portfolio. It
delivers people-centric user and administrative
experiences and supports a full range of collaboration
services, including video, voice, instant messaging and
presence, messaging, and mobility on Cisco as well as
third-party devices. Unified CM is the industry leader in
enterprise call and session management platforms, with
more than 300,000 customers worldwide and more than
120 million Cisco IP phones and soft clients deployed.

Unified Contact Center


The Cisco Unified Contact Center (Unified CC) includes Cisco Finesse, a next-generation agent and supervisor desktop designed to provide the optimal user experience for agents and a collaborative experience for the various communities that interact with a customer service organization. It is 100% browser based, so agent client machines do not need any Unified CC–specific applications. Finesse also helps improve the customer experience and offers a user-centric design to enhance customer care representative satisfaction.

Cisco Finesse provides the following:

An agent and supervisor desktop that integrates traditional contact center functions into a thin-client desktop.

A 100% browser-based desktop implemented through a Web 2.0 interface; no client-side installations are required.

A single customizable interface that gives customer care providers quick and easy access to multiple assets and information sources.

Open Web 2.0 APIs that simplify the development and integration of value-added applications and minimize the need for detailed desktop development expertise.

Cisco Webex
Cisco Webex is a conferencing solution that allows
people to collaborate more effectively with each other
anytime, anywhere, and from any device. Webex online
meetings are truly engaging with high-definition video.
Webex makes online meetings easy and productive with
features such as document, application, and desktop
sharing; integrated audio/video; active speaker
detection; recording; and machine learning features.

Cisco Collaboration Endpoints


To support and empower the modern workforce, Cisco
has a “no-compromise” collaboration solution for every
room, on every desk, in every pocket, and into every
application. Its portfolio of collaboration devices
includes everything from voice to collaboration room
devices for small businesses to very large enterprises.
The majority of the portfolio has been redesigned to
make collaboration more affordable, accessible, and easy
to use. Many Cisco collaboration endpoints have received
the Red Dot Award for excellence in design.
The Cisco collaboration endpoint portfolio includes the
following devices:

Room-based devices: These include video systems and codecs in the Cisco TelePresence MX, SX, and DX Series.

Cisco Webex Board: Cisco Webex Board allows you to wirelessly present, whiteboard, and video or audio conference for team collaboration.

Webex Share: The new Webex Share device allows easy, one-click wireless screen sharing from the Webex Teams software client to any external display with an HDMI port.

Collaboration desktop video devices: A range of options are available, from entry-level HD video up to the lifelike DX video collaboration devices.

IP Phone portfolio: Cisco IP Phone devices deliver superior voice communications, with select endpoints supporting HD video and a range of options that offer choices for businesses of various sizes and with unique needs and budgets. The complete portfolio also supports specialty use cases, such as in-campus wireless with WLAN handsets and an audio-conferencing endpoint for small to large conference rooms. The goal of the IP Phone portfolio is to deliver the highest-quality audio communications with affordable, scalable options for desktop video that are easy to use and administer so you can collaborate effectively and achieve your desired business results.

Cisco Headset 500 Series: These headsets deliver surprisingly vibrant sound for open workspaces. Now users can stay focused in noisy environments with rich sound, exceptional comfort, and proven reliability. The headsets offer a lightweight form factor designed for workers who spend a lot of time collaborating in contact centers and open workspaces. With the USB headset adapter, the 500 Series delivers an enhanced experience, including automatic software upgrades, in-call presence indicator, and audio customizations that allow you to adjust how you hear the far end and how they hear you.

Cisco now provides more intelligent audio, video, and usability, offering new ways for users to bring their personal devices, such as smartphones or tablets, into a phone or video meeting to further enhance the collaboration experience.

API Options in the Cisco Collaboration Portfolio


The Cisco collaboration portfolio is rich in features. APIs
are used to integrate and scale these products in building
various applications. The following sections cover four
categories of collaboration APIs:
Webex Meetings APIs

Webex Teams

Contact Center (Finesse)

Endpoints

WEBEX TEAMS API

Webex Teams makes it easy for everyone on a team to be in sync. Conversations in Webex Teams take place in virtual meeting rooms called spaces. Some spaces live for a few hours, while others become permanent fixtures of a team’s workflow. Webex Teams allows conversations to flow seamlessly between messages, video calls, and real-time whiteboarding sessions.

Getting started with the Webex APIs is easy. These APIs allow developers to build integrations and bots for Webex Teams. APIs also allow administrators to perform administrative tasks.

Webex APIs provide applications with direct access to the Cisco Webex platform, giving you the ability to do the following:

Administer the Webex Teams platform for an organization, add user accounts, and so on

Create a Webex Teams team, space, and memberships

Search for people in the company

Post messages in a Webex Teams space

Get Webex Teams space history or be notified in real time when new messages are posted by others

Figure 10-2 shows an overall picture of how Webex Teams is organized. Only an admin has the ability to add a new organization or a new user account for the organization. An organization is made up of teams, and a team can have one or more rooms. A person is an end user who can be added to a room. People communicate with each other or with everyone else in a room via messages.

Figure 10-2 Webex Teams Building Blocks: Organizations, Teams, Rooms, People, and Messages

API Authentication
There are four ways to access the Webex Teams APIs:

Personal access tokens

Integrations

Bots

Guest issuers

The Webex Representational State Transfer (REST) API uses the Cisco Webex Common Identity (CI) account. Once you create an account to join Webex Teams, you have access to the Common Identity account, which allows you to use the APIs and SDKs.

Personal Access Tokens


When making requests to the Webex REST API, an authentication HTTP header is used to identify the requesting user. This header must include an access token, which may be a personal access token from the developer site (https://developer.webex.com), a bot token, or an OAuth token from an integration or a guest issuer application. The API reference uses your personal access token, which you can use to interact with the Webex API as yourself. This token has a short lifetime (it lasts only 12 hours after logging in to the site), so it shouldn't be used outside of app development. Figure 10-3 shows the bearer token as obtained from the developer portal.

Figure 10-3 Webex Teams: Getting a Personal Access Token
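The header itself is simple to construct. The following is a minimal sketch using the Python requests library; the /people/me endpoint returns the profile of whoever owns the token, and the token value is a placeholder you would copy from the developer portal:

```python
import requests

API_BASE = "https://webexapis.com/v1"

def auth_header(token):
    """Webex expects the token in a Bearer Authorization header."""
    return {"Authorization": "Bearer " + token}

def whoami(token):
    """Return the profile of the user that owns the access token."""
    response = requests.get(API_BASE + "/people/me", headers=auth_header(token))
    response.raise_for_status()
    return response.json()

# Example (requires a valid token from developer.webex.com):
# print(whoami("YOUR_PERSONAL_ACCESS_TOKEN")["displayName"])
```

The same Authorization header shape is used by every Webex API call shown in this chapter.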

Integrations
To perform actions on behalf of someone else, you need a separate access token that you obtain through an OAuth authorization grant flow. OAuth is supported directly by the platform. With a few easy steps, you can have a Webex Teams user grant permission to your app and perform actions on that person’s behalf. Figure 10-4 shows how third-party apps can access the platform.

Figure 10-4 Webex Teams: Third-Party Integrations

You use an integration to request permission to invoke the Webex REST API on behalf of another Webex Teams user. To provide security, the API supports the OAuth 2 standard, which allows a third-party integration to get a temporary access token for authenticating API calls instead of asking users for their password.

Here are a few easy steps to get started using an integration:

Step 1. Register an integration with Webex Teams at https://developer.webex.com/my-apps/new. Figure 10-5 shows the sample form on the portal that allows you to create a new integration.

Figure 10-5 Creating a New Integration via the Developer Portal

Step 2. Request permission by using an OAuth grant flow by invoking the flow via https://webexapis.com/v1/authorize and providing a redirect URL to come back to. After the integration is created successfully, you see a screen like the one in Figure 10-6, which also shows the full authorization URL.
Figure 10-6 Successful Integration Results in OAuth
Credentials

Step 3. On the screen shown in Figure 10-7, click Accept to obtain the authorization code for an access token.

Figure 10-7 Using the OAuth Credentials and Accepting Permissions

The redirect URL contains a code parameter in the query string, like so:

https://0.0.0.0:8080/?code=NzAwMGUyZDUtYjcxMS00YWM4LTg3ZDYtNzdhMDhhNWRjZGY5NGFmMjA3ZjEtYzRk_PF84_1eb65fdf-9643-417f-9974-ad72cae0e10f&state=set_state_here
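The code captured at the redirect URL can then be exchanged for an access token at the token endpoint. The following is a minimal sketch using the Python requests library; the client ID, client secret, code, and redirect URI are all placeholder values taken from your own integration:

```python
import requests

TOKEN_URL = "https://webexapis.com/v1/access_token"

def build_token_request(client_id, client_secret, code, redirect_uri):
    """Form fields for exchanging the one-time authorization code."""
    return {
        "grant_type": "authorization_code",
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
        "redirect_uri": redirect_uri,
    }

def exchange_code(client_id, client_secret, code, redirect_uri):
    """POST the exchange; the JSON reply carries the access and refresh tokens."""
    response = requests.post(
        TOKEN_URL,
        data=build_token_request(client_id, client_secret, code, redirect_uri),
    )
    response.raise_for_status()
    return response.json()

# Example:
# tokens = exchange_code("MY_CLIENT_ID", "MY_CLIENT_SECRET",
#                        "CAPTURED_CODE", "https://0.0.0.0:8080/")
```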

Access Scopes
Scopes define the level of access that an integration requires. Each integration alerts the end user, via a permission dialog, to the scopes being requested. Scopes determine what resources the access token has access to. Table 10-2 lists and describes the scopes.

Table 10-2 Webex Teams Scopes API Definitions

Scope - Description

spark:all - Full access to your Webex Teams account
spark:people_read - Read your company directory
spark:rooms_read - List the titles of rooms that you’re in
spark:rooms_write - Manage rooms on your behalf
spark:memberships_read - List the people in rooms that you’re in
spark:memberships_write - Invite people to rooms on your behalf
spark:messages_read - Read the content of rooms that you’re in
spark:messages_write - Post and delete messages on your behalf
spark:teams_read - List the teams you are a member of
spark:teams_write - Manage teams on your behalf
spark:team_memberships_read - List the people in the teams that you are in
spark:team_memberships_write - Add people to teams on your behalf
spark:webhooks_read - See all webhooks created on your behalf
spark:webhooks_write - Modify or delete webhooks created on your behalf
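Scopes are passed as a space-separated list in the scope query parameter of the authorization URL. As an illustrative sketch using only the Python standard library (the client ID and redirect URI are placeholders):

```python
from urllib.parse import urlencode

AUTHORIZE_URL = "https://webexapis.com/v1/authorize"

def build_authorize_url(client_id, redirect_uri, scopes, state="set_state_here"):
    """Assemble the URL a user visits to grant the requested scopes."""
    params = {
        "client_id": client_id,
        "response_type": "code",    # ask for an authorization code
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),  # scopes are space separated
        "state": state,
    }
    return AUTHORIZE_URL + "?" + urlencode(params)

# Example:
# url = build_authorize_url("MY_CLIENT_ID", "https://0.0.0.0:8080/",
#                           ["spark:rooms_read", "spark:messages_write"])
```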

The following sections examine some of the APIs you can use to create rooms, add people, and send messages in a room.

Organizations API
An organization is a set of people in Webex Teams.
Organizations may manage other organizations or may
be managed themselves. The Organizations API can be
accessed only by an administrator. Table 10-3 shows the
methods used with the Organizations API to get details
about organizations.

Table 10-3 Webex Teams: Organization API

GET https://webexapis.com/v1/organizations - List all the organizations

GET https://webexapis.com/v1/organizations/{orgId} - Get details about an organization

Note
The host name https://api.ciscospark.com has now
been changed to https://webexapis.com. The old
https://api.ciscospark.com will continue to work.

Teams API
A team is a group of people with a set of rooms that is
visible to all members of that team. The Teams API is
used to manage teams—to create, delete, and rename
teams. Table 10-4 lists the various operations that can be
performed on the Teams API.

Table 10-4 Webex Teams: Teams API

GET https://webexapis.com/v1/teams - List all teams

POST https://webexapis.com/v1/teams - Create a new team

GET https://webexapis.com/v1/teams/{teamId} - Get details about a particular team

PUT https://webexapis.com/v1/teams/{teamId} - Update details about a team

DELETE https://webexapis.com/v1/teams/{teamId} - Delete a team

For example, say that you want to use the Teams API to create a new team named DevNet Associate Certification Team. To do so, you use the POST method and the API https://webexapis.com/v1/teams.

You can use a Python request to make the REST call. Example 10-1 shows a Python script that sends a POST request to create a new team. It initializes variables such as the base URL, the payload, and the headers, and it calls the request.

Example 10-1 Python Code to Create a New Team


""" Create Webex Team """

import json
import requests

URL = "https://webexapis.com/v1/teams"
PAYLOAD = {
"name": "DevNet Associate Certification
Team"
}
HEADERS = {
"Authorization": "Bearer
MDA0Y2VlMzktNDc2Ni00NzI5LWFiNmYtZmNmYzM3OTkyNjMxNmI0ND-

VmNDktNGE1_PF84_consumer",
"Content-Type": "application/json"
}
RESPONSE = requests.request("POST", URL,
data=json.dumps(PAYLOAD), headers=HEADERS)
print(RESPONSE.text)

Rooms API
Rooms are virtual meeting places where people post
messages and collaborate to get work done. The Rooms
API is used to manage rooms—to create, delete, and
rename them. Table 10-5 lists the operations that can be
performed with the Rooms API.

Table 10-5 Webex Teams: Rooms API


GET https://webexapis.com/v1/rooms - List all the rooms

POST https://webexapis.com/v1/rooms - Create a new room

GET https://webexapis.com/v1/rooms/{roomId} - Get room details

GET https://webexapis.com/v1/rooms/{roomId}/meetingInfo - Get room meeting details

PUT https://webexapis.com/v1/rooms/{roomId} - Update room details

DELETE https://webexapis.com/v1/rooms/{roomId} - Delete a room

You can use the Rooms API to create a room. When you do, an authenticated user is automatically added as a member of the room. To create a room, you can use the POST method and the API https://webexapis.com/v1/rooms.

Example 10-2 shows Python request code that creates a room with the name DevAsc Team Room. It initializes variables such as the base URL, the payload, and the headers, and it calls the request. The header consists of the bearer token of the authenticated user or the integration along with other parameters.

Example 10-2 Python Request to Create a New Room



""" Create Webex Room """
import json
import requests
import pprint

URL = "https://webexapis.com/v1/rooms"
PAYLOAD = {
"title": "DevAsc Team Room"
}
HEADERS = {
"Authorization": "Bearer
MDA0Y2VlMzktNDc2Ni00NzI5LWFiNmYtZmNmYzM3OTkyNjMxNmI0ND-

VmNDktNGE1_PF84_consumer",
"Content-Type": "application/json"
}
RESPONSE = requests.request("POST", URL,
data=json.dumps(PAYLOAD), headers=HEADERS)
pprint.pprint(json.loads(RESPONSE.text))

Example 10-3 shows the response to creating a room. The response includes the creation time and owner, along with the ID, which can be used in subsequent calls.

Example 10-3 Response to the Successful Creation of a Room

$ python3 CreateRoom.py
{'created': '2020-02-15T23:13:35.578Z',
 'creatorId': 'Y2lzY29zcGFyazovL3VzL1BFT1BMRS8wYWZmMmFhNC1mN2IyLTQ3MWUtYTIzMi0xOTEyNDgwYmDEADB',
 'id': 'Y2lzY29zcGFyazovL3VzL1JPT00vY2FhMzJiYTAtNTA0OC0xMWVhLWJiZWItYmY1MWQyNGRmMTU0',
 'isLocked': False,
 'lastActivity': '2020-02-15T23:13:35.578Z',
 'ownerId': 'consumer',
 'title': 'DevAsc Team Room',
 'type': 'group'}
$
You can use the Rooms API to get a list of all the rooms
that have been created. To do so, you can use the GET
method and the API https://webexapis.com/v1/rooms.

Example 10-4 shows how to use the curl command to make the REST call. This script sends a GET request to list all rooms that a particular user belongs to.

Example 10-4 curl Script for Getting a List of All Rooms


$ curl -X GET \
  https://webexapis.com/v1/rooms \
  -H 'Authorization: Bearer DeadBeefMTAtN2UzZi00YjRiLWIzMGEtMThjMzliNWQwZGEyZTljNWQxZTktNTRl_PF84_1eb65fdf-9643-417f-9974-ad72cae0e10f'

Memberships API
A membership represents a person’s relationship to a
room. You can use the Memberships API to list members
of any room that you’re in or create memberships to
invite someone to a room. Memberships can also be
updated to make someone a moderator or deleted to
remove someone from the room. Table 10-6 lists the
operations that can be performed with respect to the
Memberships API, such as listing memberships and
adding a new member.

Table 10-6 Webex Teams: Memberships API

GET https://webexapis.com/v1/memberships - List memberships

POST https://webexapis.com/v1/memberships - Add a new member

GET https://webexapis.com/v1/memberships/{membershipId} - Get details about a member

PUT https://webexapis.com/v1/memberships/{membershipId} - Update details about a member

DELETE https://webexapis.com/v1/memberships/{membershipId} - Delete a member

You can use the Memberships API to add a new member to a given room (that is, create a new membership) by using the POST method and the API https://webexapis.com/v1/memberships.

You can use Python to make the REST call. Example 10-5 shows a Python script that sends a POST request to add a new member with email-id newUser@devasc.com to the room.

Example 10-5 Python Script to Add a New Member to a Room


""" Add new Member to a Webex Room """

import json
import requests
import pprint

URL = "https://webexapis.com/v1/memberships"
PAYLOAD = {
"roomId" :
"Y2lzY29zcGFyazovL3VzL1JPT00vY2FhMzJiYTAtNTA0OC0xMWVhLWJiZ-

WItYmY1MWQyNGRDEADB",
"personEmail": "newUser@devasc.com",
"personDisplayName": "Cisco DevNet",
"isModerator": "false"
}
HEADERS = {
"Authorization": "Bearer
MDA0Y2VlMzktNDc2Ni00NzI5LWFiNmYtZmNmYzM3OTkyNjMxNmI0ND-

VmNDktNGE1_PF84_consumer",
"Content-Type": "application/json"
}
RESPONSE = requests.request("POST", URL,
data=json.dumps(PAYLOAD), headers=HEADERS)
pprint.pprint(json.loads(RESPONSE.text))

Messages API
Messages are communications that occur in a room. In
Webex Teams, each message is displayed on its own line,
along with a timestamp and sender information. You can
use the Messages API to list, create, and delete messages.

Messages can contain plaintext, rich text, and file attachments. Table 10-7 shows the API for sending messages to Webex Teams.

Table 10-7 Webex Teams: Message API

GET https://webexapis.com/v1/messages - List messages

GET https://webexapis.com/v1/messages/direct - List one-to-one messages

POST https://webexapis.com/v1/messages - Post a new message

GET https://webexapis.com/v1/messages/{messageId} - Get details about a message

DELETE https://webexapis.com/v1/messages/{messageId} - Delete a message

You can use the Messages API to post a new message to a given room. To do so, you use the POST method and the API https://webexapis.com/v1/messages.

You can use a Python request to make the REST call. Example 10-6 shows a Python script that sends a POST request to add a new message to a particular room.

Example 10-6 Python Script to Add a New Message to a Room

""" Send Webex Message """

import json
import requests
import pprint

URL = "https://webexapis.com/v1/messages"
PAYLOAD = {
"roomId" :
"Y2lzY29zcGFyazovL3VzL1JPT00vY2FhMzJiYTAtNTA0OC0xMWVhLWJiZ-

WItYmY1MWQyNGRmMTU0",
"text" : "This is a test message"
}
HEADERS = {
"Authorization": "Bearer
NDkzODZkZDUtZDExNC00ODM5LTk0YmYtZmY4NDI0ZTE5ZDA1MGI-

5YTY3OWUtZGYy_PF84_consumer",
"Content-Type": "application/json",
}
RESPONSE = requests.request("POST", URL,
data=json.dumps(PAYLOAD), headers=HEADERS)
pprint.pprint(json.loads(RESPONSE.text))

Bots
A bot (short for chatbot) is a piece of code or an
application that simulates a human conversation. Users
communicate with a bot via the chat interface or by
voice, just as they would talk to a real person. Bots help
users automate tasks, bring external content into the
discussion, and gain efficiencies. Webex Teams has a rich
set of APIs that make it very easy and simple for any
developer to add a bot to any Teams room. In Webex,
bots are similar to regular Webex Teams users. They can
participate in one-to-one and group spaces, and users
can message them directly or add them to a group space.
A special badge is added to a bot’s avatar in the Webex
Teams clients so users know they’re interacting with a
bot instead of a human.

A bot can only access messages sent to it directly. In group spaces, bots must be @mentioned to access a message. In one-to-one spaces, a bot has access to all messages from the user. Bots are typically of the three types described in Table 10-8.

Table 10-8 Bot Types

Notification bot: Events from external services are brought in and posted in Webex Teams. Examples of events include build complete, retail inventory status, and temperature today.

Controller bot: External systems that have APIs allow third-party apps to be integrated to control them. For example, you could control turning lights on or off by invoking a lights bot.

Assistant bot: Virtual assistants usually understand natural language, so a user can ask questions of bots as they would ask humans (for example, “@Merakibot, how many wifi devices are currently on floor 2?”).
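A notification bot can be very small: it simply posts an external event into a room, authenticated with the bot's own token. The following Python sketch uses the Messages API for this; the bot token and room ID are placeholders:

```python
import json
import requests

MESSAGES_URL = "https://webexapis.com/v1/messages"

def build_notification(room_id, event_text):
    """Message payload a notification bot posts when an event fires."""
    return {"roomId": room_id, "markdown": "**Event:** " + event_text}

def notify(bot_token, room_id, event_text):
    """Post the event into the room, authenticated as the bot."""
    response = requests.post(
        MESSAGES_URL,
        data=json.dumps(build_notification(room_id, event_text)),
        headers={
            "Authorization": "Bearer " + bot_token,
            "Content-Type": "application/json",
        },
    )
    response.raise_for_status()
    return response.json()

# Example:
# notify("BOT_ACCESS_TOKEN", "TARGET_ROOM_ID", "build complete")
```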

Bot Frameworks and Tools


There are several bot frameworks that can greatly simplify the bot development process by abstracting away the low-level communications with the Webex REST API, such as creating and sending API requests and configuring webhooks. You can focus on building the interaction and business logic of a bot. These are two popular bot frameworks:

Flint: Flint is an open-source bot framework with support for regex pattern matching for messages and more.

Botkit: Botkit is a popular open-source bot framework with advanced conversational support as well as integrations with a comprehensive array of natural language processing and storage providers.

One of the greatest starting points for learning about and creating your own bots for Webex Teams is the DevNet Code Exchange, at https://developer.cisco.com/codeexchange/github/repo/howdyai/botkit, which is shown in Figure 10-8.
Figure 10-8 DevNet Code Exchange: Building Your
First Bot

Guest Issuer
Guest issuer applications give guest users temporary
access to users within the organization. Guest issuers can
be created at https://developer.webex.com/my-apps/new/guest-issuer. To create a new guest issuer, the
only thing that is required is the name. A new guest
issuer ID and shared secret will be generated and can be
used subsequently. The main reason to use a guest issuer
is to interact with users who do not have a Webex Teams
account. These users might be visitors to a website who
you’d like to message with over Webex Teams. Or they
might be customers in a store with which you’d like to
have a video call. These guest users can interact with
regular Webex Teams users via tokens generated by a
guest issuer application.

Guest users of Webex Teams authenticate by using guest tokens. Guest tokens use the JSON Web Token (JWT) standard to create and share authentication credentials between SDKs and widgets and the Webex REST API.
These tokens are exchanged for an access authentication
token that can be used for a limited time and limited
purpose to interact with regular Webex Teams users.
Each guest token should be associated with an individual
user of an application. The guest’s activity within Webex
Teams, such as message activity or call history, will
persist, just as it would for a regular Webex Teams user.
While guest users can interact with regular Webex Teams
users, they are not allowed to interact with other guests.
Example 10-7 shows a Python code snippet that creates a
JWT token from the guest issuer ID and secret and
passes it in the authentication headers. It is then possible
to use any of the APIs to interact with other users in the
system.

Example 10-7 Python Code to Generate a JWT Token for a Guest Issuer


""" Generate JWT """

import base64
import time
import math
import jwt

EXPIRATION = math.floor(time.time()) + 3600 # 1


hour from now
PAYLOAD = {
"sub": "devASC",
"name": "devASC-guest",
"iss": "GUEST_ISSUER_ID",
"exp": EXPIRATION
}

SECRET = base64.b64decode('GUEST_ISSUE_SECRET')

TOKEN = jwt.encode(PAYLOAD, SECRET)

print(TOKEN.decode('utf-8'))
HEADERS = {
'Authorization': 'Bearer ' +
TOKEN.decode('utf-8')
}
Webex Teams SDKs
As of this writing, there is a variety of SDKs available; some of them are official Webex Teams SDKs, and others are from the community. The following is a selection of the Webex Teams SDKs that are available:

Go (go-cisco-webex-teams): A Go client library (by jbogarin)

Java (spark-java-sdk): A Java library for consuming the RESTful APIs (by Cisco Webex)

Node.js (ciscospark): A collection of Node.js modules targeting the REST API (by Cisco Webex)

PHP (SparkBundle): A Symfony bundle (by CiscoVE)

Python (webexteamssdk): An SDK that works with the REST APIs in native Python (by cmlccie)

The following are some advanced APIs:

SDK for Android: Integrates messaging and calling into Android apps (by Cisco Webex)

SDK for Browsers: Integrates calling into client-side JavaScript applications (by Cisco Webex)

SDK for iOS: Integrates messaging and calling into iOS apps (by Cisco Webex)

SDK for Windows: Integrates messaging and calling into Windows apps (by Cisco Webex)

Widgets: Provides components that mimic the web user experience (by Cisco Webex)

CISCO FINESSE
The Cisco Finesse desktop is a call agent and supervisor
desktop solution designed to meet the growing needs of
agents, supervisors, and the administrators and
developers who support them. The Cisco Finesse desktop
runs in a browser, which means you install Cisco Unified
Contact Center Express (Unified CCX), and agents start
by simply typing in the URL for the Unified CCX server.
The desktop is more than an agent state and call-control
application. It is an OpenSocial gadget container, built to
include third-party applications in a single agent desktop
experience. Rather than switching between applications,
agents have easy access to all applications and tools from
a single window, which increases their efficiency. Figure
10-9 shows the architecture and high-level flow of
Finesse, which involves the following steps:

Figure 10-9 Finesse High-Level Flow

Step 1. The call arrives from either the PSTN or a VoIP connection to the gateway.
Step 2. The gateway then hands over the call to
Unified CM, which invokes the application that
was preregistered. In this case, it is handled by
Unified CCX.
Step 3. Unified CM notifies the Unified CCX about the
incoming call via JTAPI.

Step 4. After consulting the resource manager (routing the call to the agent based on skills, priority, and rebalancing), Unified CCX notifies Finesse via computer telephony integration (CTI) connection.
Step 5. Finesse performs internal processing and then
publishes a notification to Notification Service.

Step 6. The Finesse desktop receives this notification from Notification Service via the Bidirectional-streams Over Synchronous HTTP (BOSH) connection (Extensible Messaging and Presence Protocol [XMPP]).
Step 7. The agent makes a request to perform an
operation (such as answer a call); the app
makes this HTTP (REST) request to Finesse
Web Services.

Step 8. Finesse processes the request and then, if necessary, requests action with Unified CCE via CTI connection. Where applicable, Unified CCE performs/forwards the requested action.
Step 9. Unified CCE notifies Finesse via CTI
connection about whether the request was
successful or caused an error.

Step 10. Finesse processes the notification (successful/error) and publishes the notification to Notification Service. The agent web browser receives this notification from Notification Service via the BOSH connection.

The Finesse agent goes through various states that specifically pertain to the agent workflow (see Table 10-9).

Table 10-9 User States in Finesse

State - Description

LOGIN - The agent is signing in to the system. This is an intermediate state.

LOGOUT - The agent is signed off the system.

READY - The agent is ready to take calls.

NOT_READY - The agent is signed in but not ready to take calls. The agent could be on a break, or the shift might be over, or the agent might be in between calls.

RESERVED - This is a transient state, as the agent gets chosen but has not answered the call.

TALKING - The agent is on a call.

HOLD - The agent puts the call on hold.

Cisco Finesse API


The Cisco Finesse API is a modern, open-standards-
based web API, exposed via REST. Each function
available in the Cisco Finesse user interface has a
corresponding REST API that allows all types of
integrations for developers to use. The extensibility and
ease of use of the API are unprecedented on Unified
CCX. Agents and supervisors use the Cisco Finesse
desktop APIs to communicate between the Finesse
desktop and Finesse server, and they use Unified Contact
Center Enterprise (Unified CCE) or Unified Contact
Center Express (Unified CCX) to send and receive
information.

The Finesse APIs can be broadly classified into the following categories:

User

Dialog

Queue

Team

ClientLog

Task Routing APIs

Single Sign-On

TeamMessage

Cisco Finesse supports both HTTP and HTTP Secure (HTTPS) requests from clients. Cisco Finesse desktop operations can be performed using one of the many available REST-like HTTP/HTTPS requests. Operations on specific objects are performed using the ID of the object in the REST URL. For example, the URL to view a single object (HTTP) would be as follows:

http://<FQDN>:<port>/finesse/api/<object>/<objectID>

where FQDN is the fully qualified domain name of the Finesse server.
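This URL pattern is easy to assemble programmatically. The helper below is an illustrative sketch (not part of the Finesse API itself) that builds the object URL from its parts:

```python
def finesse_url(fqdn, port, obj, object_id=None, scheme="http"):
    """Build <scheme>://<FQDN>:<port>/finesse/api/<object>[/<objectID>]."""
    url = "{}://{}:{}/finesse/api/{}".format(scheme, fqdn, port, obj)
    if object_id is not None:
        url += "/" + str(object_id)
    return url

# Example:
# finesse_url("hq-uccx.abc.inc", 8082, "User", "Agent001")
# -> "http://hq-uccx.abc.inc:8082/finesse/api/User/Agent001"
```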

Finesse configuration APIs require the application user ID and password, which are established during installation, for authentication purposes.

Finesse APIs use the following HTTP methods to make requests:

GET: Retrieves a single object or list of objects (for example, a single user or list of users).

PUT: Replaces a value in an object (for example, to change the state of a user from NOT_READY to READY).

POST: Creates a new entry in a collection (for example, to create a new reason code or wrap-up reason).

DELETE: Removes an entry from a collection (for example, to delete a reason code or wrap-up reason).

Finesse uses the standard HTTP status codes (for example, 200, 400, and 500) in the response to indicate the overall success or failure of a request.

API Authentication
All Finesse APIs use HTTP BASIC authentication, which
requires the credentials to be sent in the authorization
header. The credentials contain the username and
password, separated by a single colon (:), within a
Base64-encoded string. For example, the authorization
header would contain the following string:


"Basic ZGV2YXNjOnN0cm9uZ3Bhc3N3b3Jk"

where ZGV2YXNjOnN0cm9uZ3Bhc3N3b3Jk is the Base64-encoded string devasc:strongpassword (where devasc is the username, and strongpassword is the password). Example 10-8 shows three lines of code that perform the Base64 encoding used in the authorization header.

Example 10-8 Python Code to Generate Base64 Encoding


""" Generate Base64 Encoding """


import base64
ENCODED =
base64.b64encode('devasc:strongpassword'.encode('UTF-
8'))
print(ENCODED.decode('utf-8'))

With Single Sign-On mode, the authorization header would contain the following string:

"Bearer <authtoken>"

Finesse User APIs

Table 10-10 lists the various methods and User APIs to perform operations with the user, such as listing, logging in, and changing properties.
Table 10-10 Finesse User APIs

GET http://<FQDN>/finesse/api/User/<id> - Get a copy of the user object

GET http://<FQDN>/finesse/api/User - Get a list of all users

PUT http://<FQDN>/finesse/api/User/<id> - Sign in to the CTI server, with XML body:

<User>
  <state>LOGIN</state>
  <extension>5250001</extension>
</User>

PUT http://<FQDN>/finesse/api/User/<id> - Set the user's state (READY, NOT_READY, or LOGOUT), with XML body:

<User>
  <state>READY</state>
</User>

GET http://<FQDN>/finesse/api/User/<id>/PhoneBooks - Get a list of phone books and the first 1500 associated contacts for that user

A full list of all User state change APIs, with details, can be found at https://developer.cisco.com/docs/finesse/#!userchange-agent-state/userchange-agent-state.
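The XML bodies shown in Table 10-10 can also be built programmatically. The following sketch is illustrative; the helper name is an assumption, and the validation set is limited to the states listed in the table:

```python
from typing import Optional

# States taken from Table 10-10; the real API may accept others.
VALID_STATES = {"LOGIN", "READY", "NOT_READY", "LOGOUT"}

def user_state_payload(state: str, extension: Optional[str] = None) -> str:
    """Build the <User> XML body for a Finesse User state-change PUT."""
    if state not in VALID_STATES:
        raise ValueError(f"unsupported state: {state}")
    body = f"<state>{state}</state>"
    if extension is not None:
        # Per Table 10-10, the LOGIN body also carries the agent's extension.
        body += f"<extension>{extension}</extension>"
    return f"<User>{body}</User>"

print(user_state_payload("LOGIN", "5250001"))
# <User><state>LOGIN</state><extension>5250001</extension></User>
print(user_state_payload("READY"))
# <User><state>READY</state></User>
```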

For example, the User—Sign in to Finesse API signs a user in. Say that you use the following information with this API:

Finesse server FQDN: http://hq-uccx01.abc.inc

Agent name: Anthony Phyllis

Agent ID: user001

Agent password: cisco1234

Example 10-9 shows a simple call using Python requests. The API call for user login uses the PUT request along with an XML body that sets the state to LOGIN.

Example 10-9 Python Request: Finesse User Login


""" Finesse - User Login"""


import requests

URL = "http://hq-
uccx.abc.inc:8082/finesse/api/User/Agent001"
PAYLOAD = (
"<User>" +
" <state>LOGIN</state>" +
" <extension>6001</extension>" +
"</User>"
)

HEADERS = {
'authorization': "Basic
QWdlbnQwMDE6Y2lzY29wc2R0",
'content-type': "application/xml",
}
RESPONSE = requests.request("PUT", URL,
data=PAYLOAD, headers=HEADERS)
print(RESPONSE.text)
print(RESPONSE.status_code)
As another example, the User State Change API lets users change their state. The state change can be any one of those shown in Table 10-9. Say that you use the following information with this API:

Finesse server FQDN: http://hq-uccx01.abc.inc

Agent name: Anthony Phyllis

Agent ID: user001

Agent password: cisco1234

The API changes the state to READY.

Example 10-10 shows a simple call using Python requests. The API call for the user state change uses the PUT request along with an XML body that sets the state to READY.

Example 10-10 Python Request for a Finesse User State Change


""" Finesse - User State Change"""


import requests

URL = "http://hq-
uccx.abc.inc:8082/finesse/api/User/Agent001"
PAYLOAD = (
"<User>" +
" <state>READY</state>" +
"</User>"
)

HEADERS = {
'authorization': "Basic
QWdlbnQwMDE6Y2lzY29wc2R0",
'content-type': "application/xml",
}
RESPONSE = requests.request("PUT", URL,
data=PAYLOAD, headers=HEADERS)
print(RESPONSE.text)
print(RESPONSE.status_code)

Finesse Team APIs


The Team object represents a team and contains the
URI, the team name, and the users associated with the
team. Table 10-11 shows the Finesse Team APIs to access
the Team object and list all team messages.

Table 10-11 Finesse Team APIs

Method: GET
API: http://<FQDN>/finesse/api/Team/<id>?includeLoggedOutAgents=true
Description: Allow a user to get a copy of the Team object

Method: GET
API: http://<FQDN>/finesse/api/Team/<teamid>/TeamMessages
Description: Get a list of all active team messages for a particular team

A full list of the Team APIs, with details, can be found at https://developer.cisco.com/docs/finesse/#team-apis.

The Python script in Example 10-11 shows how to make an API call to get details about Team ID 2.

Example 10-11 Python Request to Get Finesse Team Details

import requests

url = "https://hq-uccx.abc.inc:8445/finesse/api/Team/2"
headers = {
    'authorization': "Basic QWdlbnQwMDE6Y2lzY29wc2R0",
    'cache-control': "no-cache",
}
response = requests.request("GET", url, headers=headers)
print(response.text)
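The Team response comes back as XML that contains the URI, the team name, and the users associated with the team. The following sketch parses a hypothetical response with the standard library; the element names in the sample are assumptions, so check the actual Finesse schema before relying on them:

```python
import xml.etree.ElementTree as ET

# Illustrative response shape only; the real Finesse schema may differ.
SAMPLE_RESPONSE = """
<Team>
  <uri>/finesse/api/Team/2</uri>
  <name>Sales</name>
  <users>
    <User><loginId>Agent001</loginId></User>
    <User><loginId>Agent002</loginId></User>
  </users>
</Team>
"""

root = ET.fromstring(SAMPLE_RESPONSE)
team_name = root.findtext("name")                               # team name element
agents = [u.findtext("loginId") for u in root.findall("./users/User")]
print(team_name)   # Sales
print(agents)      # ['Agent001', 'Agent002']
```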
Dialog APIs

The Dialog object represents a dialog (voice or non-voice) between two or more users. There are many flavors of Dialog APIs. Table 10-12 shows a sample of them, including the API that lets a user place a call to another user.

Table 10-12 Sample of Finesse Dialog APIs

Method: POST
API: http://<FQDN>/finesse/api/User/<id>/Dialogs
Description: Allow a user to make a call, with XML body:
<Dialog>
  <requestedAction>MAKE_CALL</requestedAction>
  <fromAddress>6001</fromAddress>
  <toAddress>6002</toAddress>
</Dialog>

Method: PUT
API: http://<FQDN>/finesse/api/Dialog/<dialogId>
Description: Allow a user to start recording an active call, with XML body:
<Dialog>
  <requestedAction>START_RECORDING</requestedAction>
  <targetMediaAddress>6001</targetMediaAddress>
</Dialog>
A full list of the Dialog APIs, with details, can be found at
https://developer.cisco.com/docs/finesse/#dialog-apis.
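The Dialog XML bodies from Table 10-12 can likewise be assembled with small helpers. This is a sketch covering only the two requestedAction values shown in the table; the function names are illustrative, not part of any Finesse SDK:

```python
def make_call_payload(from_addr: str, to_addr: str) -> str:
    """<Dialog> body for a MAKE_CALL POST to /finesse/api/User/<id>/Dialogs."""
    return (
        "<Dialog>"
        "<requestedAction>MAKE_CALL</requestedAction>"
        f"<fromAddress>{from_addr}</fromAddress>"
        f"<toAddress>{to_addr}</toAddress>"
        "</Dialog>"
    )

def start_recording_payload(media_addr: str) -> str:
    """<Dialog> body for a START_RECORDING PUT to /finesse/api/Dialog/<dialogId>."""
    return (
        "<Dialog>"
        "<requestedAction>START_RECORDING</requestedAction>"
        f"<targetMediaAddress>{media_addr}</targetMediaAddress>"
        "</Dialog>"
    )

print(make_call_payload("6001", "6002"))
```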

The Python script in Example 10-12 shows how to use an API call to place a call between extension 6001 and extension 6002.

Example 10-12 Python Request to Initiate a Dialog Between Two Numbers

""" Finesse - Initiate a dialog between two


numbers """
import requests

URL = "http://hq-
uccx.abc.inc:8082/finesse/api/User/Agent001/Dialogs"

PAYLOAD = (
"<Dialog>" +
"
<requestedAction>MAKE_CALL</requestedAction>" +
" <fromAddress>6001</fromAddress>" +
" <toAddress>6002</toAddress>" +
"</Dialog>"
)

HEADERS = {
'authorization': "Basic
QWdlbnQwMDE6Y2lzY29wc2R0",
'content-type': "application/xml",
'cache-control': "no-cache",
}
RESPONSE = requests.request("POST", URL,
data=PAYLOAD, headers=HEADERS)
print(RESPONSE.text)
print(RESPONSE.status_code)

Finesse Gadgets
As indicated earlier in this chapter, the Finesse desktop
application is an OpenSocial gadget container. This
means that an agent or anyone else can customize what
is on the desktop. Gadgets are built using HTML, CSS,
