Cisco ACI and Software Defined Networking – Part 1

Software Defined Networking (SDN) has taken the networking world by storm over the last few years. The goal of SDN is to bring the benefits of virtualization to the network, much as we have seen in the server world. Decoupling the software from the networking stack gives organizations the agility, intelligence, and centralized management needed to address rapidly changing environments.

Cisco was a little late entering the SDN game. Back in November 2013 it announced Cisco Application Centric Infrastructure (ACI) alongside the $863 million purchase of Insieme Networks. The goal of ACI is to deploy network services to meet the needs of a specific application, an approach that changed the paradigm for how we build and deploy networks. As of this writing, Cisco ACI is only supported on the Nexus 9000 platform and is “hardware ready” on the Nexus 5600s. ACI is managed by the Application Policy Infrastructure Controller (APIC).

So how does SDN change the way we look at networks? To really understand this shift, we need to look back at how we have built networks.

Traditional Networks

The traditional network was built using a three-tier design that is still widely used today. Anyone who has been involved with Cisco networking is most likely familiar with it – core, aggregation, and access layers. Here is a breakdown of each layer.

Core Layer – The Core is the backbone of the network and is where speed and reliability are important. No routing or network services operate at this layer.

Aggregation Layer – The Aggregation layer is concerned with routing between subnets, QoS, security, and other network services.

Access Layer – The access layer contains endpoint devices like desktops, servers, printers, etc. Its priority is delivering packets to the appropriate endpoint device.

This three-tier design was solid for campus networks but started to run into limitations in the data center. The primary cause was server virtualization. When VMware exploded onto the scene 8 to 10 years ago, we saw a large increase in east-west traffic, and in large-scale VMware deployments the three-tier design started to run into scalability issues. 10 Gigabit Ethernet helped address some of those challenges but still didn’t solve everything.

There were also challenges around redundancy in the three-tier design. Redundancy is imperative inside the data center, but the protocols we used in the networks of yesterday aren’t sufficient for today’s data center and the cloud. Spanning Tree Protocol (STP) was designed to prevent loops in Ethernet networks, and variations of it have been around for decades. For STP to prevent loops, it has to block one of the redundant ports in the network. From a design standpoint this isn’t very efficient and can waste a lot of potential bandwidth. The other challenge was slow re-convergence after a failure, which typically meant some kind of outage depending on the size of the network.

Network virtualization technologies like Virtual Port Channel (VPC) and Virtual Switching System (VSS) helped address some of these issues by creating logical switches that appear to STP as a single switch with a single path. In that scenario spanning tree does not block the redundant link, giving you more efficient use of your bandwidth. This is more of a smoke-and-mirrors method, but it works well to address the challenges around Spanning Tree.

Traditional network management was primarily done via the CLI, or sometimes via a management GUI. Both methods were limited in large-scale environments, and automation was a challenge at best. Cisco IOS operated on a flat configuration file and didn’t have any APIs that allowed for network programmability, so most network administrators would copy the running configuration from one switch and paste it onto another with a few tweaks made in a text editor. This method only scaled so far and was prone to human error.
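To put that workflow in concrete terms, here is a minimal sketch (the hostnames, VLAN IDs, and addresses are hypothetical) of the kind of template-and-tweak script an administrator might use to stamp out per-switch configurations. Note that nothing here pushes the configuration to a device or verifies that it stays consistent, because there was no API to do so:

```python
# Minimal sketch of the "copy, paste, and tweak" workflow described above.
# Hostnames, VLAN IDs, and management addresses are hypothetical examples.
BASE_TEMPLATE = """hostname {hostname}
vlan {vlan_id}
 name {vlan_name}
interface Vlan{vlan_id}
 ip address {mgmt_ip} 255.255.255.0
"""

switches = [
    {"hostname": "access-sw01", "vlan_id": 10, "vlan_name": "USERS", "mgmt_ip": "10.0.10.2"},
    {"hostname": "access-sw02", "vlan_id": 10, "vlan_name": "USERS", "mgmt_ip": "10.0.10.3"},
]

for switch in switches:
    # Each rendered config still has to be pasted into a console session by
    # hand -- there is no API to push it, and nothing checks the running
    # configs for drift afterwards.
    print(f"! --- config for {switch['hostname']} ---")
    print(BASE_TEMPLATE.format(**switch))
```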

The biggest limitation was the device-by-device approach to administration, which led to inconsistent or orphaned configurations. This typically caused issues that were difficult to troubleshoot and very time-consuming to remediate. The open nature of these networks also tended to leave them insecure, because security wasn’t tightly integrated into the design and didn’t start from a restrictive posture. Lack of full traffic visibility became a big challenge as well, especially in virtualized environments where the network team had no view of the traffic.

Spine-Leaf Networks and ACI

An ACI network uses a leaf-spine architecture, which collapses and simplifies the three-tier design into a Clos architecture. In this topology every leaf switch connects to every spine and vice versa, while leaves don’t connect to each other and spines don’t connect to each other either. This design reduces latency and gives you an optimal fabric where all paths on the network are forwarding. Here is the breakdown, with a short connectivity sketch after it.

Leaf – The Leaf is the access layer where endpoint devices connect.

Spine – The Spine is the backbone layer that interconnects the leaf switches.
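As a quick illustration of that breakdown, here is a minimal sketch (with hypothetical switch names) of the cabling in a small fabric: every leaf connects to every spine, and nothing else connects to anything:

```python
# Minimal sketch of leaf-spine cabling with hypothetical switch names:
# every leaf connects to every spine; leaves never connect to leaves,
# and spines never connect to spines.
leaves = ["leaf-101", "leaf-102", "leaf-103", "leaf-104"]
spines = ["spine-201", "spine-202"]

links = [(leaf, spine) for leaf in leaves for spine in spines]
for leaf, spine in links:
    print(f"{leaf} <--> {spine}")

# Any endpoint is at most two hops (leaf -> spine -> leaf) from any other,
# which is where the consistent low latency comes from.
print(f"total fabric links: {len(links)}")  # 4 leaves x 2 spines = 8
```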

Pretty simple, right? The spine-leaf design doesn’t so much remove the limitations of Spanning Tree as replace it: FabricPath (or TRILL, the open standard) handles loop prevention instead. FabricPath approaches loop prevention like a link-state routing protocol, the kind designed for very large WAN environments. Using FabricPath allows a data center network to scale beyond what Spanning Tree is capable of, while allowing all paths on the network to be active and in a forwarding state. It gives the network the level of intelligence you see in large WAN networks, where the ability to scale is key.
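To illustrate why a link-state approach fits this fabric, here is a rough, self-contained sketch (hypothetical switch names again) that enumerates the equal-cost paths between two leaves. Because nothing is blocked, every spine is a usable next hop and all of those paths forward at once:

```python
# Every leaf-to-leaf path in a leaf-spine fabric is leaf -> spine -> leaf,
# so each spine contributes one equal-cost path. A link-state control plane
# (FabricPath/TRILL here, much like OSPF or IS-IS in a WAN) can install all
# of these paths, instead of blocking the redundant ones the way STP would.
spines = ["spine-201", "spine-202", "spine-203", "spine-204"]

def equal_cost_paths(src_leaf, dst_leaf, spines):
    return [(src_leaf, spine, dst_leaf) for spine in spines]

paths = equal_cost_paths("leaf-101", "leaf-103", spines)
for path in paths:
    print(" -> ".join(path))
print(f"{len(paths)} equal-cost paths, all forwarding")
```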

Overlay Networking

A leaf-spine topology uses Virtual Extensible LAN (VXLAN) as an overlay network to scale beyond the limits of IP subnets and VLANs. VXLAN is a tunneling protocol that encapsulates a virtual network: by adding a 24-bit segment ID to each frame, it supports up to 16 million logical networks, a considerable increase over the 4,096 logical networks that VLANs support. But adding more logical networks is not the only thing VXLAN brings to the table. VXLAN allows Virtual Machines to migrate across long distances, which is important for the software-defined data center. It also allows for centralized management of network traffic, which makes the network operate as more of a collective rather than as individual independent devices. NVGRE is another network overlay technology that’s gaining momentum; I will discuss it in a future blog post.
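To make those numbers concrete: a VLAN ID is 12 bits (2^12 = 4,096 networks) while a VXLAN Network Identifier (VNI) is 24 bits (2^24 = 16,777,216 networks). The sketch below packs the 8-byte VXLAN header defined in RFC 7348 around a hypothetical VNI value, just to show where that 24-bit segment ID lives:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348 for a given VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # "I" flag set: the VNI field is valid
    # Layout: 1 byte of flags, 3 reserved bytes, 3-byte VNI, 1 reserved byte.
    return struct.pack("!B3s3sB", flags, b"\x00" * 3, vni.to_bytes(3, "big"), 0)

print(f"VLAN IDs available:   {2**12:>12,}")   # 4,096
print(f"VXLAN VNIs available: {2**24:>12,}")   # 16,777,216

header = vxlan_header(5000)  # hypothetical segment ID
print(header.hex())          # 08 000000 001388 00
```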

Now that we know some of the fundamental components that make up a Software Defined Network, we can dive deeper into ACI and break down how it works. Stay tuned for my next blog post.
