Cisco ACI and Software Defined Networking – Part 2

This month’s blog post is a continuation of last October’s post, Cisco ACI and Software Defined Networking – Part 1. In Part 1, I gave some history on traditional networking and discussed how networking is evolving to accommodate the next-generation data center and cloud computing with Software Defined Networking, or SDN. This month we dive into Cisco’s approach to SDN – Cisco Application Centric Infrastructure, or ACI.

At first glance

Cisco ACI is made up of two components. First, the Application Policy Infrastructure Controller (APIC) is the management software component of ACI; its responsibility is to manage and apply network policy across the fabric. The APIC ships as an appliance on a UCS C220 M3 (first generation) or C220 M4 (second generation), a 1U rack-mount server, and APICs are typically deployed in a clustered configuration for added resiliency. Second, the Nexus 9000 is the hardware component. These switches can run in traditional NX-OS mode or in ACI mode. In ACI mode, all management and configuration happens at the APIC level; in NX-OS mode, management happens at the individual switch level.
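Because all configuration in ACI mode flows through the APIC, day-to-day automation typically happens against the controller’s REST API rather than switch by switch. Here is a minimal sketch in Python using the requests library; the controller address and credentials are placeholders, and this shows only the general login-then-query pattern, not a production script.

```python
import requests

APIC = "https://apic.example.com"  # placeholder controller address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
session.verify = False  # APICs commonly run self-signed certificates

# Authenticate; the APIC hands back a session token as a cookie that
# the session object carries on subsequent calls.
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH)

# Class-level query: list every tenant object in the fabric.
resp = session.get(f"{APIC}/api/class/fvTenant.json")
for obj in resp.json()["imdata"]:
    print(obj["fvTenant"]["attributes"]["name"])
```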

Here is a list of ACI-compatible hardware.

Object Model, Tenants and Contexts

The object model is the foundation of ACI and where it truly derives its power. For those familiar with programming, the object model that ACI uses operates like an object in object-oriented programming: the object has a set of attributes that can be modified and shared across the entire model. This is a huge departure from how we managed switches in the past, individually, using a flat configuration file. In programming terms, the old way of configuring switches was a lot like a procedural program: each line in the startup configuration file was read into memory and became the running configuration.

The ACI object model is made up of tenants, which can be used by different customers, business units, or departments within an organization. When ACI is first turned on, it creates a default tenant. From there, additional tenants can be created based on the needs of the organization.

Tenants are broken up into contexts, which are separate IP spaces or subnets with their own Virtual Routing and Forwarding (VRF) instance(s). Contexts are similar to VLANs in concept, but they are far more configurable and less limited than a traditional VLAN.
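Because tenants and contexts are just objects in the model, creating them amounts to posting a nested document to the APIC. A hedged sketch, reusing the authenticated session from the earlier example; the tenant and VRF names are made up, and fvTenant and fvCtx are the object-model class names for tenants and contexts.

```python
# A tenant containing one context (VRF), expressed as the nested
# object tree the APIC accepts: fvTenant is the tenant class and
# fvCtx is the context class.
tenant = {
    "fvTenant": {
        "attributes": {"name": "Finance"},
        "children": [
            {"fvCtx": {"attributes": {"name": "Finance-VRF"}}},
        ],
    }
}

# Posting under the policy universe (uni) creates the whole subtree
# in one call; no per-switch configuration is involved.
session.post(f"{APIC}/api/mo/uni.json", json=tenant)
```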

Endpoint Groups and Policies

Endpoint Groups (EPGs) are groupings of endpoint devices that share the same set of network services and that ultimately represent an application or business unit. An EPG member can be a physical NIC, virtual NIC, port group, IP address, VLAN, VXLAN, or DNS name. EPGs let the network engineer logically segregate the network based on the application. In the past, this would typically be done with VLANs, which logically segmented the network for performance or security reasons at the cost of complexity that isn’t always necessary. By default, a device can’t communicate on the network until policy permits it; this rule operates more like a Fibre Channel SAN and less like an Ethernet LAN.
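In the object model, an EPG is itself just another object, nested under an application profile and tied to a bridge domain. A small illustrative payload with hypothetical names; fvAEPg is the EPG class, and the Finance-BD bridge domain it references appears in the bridge-domain sketch further down.

```python
# One EPG, as it would appear nested under an application profile.
# fvRsBd ties the EPG to the bridge domain whose L2 forwarding its
# endpoints will use.
web_epg = {
    "fvAEPg": {
        "attributes": {"name": "Web"},
        "children": [
            {"fvRsBd": {"attributes": {"tnFvBDName": "Finance-BD"}}},
        ],
    }
}
```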

A policy consists of a source EPG (sEPG) and a destination EPG (dEPG). Policies contain ingress and egress rules that can be used for access control, Quality of Service (QoS), or other network-related services. Once you are in an endpoint group, you can communicate within it as long as you have IP reachability. A policy lets you create an application group (web, app, and database servers) and control the network communication between each tier; in effect, it defines a security zone for a particular application. A policy enforcement matrix arranges sEPGs and dEPGs in a grid, and where they intersect is where policies are enforced.
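The matrix is easiest to picture as a lookup table keyed by source and destination EPG. The following is purely a conceptual illustration in Python, not APIC API code: anything absent from the table is denied, which is the whitelist behavior described above.

```python
# Conceptual model of a policy enforcement matrix: rows are source
# EPGs, columns are destination EPGs, cells hold the allowed services.
policy_matrix = {
    ("Web", "App"): ["tcp/8080"],
    ("App", "Db"):  ["tcp/1433"],
}

def allowed(sepg: str, depg: str, service: str) -> bool:
    # Anything not in the matrix is implicitly denied.
    return service in policy_matrix.get((sepg, depg), [])

print(allowed("Web", "App", "tcp/8080"))  # True: an sEPG/dEPG policy exists
print(allowed("Web", "Db",  "tcp/1433"))  # False: no policy, so denied
```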

Contracts and Filters

Contracts define how EPGs communicate with each other, much like a contract you sign that defines an agreed-upon outcome. In the ACI world, a contract is a set of rules that defines how the network will operate within a policy. Contracts can either be provided or consumed by an EPG. Filters are used to permit and deny traffic at Layers 2, 3, and 4, and they are applied to both inbound and outbound interfaces. A filter is essentially an Access Control List (ACL) for the fabric.
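Filters and contracts are objects too: vzFilter holds the match entries (vzEntry), and a contract (vzBrCP) references filters through its subjects (vzSubj). A hedged sketch of a filter matching HTTPS and a contract that uses it; all names are illustrative.

```python
# A filter with one entry matching TCP/443: etherT and prot are the
# Layer 2/3 matches, dFromPort/dToPort the Layer 4 destination range.
https_filter = {
    "vzFilter": {
        "attributes": {"name": "allow-https"},
        "children": [
            {"vzEntry": {"attributes": {
                "name": "https",
                "etherT": "ip",
                "prot": "tcp",
                "dFromPort": "443",
                "dToPort": "443",
            }}},
        ],
    }
}

# A contract whose subject points at that filter; EPGs then provide
# or consume the contract to open this traffic between them.
web_contract = {
    "vzBrCP": {
        "attributes": {"name": "web-traffic"},
        "children": [
            {"vzSubj": {
                "attributes": {"name": "https"},
                "children": [
                    {"vzRsSubjFiltAtt": {
                        "attributes": {"tnVzFilterName": "allow-https"}}},
                ],
            }},
        ],
    }
}
```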

Application Network Profiles

An Application Network Profile groups everything together (EPGs, contracts, and filters) and dictates how traffic for a specific application behaves on the network fabric. For those familiar with the UCS platform, an application network profile is to the network what a service profile is to a server: once defined, it gives the hardware an identity.
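Putting the pieces together, the application network profile (fvAp in the object model) is the container that holds the EPGs, with each EPG providing or consuming contracts. A hedged sketch continuing the hypothetical names from the earlier examples:

```python
# An application network profile grouping two EPGs for one application.
# The Web tier provides the web-traffic contract and the App tier
# consumes it, so only that contract's filters pass between them.
app_profile = {
    "fvAp": {
        "attributes": {"name": "Ecommerce"},
        "children": [
            {"fvAEPg": {
                "attributes": {"name": "Web"},
                "children": [
                    {"fvRsProv": {"attributes": {"tnVzBrCPName": "web-traffic"}}},
                ],
            }},
            {"fvAEPg": {
                "attributes": {"name": "App"},
                "children": [
                    {"fvRsCons": {"attributes": {"tnVzBrCPName": "web-traffic"}}},
                ],
            }},
        ],
    }
}
```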

Private Networks, Bridge Domains and Subnets

A private network is simply an L3 forwarding domain. When added to a context, it acts just like a VRF in the traditional networking world, which allows private networks to have overlapping IP addresses without conflict. Bridge domains, or simply BDs, are responsible for L2 forwarding, like a VLAN, except that you aren’t subject to the limitations of VLANs on a traditional network, such as the 4,096-VLAN ceiling. A subnet is defined under a bridge domain and creates a gateway, much like a Switched Virtual Interface (SVI); the gateway is a logical interface that can exist on multiple nodes in the fabric.
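These, too, are objects: a bridge domain (fvBD) references its context (fvRsCtx) and carries its subnets (fvSubnet), whose gateway address behaves like the SVI described above. A hedged sketch reusing the hypothetical Finance-VRF from earlier:

```python
# A bridge domain tied to the Finance-VRF context, with one subnet.
# The subnet's gateway IP becomes a logical interface available on
# every leaf where the bridge domain is deployed, much like an SVI.
bridge_domain = {
    "fvBD": {
        "attributes": {"name": "Finance-BD"},
        "children": [
            {"fvRsCtx": {"attributes": {"tnFvCtxName": "Finance-VRF"}}},
            {"fvSubnet": {"attributes": {"ip": "10.10.10.1/24"}}},
        ],
    }
}
```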

In this post, I just scratched the surface of Cisco’s ACI by covering some basic concepts and terminology. Cisco’s ACI, and SDN in general, are changing the way network administrators and engineers approach the design and administration of networks. New skills like basic scripting and programming will be required of network engineers as software takes on a more prominent role in the data center.

Cisco ACI and Software Defined Networking – Part 1

Software Defined Networking, or SDN, has taken the networking world by storm in the last few years. The goal of SDN is to bring the benefits of virtualization to networking, like we’ve seen in the server world. Decoupling the software from the networking stack gives organizations the agility, intelligence, and centralized management needed to address rapidly changing environments.

Cisco was a little late entering the SDN game. Back in November 2013, it announced Cisco Application Centric Infrastructure (ACI) alongside the $863 million purchase of Insieme Networks. The goal of ACI is to deploy network services to meet the needs of a specific application, an approach that changed the paradigm for how we build and deploy networks. As of this writing, Cisco ACI is only supported on the Nexus 9000 platform and is “hardware ready” on the Nexus 5600s. Cisco ACI is managed by an Application Policy Infrastructure Controller, or APIC.

So how does SDN change the way we look at networks? To really understand this shift, we need to look back at how we built networks.

Traditional Networks

The traditional network was built using a three-tier design that is still widely used today. Those who have worked with Cisco are most likely familiar with it: core, aggregation, and access layers. Here is a breakdown of each layer.

Core Layer – The core is the backbone of the network, where speed and reliability are paramount. Routing and other network services are kept off this layer.

Aggregation Layer – The aggregation layer handles routing between subnets, QoS, security, and other network services.

Access Layer – The access layer contains endpoint devices like desktops, servers, and printers. Its priority is delivering packets to the appropriate endpoint device.

This three-tier design was solid for campus networks but started to run into limitations in the data center. The primary cause was server virtualization: when VMware exploded onto the scene 8 to 10 years ago, we saw a large increase in east-west traffic, and in large-scale VMware deployments the three-tier design ran into scalability issues. 10 Gigabit Ethernet helped address some of those challenges but didn’t solve everything.

There were also challenges around redundancy in the three-tier design. Redundancy is imperative inside the data center, but the protocols we used in the networks of yesterday aren’t sufficient for today’s data center and the cloud. Spanning Tree Protocol (STP) was designed to prevent loops in Ethernet networks, and variations of it have been around for decades. For STP to prevent loops, it has to block one of the redundant ports in the network; from a design standpoint this isn’t very efficient and can waste a lot of potential bandwidth. The other challenge was slow re-convergence after a failure, meaning some kind of outage typically occurred, with its impact depending on the size of the network.

Network virtualization technologies like Virtual Port Channel (vPC) and Virtual Switching System (VSS) helped address some of these issues by creating logical switches that appear, from an STP standpoint, to be a single switch with a single path. In that scenario spanning tree does not block the redundant link, giving you more efficient use of your bandwidth. It’s something of a smoke-and-mirrors method, but it works well to address the challenges around Spanning Tree.

Traditional network management was done primarily via the CLI, or sometimes via a management GUI. Both methods were limiting in large-scale environments, and automation was a challenge at best. Cisco IOS operated on a flat configuration file and didn’t have any APIs that allowed for network programmability, so most network administrators would copy the running configuration from one switch and paste it onto another with a few tweaks made in a text editor. This method was only so scalable and was prone to human error.

The biggest limitation was the device-by-device approach to administration and the inconsistent or orphaned configurations it produced, which typically caused issues that were difficult to troubleshoot and very time-consuming to remediate. The open nature of these networks also tended to make them insecure, because security wasn’t tightly integrated into the design and didn’t start from a restrictive posture. Lack of full traffic visibility became a big challenge, especially in virtualized environments where the network team had no visibility.

Spine-Leaf Networks and ACI

An ACI network uses a leaf-spine architecture, which collapses and simplifies the three-tier design into a Clos architecture. The result is closer to a mesh topology, with every leaf switch connecting to every spine and vice versa; leaves don’t connect to each other, and neither do spines. This design reduces latency and gives you an optimal fabric where all paths on the network are forwarding. Here is the breakdown.

Leaf – The leaf is the access layer where endpoint devices connect.

Spine – The spine is the backbone layer that interconnects the leaves.

Pretty simple, right? The spine-leaf design doesn’t remove the limitations of Spanning Tree so much as replace it, with FabricPath (or TRILL, the open standard) handling loop prevention. FabricPath treats loop prevention like a link-state routing protocol, the kind designed for very large WAN environments. Using FabricPath allows a data center network to scale beyond what Spanning Tree is capable of, while keeping all paths on the network active and in a forwarding state. It gives the network the level of intelligence you see in large WAN networks, where the ability to scale is key.
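The wiring rules are simple enough to model in a few lines. A toy Python sketch, with made-up switch counts, showing why every leaf-to-leaf conversation is exactly two hops with one equal-cost path per spine:

```python
from itertools import product

# Toy leaf-spine fabric: every leaf links to every spine, but leaves
# never link to leaves and spines never link to spines.
leaves = [f"leaf{n}" for n in range(1, 5)]
spines = [f"spine{n}" for n in range(1, 3)]

links = list(product(leaves, spines))
print(len(links))  # 4 leaves x 2 spines = 8 links, all forwarding

# Every leaf-to-leaf path is leaf -> spine -> leaf, one per spine,
# so any two endpoints are always exactly two hops apart.
paths = [(a, s, b) for a, b in product(leaves, leaves) if a != b
         for s in spines]
print(len(paths) // (len(leaves) * (len(leaves) - 1)))  # 2 paths per pair
```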

Overlay Networking

A leaf-spine topology utilizes VXLAN, or Virtual Extensible LAN, as an overlay network to scale from an IP subnet standpoint. VXLAN is a tunneling protocol that encapsulates a virtual network. It supports up to 16 million logical networks, a considerable increase over the 4,096 logical networks that VLANs support, accomplished by adding a 24-bit segment ID to the frame. But room for more logical networks is not the only thing VXLAN brings to the table: it allows for the migration of virtual machines across long distances, which is important for the software-defined data center, and it allows for centralized management of network traffic, making the network operate as a collective rather than as individual, independent devices. NVGRE is another network overlay technology that’s gaining momentum; I will discuss it in a future blog post.
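The arithmetic behind those numbers comes straight from the header: a VLAN ID is 12 bits (2^12 = 4,096), while the VXLAN segment ID is 24 bits (2^24 = 16,777,216). A minimal sketch of packing the 8-byte VXLAN header defined in RFC 7348, for illustration only:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header for a given 24-bit segment ID."""
    flags = 0x08 << 24          # I-bit set: the VNI field is valid
    return struct.pack("!II", flags, vni << 8)  # VNI sits above 8 reserved bits

print(2 ** 12)                   # 4096 VLANs
print(2 ** 24)                   # 16777216 VXLAN segments
print(vxlan_header(5000).hex())  # '0800000000138800'
```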

Now that we know some of the fundamental components that make up a software-defined network, we can dive deeper into ACI and break down how it works. Stay tuned for my next blog post.