Cisco recently announced ACI 3.0, the latest release of the company’s data center fabric product.
New features in 3.0 include the ability to connect and manage geographically disparate data centers (what Cisco calls ACI Multi-Site), and integration with Kubernetes container orchestration software.
Connect 4 (Or 5)
ACI Multi-Site lets companies connect multiple data centers to balance workloads, share resources, and provide disaster recovery/business continuity (DR/BC) capabilities. Cisco says it has tested support for up to five data centers, but expects that number to go up over time.
As you might expect, for a data center to be incorporated into a Multi-Site domain, it must already run its own ACI fabric, including the APIC controller and Nexus EX-generation or newer switches.
Nexus generations prior to EX can still be used in an individual ACI fabric, but can’t be used for inter-site connectivity.
ACI Multi-Site is delivered as a virtual appliance that can run on a standard COTS server. This virtual appliance creates a single policy domain and a single namespace that stretches across all the individual data center fabrics.
From this appliance, administrators can configure how services and resources interact. For instance, a Web tier in site A could be configured to connect to application and database tiers in site B, which could be in a data center across town or in a different state.
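Under the covers, that kind of cross-site wiring comes down to policy objects pushed through ACI's REST API. Here is a minimal sketch of the JSON an administrator's script might build to attach a web endpoint group (EPG) as a consumer of a contract exposed by the database tier. The managed-object class names (`fvAEPg`, `fvRsCons`) are standard ACI classes, but the tenant, application profile, EPG, and contract names are hypothetical, and this is an illustration rather than a complete Multi-Site workflow.

```python
import json

def consume_contract_payload(tenant, app_profile, epg, contract):
    """Build an APIC-style payload attaching an EPG as a consumer of a contract."""
    return {
        "fvAEPg": {
            "attributes": {
                # Distinguished name of the EPG being modified
                "dn": f"uni/tn-{tenant}/ap-{app_profile}/epg-{epg}",
            },
            "children": [
                # fvRsCons: relationship marking this EPG as a contract consumer
                {"fvRsCons": {"attributes": {"tnVzBrCPName": contract}}}
            ],
        }
    }

# Hypothetical names: the "web" EPG in tenant "prod" consumes "db-services"
payload = consume_contract_payload("prod", "shop", "web", "db-services")
print(json.dumps(payload, indent=2))
```

A script would POST such a payload to the controller's REST endpoint; with Multi-Site, the single policy domain means the consuming and providing EPGs can sit in different data centers.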
ACI Multi-Site can also support live and cold VM migrations, and enable DR/BC use cases.
ACI Multi-Site also provides a general health score for individual sites.
Under the hood, ACI Multi-Site builds out a control plane using MP-BGP EVPN. Spine nodes in the individual data centers establish MP-BGP EVPN sessions to exchange MAC and IP address reachability information.
For the data plane, ACI Multi-Site uses VXLAN tunnels to move traffic between the sites.
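For a concrete sense of what that tunneling adds per packet, here is a sketch that builds the 8-byte VXLAN header defined in RFC 7348: one flags byte (the "I" bit marking the VNI as valid), reserved bytes, and a 24-bit VXLAN Network Identifier (VNI) that keeps each tenant's traffic segmented inside the tunnel. This is generic VXLAN, not Cisco-specific code, and the example VNI is arbitrary.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" bit: the VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that precedes the inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # Layout: flags (1 byte), 3 reserved bytes, VNI (3 bytes), 1 reserved byte
    return struct.pack("!B3x", VXLAN_FLAG_VNI_VALID) + vni.to_bytes(3, "big") + b"\x00"

def vxlan_vni(header: bytes) -> int:
    """Read the VNI back out of a VXLAN header."""
    return int.from_bytes(header[4:7], "big")

hdr = vxlan_header(10042)  # arbitrary example VNI
assert len(hdr) == 8
assert vxlan_vni(hdr) == 10042
```

The inner frame rides inside this header plus outer UDP/IP headers, which is how Layer 2 traffic between sites crosses an ordinary routed WAN.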
While ACI Multi-Site can stretch Layer 2 domains across data centers, Cisco says each individual ACI fabric retains its own local control plane via its APIC controller, and policies can be configured to prevent a failure in one data center from spreading to other sites.
More details about the network architecture are available in this Cisco white paper.
All Aboard The K8s Train
Also new with ACI 3.0 is support for Kubernetes, the open-source container orchestration software.
To support Kubernetes, a Virtual Machine Manager (VMM) domain specifically for Kubernetes is created on the APIC controller. APIC communicates with the container hosts via Cisco's OpFlex southbound protocol, with Open vSwitch (OVS) handling the data path on each host.
ACI can support both native Kubernetes network policies and Cisco's own endpoint groups and contracts. The goal is to let application developers work with familiar Kubernetes constructs, while also allowing network and security teams to layer on additional policies and controls via ACI.
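To make the developer side of that concrete, here is the kind of native Kubernetes NetworkPolicy a developer might write, expressed as the Python dict a tool would serialize to YAML. It admits traffic to "db" pods only from pods labeled `app=web` on TCP port 5432. The namespace, labels, and port are hypothetical; the point is that this standard construct is what an integration like ACI's would translate into its own EPG/contract model.

```python
# A native Kubernetes NetworkPolicy as a Python dict (serializable to YAML/JSON).
# Namespace, labels, and port are illustrative placeholders.
db_ingress_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-web-to-db", "namespace": "shop"},
    "spec": {
        # Applies to pods labeled app=db in the "shop" namespace
        "podSelector": {"matchLabels": {"app": "db"}},
        "policyTypes": ["Ingress"],
        "ingress": [{
            # Only pods labeled app=web may connect, and only on TCP/5432
            "from": [{"podSelector": {"matchLabels": {"app": "web"}}}],
            "ports": [{"protocol": "TCP", "port": 5432}],
        }],
    },
}

print(db_ingress_policy["metadata"]["name"])
```

Developers keep writing policies like this; the network team can then overlay ACI contracts on top without changing the application manifests.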
The Kubernetes integration lets network operators provide automatic load balancing for containers, and can isolate Kubernetes tenants.
The integration also gives the APIC controller visibility into the container environment. The controller gathers information such as nodes, namespaces, and services. Cisco says this information can be correlated with telemetry data from the fabric to monitor basic network operations and performance.