This week Greg was configuring spanning tree in the data centre and ran into a problem with a switch cluster that wasn’t working properly.
How much networking do you need in a data centre? Let’s say you purchase 2 x 32-port 40GbE switches (a common Trident2 configuration) for USD$30K and use QSFP breakouts to give you 4 x 10GbE per 40GbE port.
For easier maths, let’s assume you use 25 of the 40GbE ports to give you 100 x 10GbE ports in total. That’s enough ports for 50 blade chassis with 2 x 10GbE uplinks each. Each chassis holds eight servers, for a total of 400 physical servers.
At a low density of 20 VMs per physical server, that makes 20 x 400, or 8,000 VMs.
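The capacity maths above can be sketched in a few lines; the figures (25 x 40GbE ports used, 4-way breakout, 2 uplinks per chassis, 8 servers per chassis, 20 VMs per server) are taken straight from the example:

```python
# Back-of-the-envelope capacity maths for the two-switch design above.
ports_40g = 25                  # 40GbE ports used across the pair of switches
breakout = 4                    # QSFP breakout: 4 x 10GbE per 40GbE port
ports_10g = ports_40g * breakout            # 100 x 10GbE ports

uplinks_per_chassis = 2                     # 2 x 10GbE per blade chassis
chassis = ports_10g // uplinks_per_chassis  # 50 blade chassis

servers = chassis * 8                       # 8 servers per chassis -> 400
vms = servers * 20                          # 20 VMs per server -> 8,000

print(ports_10g, chassis, servers, vms)     # 100 50 400 8000
```

Change any of the inputs (breakout ratio, uplinks per chassis, VM density) and the totals scale accordingly, which is the point: two cheap switches go a surprisingly long way.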
You don’t even need a network to make that work, just two switches.
The Nexus 9000 is really cheap. So how will Cisco make its revenue? What will be the actual price of ACI licences? For that matter, what about VMware NSX? Both companies have spent around a billion dollars to build these products and will expect to recover that cost. Will customers pay for expensive software?
The Nexus 9000 has limited L2 and L3 features and may not be the right solution to replace your existing switches. It’s not clear whether it supports MLAG and similar features.
Nexus 7000 and ACI/APIC integration is likely to come with limitations or specific criteria. Will it need new line cards? What about supervisor upgrades? And will NX-OS Plus arrive on time and in good, reliable condition?
Noting the similarities between Juniper QFabric and Cisco ACI, and the vindication of Juniper’s strategy.
Concerns about the lack of standards in SDN and specifically the lack of certainty around interoperability between ACI, NSX, OpenDaylight and many others.
Talking about the fact that the data centre is not one single network but many small networks connected together.