HSRP is a Cisco-proprietary first hop redundancy protocol that provides transparent failover of the first-hop gateway. Many technologies have been slightly modified to use it efficiently. In this article I will explain Anycast HSRP, but first I want to explain how HSRP basically works.
HSRP has versions 1 and 2. Version 2 supports MD5 authentication and extends the group number range. Versions 1 and 2 also use different virtual MAC address ranges (0000.0C07.ACxx for v1, 0000.0C9F.Fxxx for v2), which can be important for OTV implementations. BFD support with version 2 is also important for efficiently detecting failure of the active HSRP router, since tightening control plane timers can affect performance negatively; many BFD implementations run in the data plane (often in hardware), which is why failure detection stays fast without loading the control plane.
In figure-1, HSRP v2 is enabled between two gateway devices. HSRP uses one virtual IP and one virtual MAC (same as VRRP, different from GLBP). In a distributed control plane design like the one in figure-1, one router is elected active and the other standby based on the priority value. The higher priority wins, since HSRP is a Layer 3 protocol (yes, you can generalize: if it is Layer 2, lower wins).
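As a minimal sketch of such a setup on NX-OS (the interface, VLAN, key, and addresses here are hypothetical), the following would make GW-1 the active gateway with version 2, MD5 authentication, and BFD-based peer failure detection:

    ! GW-1 (intended active)
    feature hsrp
    feature bfd
    feature interface-vlan

    interface Vlan10
      no shutdown
      ip address 10.1.1.2/24
      hsrp version 2
      hsrp bfd                                ! fast failure detection in the data plane
      hsrp 10
        authentication md5 key-string MyHsrpKey
        ip 10.1.1.1                           ! virtual IP shared by both gateways
        priority 110                          ! higher priority wins the election
        preempt

GW-2 would be identical except for its own interface address (for example 10.1.1.3/24) and a lower priority such as 100, which makes it standby.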
If the control plane were centralized, as in Cisco VSS, you would not need to deploy HSRP, since the SVI would be hosted on both devices and both devices would still actively forward the traffic.
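To illustrate the contrast (address and VLAN again hypothetical): on a VSS pair, a plain SVI with a real IP address is the gateway, and no standby group is configured, because the two chassis behave as one logical switch:

    interface Vlan10
     ip address 10.1.1.1 255.255.255.0
    ! no HSRP/standby commands needed; both VSS chassis forward for this SVI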
For the data center, since the idea was to implement FCoE on the Nexus 7000 and keep the fabric topologies separate, MLAG has been implemented by Cisco in a slightly different way. On the Nexus 7000, vPC is used, and it is different from VSS. The difference matters from a first hop redundancy protocol point of view: vPC runs on a maximum of two devices, and both devices keep their own control plane.
With vPC we have separate control planes and separate data planes, so we still need a first hop redundancy protocol. Unfortunately GLBP is not supported, but both HSRP and VRRP are supported with vPC.
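A minimal vPC skeleton on NX-OS looks roughly like this (the domain ID, port-channel numbers, and keepalive addresses are hypothetical); the HSRP configuration shown earlier then goes on the SVIs of both peers:

    feature vpc
    feature lacp

    vpc domain 1
      peer-keepalive destination 192.168.0.2 source 192.168.0.1

    interface port-channel10
      switchport mode trunk
      vpc peer-link            ! inter-peer link, also carries orphan-port traffic

    interface port-channel20
      switchport mode trunk
      vpc 20                   ! downstream MLAG toward an access switch or server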
Figure-2 shows one pair of Nexus devices in each data center. Although the picture shows the right Nexus as HSRP standby, it still forwards traffic, since the same virtual IP and virtual MAC are used actively by both devices. The picture also shows a PACL (Port ACL) between the data centers. First hop redundancy protocol isolation is an important concept, so let me explain.
Assume northbound traffic is coming into Site-1, but the destination is in Site-2, and nothing is implemented on the north side of the network for optimal path selection (LISP, IP Mobility, DNS). Although the traffic passes over the data center interconnect link and reaches the destination in Site-2, we at least want to send the return traffic directly from Site-2 to the north to prevent triangulation.
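As a sketch of such a filter (the interface is hypothetical; HSRPv1 hellos go to 224.0.0.2 and HSRPv2 hellos to 224.0.0.102, both on UDP port 1985), a PACL on the DCI-facing port can drop HSRP hellos so each site elects its own active gateway:

    ip access-list HSRP_ISOLATION
      deny udp any 224.0.0.2/32 eq 1985       ! HSRP v1 hellos
      deny udp any 224.0.0.102/32 eq 1985     ! HSRP v2 hellos
      permit ip any any

    interface Ethernet1/1
      ip port access-group HSRP_ISOLATION in  ! applied as a PACL on the DCI link

In practice the HSRP virtual MAC is usually filtered as well, so that it is not learned across the DCI.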
FHRP isolation might seem to solve every problem, but in real life it may not be so easy. Assume you have stateful devices in front of the gateway devices: if traffic hits Site-1 and the Site-2 devices don't have a matching state in their tables, the traffic is dropped. So either you accept triangulation and implement source-based NAT on the devices, or you carry the state information between sites (Cisco ASA clustering), although there are arguments that this is not a good idea if the data center interconnect link fails.
Let’s turn to our main topic. So far we have covered how HSRP interacts with classical distributed control plane switches without Layer 2 multipath (MLAG in this case), then the centralized control plane with VSS, and lastly the distributed control plane with MLAG, which is vPC.
The limitation of vPC is that only two switches can act as one logical device. But if you want to get rid of spanning tree, at least in the core of the architecture, and build a more scalable design so more host ports can be supported, large-scale bridging can be an option, although since network overlays such as VXLAN and NVGRE appeared there has been a lot of discussion about it. FabricPath is Cisco's large-scale bridging solution.
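For context, enabling FabricPath on NX-OS is roughly this (switch ID, VLAN, and interface are hypothetical):

    install feature-set fabricpath
    feature-set fabricpath

    fabricpath switch-id 11        ! unique per switch in the fabric

    vlan 10
      mode fabricpath              ! this VLAN is carried over the fabric

    interface Ethernet2/1
      switchport mode fabricpath   ! core-facing FabricPath link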
Anycast HSRP is applicable to FabricPath. We are not limited to a maximum of two devices as in VSS or vPC. (Juniper and HP, by the way, support more than two devices in their MLAG solutions without large-scale bridging and a leaf/spine architecture.)
Beginning with release 6.2(2), Cisco supports Anycast HSRP on the Nexus 7000, so the limit for Layer 3 forwarding at the spine layer is no longer two. I know there has been some discussion about FabricPath and its Layer 3 forwarding limitations, so it is important to have this feature if you decide to implement a leaf/spine architecture with Cisco as the vendor.
First, all leaf and spine switches have to have the Anycast HSRP feature in their software, so a code upgrade might be necessary. They also have to support HSRP v2, since Anycast HSRP works only with HSRP v2. The code currently supports a maximum of four devices as HSRP gateways.
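On the spine switches, the configuration is built around an anycast bundle that ties an emulated switch ID to the gateway VLANs. A sketch along these lines (bundle ID, switch ID, VLAN, and addresses are hypothetical; check the 6.2(2) configuration guide for the exact syntax on your platform):

    feature hsrp

    hsrp anycast 1 ipv4
      switch-id 1000        ! emulated anycast switch ID advertised into IS-IS
      vlan 10               ! gateway VLAN(s) served by this bundle
      no shutdown

    interface Vlan10
      no shutdown
      ip address 10.1.1.2/24
      hsrp version 2        ! Anycast HSRP requires version 2
      hsrp 10
        ip 10.1.1.1

Each of the (up to four) spines gets the same bundle and virtual IP, with its own interface address.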
Behind the scenes, the anycast switch ID is advertised by the spine switches, and IS-IS, running SPF, calculates the cost to that switch ID and can use all four nodes, so Layer 2 ECMP is achieved.
After you enable Anycast HSRP, one device is elected active, one standby, and all the other devices stay in listen state. The difference is that all nodes respond with the same virtual MAC and all forward actively.
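To verify, the usual show commands apply (anycast-specific variants depend on the NX-OS release):

    show hsrp brief        ! state (active/standby/listen), priority, virtual IP
    show fabricpath route  ! IS-IS paths, including those toward the anycast switch ID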
If the devices that are not in the active state can forward the traffic, why then is one device active and another standby, in both vPC and FabricPath topologies?
Assume you have a device connected only to the HSRP standby device on an orphan port, which means a port that is not part of a vPC. The traffic for that device will be handled by the active HSRP device, so the vPC peer-link will be used for devices that are not connected through a vPC.
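You can list such ports on the vPC peers with:

    show vpc orphan-ports   ! ports in vPC VLANs that are not vPC members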