Ten years ago, MPLS was state of the art for Wide Area Networks. Larger branch offices might even have had redundant MPLS T1 circuits from different providers for diversity. The Software-Defined WAN (SD-WAN) and the shift to delivering services over the Internet have transformed branch office networking in recent years.
With bandwidth-intensive Internet services, hub-and-spoke architectures built on MPLS circuits are not cost effective. And the revenue per bit of bandwidth will fall below the cost per bit sometime in 2017. But these are only some of the challenges facing traditional telecom providers.
The backbones of web scale service providers like Amazon (see below), Facebook, Google, and Microsoft rival those of traditional telecom providers. They have their own global fiber networks, help fund new submarine cables, and design their own network hardware and software. It is hardly surprising that AT&T and Verizon have worked hard to transform themselves into consumer services and media companies.
What about connecting enterprise data centers to the public cloud? AT&T NetBond, Verizon Secure Cloud Interconnect, and Level 3 Cloud Connect are managed services that use the service provider’s shared MPLS network for transport. But these services place the enterprise in “walled gardens” without carrier diversity or control over the geography of the circuits. For high-bandwidth connections, dedicated DWDM wavelengths are more cost-effective and offer lower latency, without the overhead of a shared MPLS backbone.
In the diagram below, enterprise data centers leverage the backbone of the public cloud providers to connect across multiple regions. Charges are only for data transfer out (think the last lines of “Hotel California”) and amount to only pennies per gigabyte. The enterprise pays for DWDM circuits (green lines) to cloud hubs in multiple regions. The enterprise equipment (routers, firewalls, appliances, etc.) located at the hub can reach multiple cloud providers directly or via a cloud exchange.
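To make the egress economics concrete, here is a back-of-the-envelope comparison; the per-gigabyte rate, monthly traffic, and circuit cost below are illustrative assumptions, not quoted prices.

```python
# Rough comparison of cloud "data transfer out" charges vs. a fixed-rate circuit.
# All three figures are assumptions for illustration, not quotes from any provider.

egress_rate_per_gb = 0.02          # assumed inter-region egress rate, $/GB
monthly_transfer_gb = 50_000       # assumed traffic between regions, GB per month
mpls_circuit_monthly_cost = 5_000  # assumed monthly cost of an equivalent MPLS circuit, $

egress_cost = egress_rate_per_gb * monthly_transfer_gb
print(f"Cloud backbone egress: ${egress_cost:,.0f}/month")
print(f"Equivalent MPLS circuit: ${mpls_circuit_monthly_cost:,.0f}/month")
```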
All cloud providers connect their regions to both public and private networks. Most often the public network (aka the Internet) is used to deliver customer-facing applications or services. Private networking (aka hybrid cloud) is used to deliver applications to internal clients and for access to private databases and applications.
Cloud providers also connect their regions together over private backbones. AWS (beige) provides access to its public side via BGP peering over a “public vlan” transit network. VPNs from a Virtual Private Cloud (VPC) can travel over the AWS backbone to transit networks in any region. The VPN tunnel terminates in the Enterprise Data Center or any Cloud Hub without leaving AWS infrastructure.
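A minimal sketch of standing up such a VPC VPN with boto3 is shown below; the region, VPC ID, peer address, and ASN are placeholders for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Virtual private gateway attached to the VPC (VPC ID is a placeholder).
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0123456789abcdef0")

# Customer gateway represents the router in the enterprise data center or Cloud Hub.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",   # placeholder public address of the hub router
    BgpAsn=65000,              # placeholder private ASN
)["CustomerGateway"]

# The IPsec tunnels ride the AWS backbone to the region's transit network.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
)["VpnConnection"]
print(vpn["VpnConnectionId"])
```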
By default, per-VPC hardware-based VPNs are limited to 500Mb, but this limit can be raised to 1Gb on request. The per-VPC bandwidth of “private vlans” via a Cloud Exchange is set when the virtual circuit is defined and varies from 50Mb to 500Mb. With a dedicated 1Gb or 10Gb connection that bypasses the Cloud Exchange, all vlans share the overall connection bandwidth.
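On the dedicated-connection side, each “private vlan” corresponds to a private virtual interface on the Direct Connect port. A sketch using boto3 follows; the connection ID, VLAN tag, ASN, and gateway ID are placeholders.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Each private virtual interface is one "private vlan" sharing the 1Gb/10Gb port.
vif = dx.create_private_virtual_interface(
    connectionId="dxcon-ffffffff",             # placeholder dedicated connection ID
    newPrivateVirtualInterface={
        "virtualInterfaceName": "hub-to-vpc-east",
        "vlan": 101,                           # 802.1Q tag on the dedicated port
        "asn": 65000,                          # placeholder BGP ASN of the hub router
        "virtualGatewayId": "vgw-0123456789abcdef0",
    },
)
print(vif["virtualInterfaceId"], vif["virtualInterfaceState"])
```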
With Azure (blue), access to its public side provides low-latency connectivity to Office 365, Exchange, and other Microsoft-hosted services. But VPNs from Virtual Networks (VNets) are software-based and limited to 100Mb per tunnel. Like AWS, the per-VNet bandwidth of “private vlans” via a Cloud Exchange is set when the virtual circuit is defined and varies from 50Mb to 500Mb. But that virtual circuit can connect to VNets in any US region across the Azure backbone. And a single VNet can connect to ExpressRoute connections across multiple regions.
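For comparison, here is a sketch of ordering an ExpressRoute circuit through a Cloud Exchange provider with the Azure Python SDK; the subscription, resource group, provider, peering location, and bandwidth are illustrative placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Bandwidth is fixed when the circuit (the virtual circuit) is defined.
poller = client.express_route_circuits.begin_create_or_update(
    "hub-rg",               # placeholder resource group
    "hub-expressroute",     # placeholder circuit name
    {
        "location": "eastus",
        "sku": {
            "name": "Standard_MeteredData",
            "tier": "Standard",
            "family": "MeteredData",
        },
        "service_provider_properties": {
            "service_provider_name": "Equinix",   # example Cloud Exchange provider
            "peering_location": "Washington DC",
            "bandwidth_in_mbps": 200,             # chosen at circuit definition time
        },
    },
)
circuit = poller.result()
print(circuit.service_key)  # handed to the exchange provider to provision the circuit
```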
Cloud Hubs also facilitate cloud-to-cloud integration (see below). An application in Azure can call an API gateway in AWS over the private network without hairpinning back to the enterprise data center. You can even pass the traffic through firewalls located in the Cloud Hub if required. These geographically dispersed colocation facilities can also be used for out-of-region tape backup, regional hubs for branch offices, or remotely located managed private clouds.
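As a simple illustration of that east-west path, a hypothetical call from an Azure-hosted application to a private API endpoint in AWS reachable over the Cloud Hub; the hostname and route are assumptions, not real resources.

```python
import requests

# Hypothetical private DNS name for an API gateway in AWS, resolvable only on the
# private network; the request traverses the Cloud Hub rather than the public Internet.
PRIVATE_API = "https://orders-api.internal.example.com/v1/orders"

resp = requests.get(PRIVATE_API, timeout=5)
resp.raise_for_status()
print(resp.json())
```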