After watching James Hamilton's keynote at AWS re:Invent 2016, I came away with the impression that there are elements that relate to Enterprise IT. So I extracted the networking section and recorded some commentary and reactions in video format.
Here is my first attempt at doing some video. I would be interested in hearing what you think about it.
Here are the notes I prepared.
AWS Private WAN
* need to control the latency and loss between data centres
* does this apply to Enterprises – is this a validation of private WAN?
* Amazon is running hundreds or even thousands of services over their networks, and when working at this scale it would be quite difficult to work out where a problem is.
* So building and operating your own WAN makes sense.
* Keep in mind that it's actually a small network – 68 POPs, one customer. LOTS OF BANDWIDTH but nothing like a commercial SP with millions of POPs and many thousands of customers. Different problems.
* 100G WAN makes the physical layer hard, not the operations or software.
* HA DCs are really similar to enterprise design
* only 14 regions (some enterprises have more than that)
* simplicity is the winner – less is more.
* Transit Center is a “telecom closet” and similar principles apply in Enterprise design. It's where the WAN meets the LAN.
Fibre Cabling Design
* attention to detail is important. The physical nature of cables is key to AWS success.
* Most enterprises don’t focus on physical cabling as a key success factor in reliability and uptime.
* 80/20 rule – once you scale beyond a certain point, the gains from careful hardware design, good organisation and focus stop delivering
What a fine rant on networking
* the network device as a mainframe isn't something I'd actually realised, but it's a strong analogy.
* Once you break up the pieces, innovation has happened at a faster pace for servers and applications. It will happen in networking.
* Whitebox isn't ONLY about cost – it's also about reliability.
* Show judgement and keep it simple. Software overlays deliver on the promise of keeping it simple.
* Vendor support is a problem
* duplicating the problem is difficult or impossible (as we all know)
* if every network is a snowflake (they aren't), then duplicating the problem is your cost, not theirs. Wrong incentives.
* 25G Ethernet is the future. 10G is over.
* 40G is almost 4 times the optic cost.
* 25G almost the same price as 10G
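As a rough back-of-envelope on the optic economics, here is a small sketch. The dollar figures are invented placeholders; only the ratios follow the talk (25G roughly the same price as 10G, 40G roughly 4x the optic cost):

```python
# Back-of-envelope cost-per-gigabit comparison for 10G / 25G / 40G optics.
# Prices are illustrative placeholders, not real quotes.
optics = {
    "10G": {"gbps": 10, "optic_cost": 100},
    "25G": {"gbps": 25, "optic_cost": 110},   # roughly the same price as 10G
    "40G": {"gbps": 40, "optic_cost": 400},   # roughly 4x the 10G optic cost
}

for name, o in optics.items():
    cost_per_gbps = o["optic_cost"] / o["gbps"]
    print(f"{name}: ${o['optic_cost']} per optic -> ${cost_per_gbps:.2f} per Gbps")
```

On those assumptions, 25G is by far the cheapest per gigabit – which is exactly the point being made.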
ASICs and Custom Hardware
* Broadcom ASICs – but he also mentioned Cavium, Mellanox, Marvell and Barefoot.
* 13 Terabit speeds coming.
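To put 13 Terabits of switching silicon in perspective, a quick bit of my own arithmetic (the port speeds are my assumption, not from the talk):

```python
# How many line-rate ports a ~13 Tbps switching ASIC could drive.
# The 13 Tbps figure is from the keynote; the port speeds are assumptions.
asic_tbps = 13
for port_gbps in (25, 100, 400):
    ports = (asic_tbps * 1000) // port_gbps
    print(f"{port_gbps}G ports per ASIC: {ports}")
```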
Software Defined Networking
* need to have it to offer multi-tenant networking.
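Multi-tenancy is normally delivered with an overlay that stamps every tenant's packets with a virtual network identifier. AWS's encapsulation is proprietary, so as a minimal sketch of the idea here is a VXLAN-style header (24-bit VNI per tenant) built by hand – an illustration of the mechanism, not what AWS actually runs:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header carrying a 24-bit tenant identifier (VNI)."""
    flags = 0x08 << 24                 # 'I' flag set: the VNI field is valid
    vni_field = (vni & 0xFFFFFF) << 8  # VNI sits in bits 8-31 of the second word
    return struct.pack("!II", flags, vni_field)

# Two tenants sharing the same physical network, isolated by their VNIs.
tenant_a = vxlan_header(vni=1001)
tenant_b = vxlan_header(vni=2002)
print(tenant_a.hex(), tenant_b.hex())
```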
Using Hardware Adapters to accelerate SDN performance
* offload the network and leave the resources of the servers for applications.
* Co-processor 1/10th power, 1/10th latency, 1/10th the cost.
* Lower latency improves performance, and using h/w acceleration in NICs improves latency by big margins.
* This is so important that they make their own NIC silicon.
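The “1/10th” figures are relative rather than absolute, but even as rough numbers they show why offload wins. A back-of-envelope sketch, with made-up baseline values just to illustrate the ratios from the talk:

```python
# Illustrative comparison of packet processing in host software vs. offloaded
# to a NIC co-processor. The baseline numbers are invented; only the ~1/10th
# ratios come from the talk.
baseline = {"power_w": 50.0, "latency_us": 30.0, "cost_usd": 500.0}

offload = {k: v / 10 for k, v in baseline.items()}  # 1/10th power, latency, cost

for metric in baseline:
    print(f"{metric}: host={baseline[metric]:g}  nic_offload={offload[metric]:g}")

# The host CPU cycles saved go back to the applications instead of being
# burned on moving packets.
```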