Networks support applications. Okay, that might seem a little obvious, but it needs to be said from time to time. 🙂 Because of this, I often find it useful to get a better grip on the applications people are putting on networks, and how they expect the network to behave. In that vein, I picked up Cloud Architecture Patterns recently. Cloud computing is something that’s “in the air,” but there are precious few books that actually explain the architecture of cloud systems, especially without diving into lots of detail about orchestration and other operational matters.
Bill Wilder begins his book on cloud architecture with a primer on scalability. What does it mean to scale from an application perspective, particularly in a cloud environment? What is the difference between horizontal and vertical scaling, or scale up and scale out? What does it mean to say an application is “cloud native?” The second chapter drives this theme forward, covering the horizontal scale out pattern. What’s important here, for the network engineer, is how the horizontal scale out pattern interacts with the network. Scale out assumes the network will have enough parallel lanes to direct incoming sessions to a variable number of compute resources; hence it drives virtualization and interacts with quality-of-service concerns.
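The core assumption behind scale out is that any node can serve any request, so capacity grows by adding instances rather than buying bigger boxes. A minimal sketch (all names here are illustrative, not from the book) of stateless workers behind a round-robin dispatcher:

```python
import itertools

# Illustrative sketch of horizontal scale out: stateless workers
# behind a round-robin dispatcher. Workers are plain callables here;
# in practice they would be separate instances behind a load balancer.
class WorkerPool:
    def __init__(self, workers):
        self.workers = list(workers)
        self._rr = itertools.cycle(self.workers)

    def scale_out(self, worker):
        """Add a node: capacity grows by adding instances, not bigger boxes."""
        self.workers.append(worker)
        self._rr = itertools.cycle(self.workers)

    def dispatch(self, request):
        # Any worker can serve any request because no session state is
        # pinned to a node -- the key assumption behind scale out.
        return next(self._rr)(request)

pool = WorkerPool([lambda r: f"w1:{r}", lambda r: f"w2:{r}"])
results = [pool.dispatch(i) for i in range(4)]
```

This is also where the network comes in: the load balancer needs enough parallel paths (and consistent policy) to reach however many workers exist at the moment.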
Wilder looks at several compute patterns, among which are the queue centric workflow and the map/reduce workflow. The queue centric pattern is probably the most common in data centers, dividing applications into front end and back end services, or rather the web and services tiers. The impact on the network is significant: inter-VLAN routing becomes important in separating these tiers. Policy implementation and enforcement along this divide is an area network architects need to pay attention to when designing towards cloud services.
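The mechanics of the queue centric pattern are simple to sketch: the web tier accepts a request, enqueues the work, and responds immediately, while a services-tier worker drains the queue on its own schedule. A toy illustration (an in-process queue stands in for what would be a durable message broker in a real deployment):

```python
from queue import Queue

# Queue centric workflow sketch: the web tier and services tier only
# communicate through the queue, which is what lets each tier scale
# (and fail) independently of the other.
work_queue = Queue()

def web_tier_handler(order):
    """Front end: accept the request, enqueue it, respond quickly."""
    work_queue.put(order)
    return "accepted"

def services_tier_worker():
    """Back end: pull work off the queue and process it."""
    processed = []
    while not work_queue.empty():
        processed.append(f"processed:{work_queue.get()}")
    return processed

web_tier_handler("order-1")
web_tier_handler("order-2")
done = services_tier_worker()
```

From the network side, the traffic crossing the tier boundary is queue traffic, which is exactly where the inter-VLAN routing and policy enforcement mentioned above apply.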
A separate chapter, dedicated to eventual consistency, is interesting as well as well explained. Network engineers need to consider eventual consistency when thinking through database synchronization support and business continuity. Another chapter is dedicated to database sharding; while this technique seems to be falling out of favor of late, it is still important for network engineers to understand. The weakest chapter here is the one on network latency — but then again, it’s probably a bit unfair to judge this chapter too harshly.
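The essence of sharding is worth seeing in miniature: each key is routed to one of several database shards by a stable hash, so the same key always lands on the same shard. A toy sketch (shard names are hypothetical):

```python
import hashlib

# Toy database sharding: route a record key to a shard by hashing.
# A real system would also handle resharding when shards are added,
# which is where much of the operational complexity lives.
SHARDS = ["shard-a", "shard-b", "shard-c"]

def shard_for(key: str) -> str:
    # A stable hash (not Python's randomized hash()) so the same key
    # maps to the same shard across processes and restarts.
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return SHARDS[digest % len(SHARDS)]
```

For the network engineer, the takeaway is that a sharded database turns one database flow into many parallel flows to different hosts, each carrying a slice of the keyspace.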
In each chapter, Wilder builds up an example application to illustrate the technology or technique he’s discussing. These asides are often useful, especially when attempting to evaluate the impact of the technology or pattern he’s discussing on network design.
Overall, Bill Wilder has done the networking community a great favor with this high-level explanation of the various ways in which cloud architectures are built.