
When an application needs more of something (CPU, memory, disk, and so on), you have two choices: you can scale up, or you can scale out.
In simple terms, scaling up means buying a bigger or faster box. Scaling out means replicating boxes (running three servers instead of two, for example) and distributing your application across them.
As we begin to reach the limits of Moore’s Law, scaling up becomes less of an option, so it makes sense to understand scale-out. But even without such limitations, there are plenty of good reasons to adopt scale-out architectures.
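In practice, scale-out often comes down to raising a replica count and letting an orchestrator spread identical instances across machines. As a minimal sketch (names and image are illustrative, not from the episode), a Kubernetes Deployment for a stateless web app might look like:

```yaml
# Hypothetical Deployment: three identical replicas of a stateless web app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # scale out by raising this number
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Scaling out is then a one-liner (`kubectl scale deployment web --replicas=5`) rather than a hardware purchase — which is one reason the episode's discussion keeps coming back to application design: this only works cleanly when instances are interchangeable.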
Today’s Datanauts episode delves into the nitty-gritty of how scale-out works for servers, networking, and storage; what scale-out means for application design and operations; and how vendors, open source projects, and cloud services are positioning themselves in a scale-out world.
Sponsor: Firebind
Firebind keeps your ISP honest. If a good Internet connection is critical to you or your customers’ business operations, trust Firebind to give you the visibility you need to keep business on track. Check out their website at firebind.com and request a free trial. Mention the Packet Pushers podcast and get a free month of service!
Show Links:
Datanauts 011: Understanding Leaf-Spine Networks – Packet Pushers
Scalable Microservices with Kubernetes – Udacity Course (free)
Kubernetes Bootcamp – GitHub
Red Hat Certificate of Expertise in Ansible Automation – Red Hat
The Complete Guide to the ELK Stack – Logz.io
Good discussion on the various aspects of scaling.
However, in this podcast I would have to say that the real star of the show was “the podcast editor”…
Regards,
Alan