As the Datanauts’ Dreadnaught Class Battle Cruiser surveys an uncharted asteroid belt, we detect signs of something … interesting.
It’s CoreOS, a lightweight, Unix-like operating system that aims to make deploying containers as simple as ordering Earl Grey tea (hot, of course) on your food replicator.
And, in a universe where container-focused infrastructure continues to gain momentum, there are so many different ways to slice and dice your applications. Fortunately, we have a special red shirt on board to help navigate the dangerous odds found when zooming through space.
Our guest is Alex Polvi, CEO at CoreOS. He’ll walk us through CoreOS and related projects such as Tectonic, a commercially supported version of the Kubernetes container management platform.
This episode of Datanauts is brought to you by ITProTV. Enhance your technology aptitude.
ITProTV is the resource to keep your IT skills up to date, with engaging and informative video tutorials. For a free 7-day trial and 30% off the life of your account, go to itpro.tv/datanauts and use the code DATANAUTS30.
Part 1 – An Introduction To CoreOS
- What were your and your team’s goals when building and sharing CoreOS?
- Let’s start by setting a baseline on what CoreOS is and isn’t
- Docker seems to play a key role in CoreOS, which suggests this isn’t the platform for legacy or “fully installed” applications?
- What sorts of shops are using CoreOS in either pre-prod or actual production?
- Any particular workloads that seem to be great candidates?
- How about bad candidates?
- CoreOS doesn’t have a package manager, which makes me go “What?!” – Can you help us understand this design decision?
Part 2 – Engaging With Operations
- Let’s dive into the scenario where an organization wants to get serious with containers and applications that run well in them.
- How do I embrace this as an engineer focused on hardware and operations?
- How does this introduction affect my career?
- What tools and skills should I focus on to get ready for this world of containers and lightweight OSes (like CoreOS) versus traditional infrastructure?
- What happens when I want to run a new version of an application on CoreOS?
- We’ve heard about the importance of load balancers in previous shows – which “spray” the traffic across active containers. Does CoreOS work differently?
- And what about when I want to run a new version of CoreOS itself?
- Where do the containers and data go?
- What needs to be persistent in a CoreOS environment?
- Interesting quote from the CoreOS page: “If you’re currently running in the cloud, running a single CoreOS cluster on two different clouds or cloud + bare metal is supported and encouraged.”
- How is this achieved?
- Does the application have to be written in such a way to support a “microservices” sort of architecture?
- How do operations folks manage this? It sounds hard.
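One way the “new version of an application” question above gets answered in this ecosystem is through Kubernetes Deployments, which roll containers forward incrementally while a load balancer sprays traffic across the healthy replicas. As a minimal sketch (the app name, image, and port here are hypothetical placeholders, not anything from the show), bumping the image tag triggers a rolling update:

```yaml
# Hypothetical Kubernetes Deployment illustrating a rolling update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app          # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # take down at most one old pod at a time
      maxSurge: 1         # allow one extra pod during the rollout
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          # Changing this tag (e.g. v1 -> v2) and re-applying the manifest
          # starts a rolling replacement of the running containers.
          image: example.com/demo-app:v2
          ports:
            - containerPort: 8080
```

The old and new versions briefly coexist during the rollout, which is part of why applications tend to work best here when they are stateless or at least tolerant of mixed versions.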
Part 3 – Learning And Ecosystem
- For those who are looking at deploying CoreOS into their environment, what are good places, resources, and people to turn to?
- What other projects are complementary to CoreOS – and do you have a “base set” of projects that you feel are required to really get started?
- “Walking the stack” … starting with GRUB
- Linux kernel
- rkt (pronounced “rocket”; architected to be secure and manageable)
- etcd (a distributed key-value store built by CoreOS, required by Kubernetes)
- flannel (overlay networking that lets containers talk to each other; IPs are assigned to applications instead of servers)
- Kubernetes (container management system)
- All together, somewhere around 400 MB to run a full distributed system 🙂
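To make the stack above a bit more concrete: CoreOS machines are typically configured at boot with a cloud-config file that starts etcd and flannel as systemd units. A minimal sketch might look like the following (the discovery token is a placeholder you would generate yourself, and the exact keys may vary by CoreOS release):

```yaml
#cloud-config
coreos:
  etcd2:
    # Placeholder: generate a real token at https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  flannel:
    # Bind flannel's overlay traffic to the machine's private interface
    interface: $private_ipv4
  units:
    - name: etcd2.service
      command: start
    - name: flanneld.service
      command: start
```

Every machine booted with the same discovery token joins the same etcd cluster, and flannel then uses etcd to coordinate the overlay network — which is how a handful of small components adds up to that roughly 400 MB distributed system.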