If you just followed tech media and analysts, you might think public cloud was the right choice for pretty much every organization and business need. But that’s not always the case.
The Datanauts are joined by Avi Freedman, founder of network analytics company Kentik, to talk about cloud limitations, and why he’s seeing big organizations move away from the public cloud.
Companies that make money by delivering packets to end users need sufficient control over their infrastructure to manage performance and deliver results. But public cloud limits the control you have.
At the same time, the cost and complexity of running your own infrastructure is starting to come down. These factors are causing some organizations to rethink public cloud adoption.
The Datanauts and Avi talk about the technological and business cases for DIY and examine shifting definitions of private cloud. They also discuss trends including OpenStack (more talk than uptake) and how network operations has been doing DevOps before it was a thing.
This episode of Datanauts is brought to you by ITProTV. Enhance your technology aptitude. ITProTV is the resource to keep your IT skills up to date, with engaging and informative video tutorials. For a free 7-day trial and 30% off the life of your account, go to itpro.tv/datanauts and use the code DATANAUTS30.
Part 1 – Flight From The Public Cloud
- Avi’s observation: “Flight from public clouds to people running their own infra for people running real money over the ‘net”
- When you say, “real money over the ‘net,” you mean people making big money from the services they provide via the Internet?
- Why were they trying public cloud to begin with?
- When moving back to their own infrastructure, is it now private cloud?
- The biggest objection I hear to public cloud is security (I can’t trust who holds my data). And yet, cloud pundits argue that public cloud can be MORE secure than the private data center. Thoughts or observations?
- Is public cloud analogous to renting a car vs. buying one? It seems like a smart move for short-lived workloads and/or workloads that need insane scale for some period of time, but not worth it for your long-lived, persistent workloads?
- Eventually the CapEx of ownership has to even out against ongoing cloud costs, especially as we get smarter at building and operating our own data centers (although some are obviously doing it better than others).
- Plus, my use case isn’t everyone else’s use case. I’ve found that data centers are almost entirely the same and yet utterly different. Same in their basic needs, different in how they are run and consumed.
Part 2 – Not OpenStack?
- Avi’s observation: “Little to no use of OpenStack or other ‘cloud’ vs just doing provisioning, monitoring, etc. on stacks of HW.”
- What do you mean, little to no use of OpenStack? But surely, it’s taking over the world…
- Okay, so if not the orchestrated data center, does that mean people have tried and rejected it? Or that the approach is undesirable?
- Do we think the traditional way of standing up hardware is good enough for most real-world IT operations?
- The drive for GIFEE (Google Infrastructure For Everyone Else) seems to stem from the idea that we couldn’t possibly deploy containers or VMs or unikernels fast enough to meet the demands of devs. So…not that many shops are really doing that?
- I keep hearing how OpenStack is broken badly, caught up in politics, and so on. Will it never be useful for a broad audience?
- Gareth Rushgrove did an interesting talk at Velocity (in Santa Clara) recently. It focused a lot on GIFEE in a “for” and “against” format.
- I really enjoyed the balance between for/against slides, especially the argument that you’re not Google and don’t have their problems.
- Some quotes to consider:
- “Goal of SRE team isn’t zero outages – SRE and product devs are incentive aligned to spend the error budget to get maximum feature velocity” (Source)
- “What if you’re operating an air traffic control system or a nuclear power station? Your goal is probably closer to zero outages.”
- As Charity Majors says, Context is king! (Twitter source)
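To make the error-budget idea in the quotes above concrete, here is a minimal sketch (my own illustration, not from the episode): an availability SLO implies a fixed allowance of downtime per window, which a team can deliberately "spend" on risky releases instead of chasing zero outages.

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime implied by an availability SLO
    over a rolling window (default 30 days)."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

# A 99.9% SLO over 30 days allows roughly 43 minutes of downtime;
# a 99.99% SLO allows only about 4.3 minutes -- which is why an air
# traffic control system and a photo-sharing app set very different goals.
print(round(error_budget_minutes(0.999), 1))
print(round(error_budget_minutes(0.9999), 1))
```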
Part 3 – Network Operations Has Been Doing DevOps For Decades
- Avi’s observation: “The world really does want to move to devops, but netops has been doing devops for decades (and SDN with XML push, commit/rollback, building tools, etc.)”
- I have felt that network automation tooling is far behind server automation tooling, mostly because networking is (a) a difficult discipline with (b) a large dependency tree and (c) a massive blast radius. Yet you say netops has been doing DevOps for decades.
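As one illustration of the commit/rollback workflow Avi alludes to, here is a Junos-style CLI session (hypothetical device and addresses; the commands themselves are standard Junos configuration-mode operations):

```
[edit]
user@router# set interfaces ge-0/0/0 unit 0 family inet address 192.0.2.1/24
user@router# show | compare
user@router# commit confirmed 10
user@router# commit
user@router# rollback 1
```

`commit confirmed 10` applies the change but automatically rolls it back after 10 minutes unless a plain `commit` confirms it, and `rollback 1` restores the previous configuration — transactional, versioned change management that network operators had long before "DevOps" was a term.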
- If the world wants to go to DevOps, that seems to contradict the “no one is using OpenStack” observation. Why the gap between desire and reality?
- How do we marry networking into the rest of data center operations, so that networking stops feeling like this separate thing? Or is that a stupid idea?
- In your opinion, have we gone too far in trying to break down silos?